{"note":"OpenAPI conversion -- returning structured metadata","name":"deepseek-chat","description":"DeepSeek Chat Completion API","version":"1.0.0","base_url":"https://api.deepseek.com","endpoints":1,"raw":"@lap v0.3\n# Machine-readable API spec. Each @endpoint block is one API call.\n@api DeepSeek Chat Completion API\n@base https://api.deepseek.com\n@version 1.0.0\n@endpoints 1\n@toc chat(1)\n\n@endpoint POST /chat/completions\n@desc Create Chat Completion\n@required {messages: [map{content!: str, role!: str}], model: str(deepseek-chat/deepseek-reasoner) # ID of the model to use.}\n@optional {frequency_penalty: num=0 # Positive values penalize new tokens based on their frequency in the text, reducing repetition., max_tokens: int=4096 # The maximum number of tokens to generate., presence_penalty: num=0 # Positive values penalize new tokens that appear in the text, encouraging discussion of new topics., response_format: map{type: str} # Format of the response., stop: map # Sequence where the model stops generating tokens., stream: bool # Whether to stream responses as they are generated., stream_options: map # Options for streaming responses., temperature: num=1 # Controls randomness in generation (higher values = more random)., top_p: num=1 # Nucleus sampling parameter. Tokens are selected from the top_p probability mass., tools: [map] # Tools available for the model to use., tool_choice: map # Configuration for tool selection., logprobs: bool # Whether to return log probabilities of output tokens., top_logprobs: int # Number of most likely tokens to return with log probabilities. Requires logprobs to be true.}\n@returns(200) {id: str, choices: [map], created: int(int64), model: str, system_fingerprint: str, object: str, usage: map} # Successful response.\n\n@end\n"}
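The spec above describes one endpoint: POST /chat/completions, with `messages` and `model` required and the rest optional. A minimal sketch of building and sending a conforming request follows; it uses only the Python standard library. The helper names (`build_chat_request`, `send`) are illustrative, not part of the API, and the bearer-token header is the standard authentication scheme for this API — an actual call requires a valid key.

```python
import json
import urllib.request

BASE_URL = "https://api.deepseek.com"  # @base from the spec


def build_chat_request(messages, model="deepseek-chat", **options):
    """Assemble a /chat/completions payload per the @required/@optional fields."""
    for m in messages:
        # @required: each message map needs both 'role' and 'content'
        if "role" not in m or "content" not in m:
            raise ValueError("each message needs 'role' and 'content'")
    if model not in ("deepseek-chat", "deepseek-reasoner"):
        raise ValueError("model must be deepseek-chat or deepseek-reasoner")
    payload = {"messages": messages, "model": model}
    payload.update(options)  # e.g. temperature, max_tokens, stream, top_p
    return payload


def send(payload, api_key):
    """POST the payload; on success the 200 body carries id, choices,
    created, model, object, usage (per @returns)."""
    req = urllib.request.Request(
        BASE_URL + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Build (but do not send) a request with a couple of optional fields.
payload = build_chat_request(
    [{"role": "user", "content": "Hello"}],
    temperature=1,
    max_tokens=4096,
)
```

The builder validates only what the spec marks required; any of the @optional keys can be passed through `**options` unchanged, since the wire format is flat JSON.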