{"files":{"SKILL.md":"---\nname: deepseek-chat-completion-api\ndescription: \"DeepSeek Chat Completion API skill. Use when creating chat completions with the DeepSeek API. Covers 1 endpoint.\"\nversion: 1.0.0\ngenerator: lapsh\n---\n\n# DeepSeek Chat Completion API\nAPI version: 1.0.0\n\n## Auth\nBearer token authentication. Send your DeepSeek API key in the `Authorization: Bearer <DEEPSEEK_API_KEY>` header on every request.\n\n## Base URL\nhttps://api.deepseek.com\n\n## Setup\n1. Create an API key at https://platform.deepseek.com\n2. Export it as DEEPSEEK_API_KEY and verify API access with a test request\n3. POST /chat/completions -- create first completion\n\n## Endpoints\n1 endpoint across 1 group. See references/api-spec.lap for full details.\n\n### Chat\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /chat/completions | Create Chat Completion |\n\n## Common Questions\nMatch user requests to endpoints in references/api-spec.lap. Key patterns:\n- \"Create a completion?\" -> POST /chat/completions\n\n## Response Tips\n- Check response schemas in references/api-spec.lap for field details\n- POST /chat/completions returns a completion object (id, choices, model, usage) on success\n\n## References\n- Full spec: See references/api-spec.lap for complete endpoint details, parameter tables, and response schemas\n\n> Generated from the official API spec by [LAP](https://lap.sh)\n","references/api-spec.lap":"@lap v0.3\n# Machine-readable API spec. Each @endpoint block is one API call.\n@api DeepSeek Chat Completion API\n@base https://api.deepseek.com\n@version 1.0.0\n@endpoints 1\n@toc chat(1)\n\n@endpoint POST /chat/completions\n@desc Create Chat Completion\n@required {messages: [map{content!: str, role!: str}], model: str(deepseek-chat/deepseek-reasoner) # ID of the model to use.}\n@optional {frequency_penalty: num=0 # Positive values penalize new tokens based on their frequency in the text, reducing repetition., max_tokens: int=4096 # The maximum number of tokens to generate., presence_penalty: num=0 # Positive values penalize new tokens that appear in the text, encouraging discussion of new topics., response_format: map{type: str} # Format of the response., stop: map # Sequence where the model stops generating tokens., stream: bool # Whether to stream responses as they are generated., stream_options: map # Options for streaming responses., temperature: num=1 # Controls randomness in generation (higher values = more random)., top_p: num=1 # Nucleus sampling parameter. Tokens are selected from the top_p probability mass., tools: [map] # Tools available for the model to use., tool_choice: map # Configuration for tool selection., logprobs: bool # Whether to return log probabilities of output tokens., top_logprobs: int # Number of most likely tokens to return with log probabilities. Requires logprobs to be true.}\n@returns(200) {id: str, choices: [map], created: int(int64), model: str, system_fingerprint: str, object: str, usage: map} # Successful response.\n\n@end\n"}}