{"files":{"SKILL.md":"---\nname: mistral-ai-api\ndescription: \"Mistral AI API skill. Use when working with Mistral AI for models, conversations, agents. Covers 154 endpoints.\"\nversion: 1.0.0\ngenerator: lapsh\n---\n\n# Mistral AI API\nAPI version: 1.0.0\n\n## Auth\nBearer token, sent as `Authorization: Bearer <API_KEY>`\n\n## Base URL\nhttps://api.mistral.ai\n\n## Setup\n1. Set Authorization header with Bearer token\n2. GET /v1/models -- list models\n3. POST /v1/conversations -- create first conversation\n\n## Endpoints\n154 endpoints across 17 groups. See references/api-spec.lap for full details.\n\n### Models\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/models | List Models |\n| GET | /v1/models/{model_id} | Retrieve Model |\n| DELETE | /v1/models/{model_id} | Delete Model |\n\n### Conversations\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/conversations | Create a conversation and append entries to it. |\n| GET | /v1/conversations | List all created conversations. |\n| GET | /v1/conversations/{conversation_id} | Retrieve information about a conversation. |\n| DELETE | /v1/conversations/{conversation_id} | Delete a conversation. |\n| POST | /v1/conversations/{conversation_id} | Append new entries to an existing conversation. |\n| GET | /v1/conversations/{conversation_id}/history | Retrieve all entries in a conversation. |\n| GET | /v1/conversations/{conversation_id}/messages | Retrieve all messages in a conversation. |\n| POST | /v1/conversations/{conversation_id}/restart | Restart a conversation starting from a given entry. |\n| POST | /v1/conversations/{conversation_id}#stream | Append new entries to an existing conversation (streaming). |\n| POST | /v1/conversations/{conversation_id}/restart#stream | Restart a conversation starting from a given entry (streaming). |\n\n### Agents\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/agents | Create an agent that can be used within a conversation. 
|\n| GET | /v1/agents | List agent entities. |\n| GET | /v1/agents/{agent_id} | Retrieve an agent entity. |\n| PATCH | /v1/agents/{agent_id} | Update an agent entity. |\n| DELETE | /v1/agents/{agent_id} | Delete an agent entity. |\n| PATCH | /v1/agents/{agent_id}/version | Update an agent version. |\n| GET | /v1/agents/{agent_id}/versions | List all versions of an agent. |\n| GET | /v1/agents/{agent_id}/versions/{version} | Retrieve a specific version of an agent. |\n| PUT | /v1/agents/{agent_id}/aliases | Create or update an agent version alias. |\n| GET | /v1/agents/{agent_id}/aliases | List all aliases for an agent. |\n| DELETE | /v1/agents/{agent_id}/aliases | Delete an agent version alias. |\n| POST | /v1/agents/completions | Agents Completion |\n\n### Conversations#stream\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/conversations#stream | Create a conversation and append entries to it (streaming). |\n\n### Files\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/files | Upload File |\n| GET | /v1/files | List Files |\n| GET | /v1/files/{file_id} | Retrieve File |\n| DELETE | /v1/files/{file_id} | Delete File |\n| GET | /v1/files/{file_id}/content | Download File |\n| GET | /v1/files/{file_id}/url | Get Signed Url |\n\n### Fine_tuning\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/fine_tuning/jobs | Get Fine Tuning Jobs |\n| POST | /v1/fine_tuning/jobs | Create Fine Tuning Job |\n| GET | /v1/fine_tuning/jobs/{job_id} | Get Fine Tuning Job |\n| POST | /v1/fine_tuning/jobs/{job_id}/cancel | Cancel Fine Tuning Job |\n| POST | /v1/fine_tuning/jobs/{job_id}/start | Start Fine Tuning Job |\n| PATCH | /v1/fine_tuning/models/{model_id} | Update Fine Tuned Model |\n| POST | /v1/fine_tuning/models/{model_id}/archive | Archive Fine Tuned Model |\n| DELETE | /v1/fine_tuning/models/{model_id}/archive | Unarchive Fine Tuned Model |\n\n### Batch\n| Method | Path | Description 
|\n|--------|------|-------------|\n| GET | /v1/batch/jobs | Get Batch Jobs |\n| POST | /v1/batch/jobs | Create Batch Job |\n| GET | /v1/batch/jobs/{job_id} | Get Batch Job |\n| POST | /v1/batch/jobs/{job_id}/cancel | Cancel Batch Job |\n\n### Chat\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/chat/completions | Chat Completion |\n| POST | /v1/chat/moderations | Chat Moderations |\n| POST | /v1/chat/classifications | Chat Classifications |\n\n### Fim\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/fim/completions | Fim Completion |\n\n### Embeddings\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/embeddings | Embeddings |\n\n### Moderations\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/moderations | Moderations |\n\n### Ocr\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/ocr | OCR |\n\n### Classifications\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/classifications | Classifications |\n\n### Audio\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/audio/transcriptions | Create Transcription |\n| POST | /v1/audio/transcriptions#stream | Create Streaming Transcription (SSE) |\n| POST | /v1/audio/speech | Speech |\n| GET | /v1/audio/voices | List all voices |\n| POST | /v1/audio/voices | Create a new voice |\n| GET | /v1/audio/voices/{voice_id} | Get voice details |\n| PATCH | /v1/audio/voices/{voice_id} | Update voice metadata |\n| DELETE | /v1/audio/voices/{voice_id} | Delete a custom voice |\n| GET | /v1/audio/voices/{voice_id}/sample | Get voice sample audio |\n\n### Libraries\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/libraries | List all libraries you have access to. |\n| POST | /v1/libraries | Create a new Library. |\n| GET | /v1/libraries/{library_id} | Detailed information about a specific Library. 
|\n| DELETE | /v1/libraries/{library_id} | Delete a library and all of its documents. |\n| PUT | /v1/libraries/{library_id} | Update a library. |\n| GET | /v1/libraries/{library_id}/documents | List documents in a given library. |\n| POST | /v1/libraries/{library_id}/documents | Upload a new document. |\n| GET | /v1/libraries/{library_id}/documents/{document_id} | Retrieve the metadata of a specific document. |\n| PUT | /v1/libraries/{library_id}/documents/{document_id} | Update the metadata of a specific document. |\n| DELETE | /v1/libraries/{library_id}/documents/{document_id} | Delete a document. |\n| GET | /v1/libraries/{library_id}/documents/{document_id}/text_content | Retrieve the text content of a specific document. |\n| GET | /v1/libraries/{library_id}/documents/{document_id}/status | Retrieve the processing status of a specific document. |\n| GET | /v1/libraries/{library_id}/documents/{document_id}/signed-url | Retrieve the signed URL of a specific document. |\n| GET | /v1/libraries/{library_id}/documents/{document_id}/extracted-text-signed-url | Retrieve the signed URL of text extracted from a given document. |\n| POST | /v1/libraries/{library_id}/documents/{document_id}/reprocess | Reprocess a document. |\n| GET | /v1/libraries/{library_id}/share | List all access levels for this library. |\n| PUT | /v1/libraries/{library_id}/share | Create or update an access level. |\n| DELETE | /v1/libraries/{library_id}/share | Delete an access level. 
|\n\n### Observability\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/observability/chat-completion-events/search | Get Chat Completion Events |\n| POST | /v1/observability/chat-completion-events/search-ids | Alternative to /search that returns only event IDs and can return many at once |\n| GET | /v1/observability/chat-completion-events/{event_id} | Get Chat Completion Event |\n| GET | /v1/observability/chat-completion-events/{event_id}/similar-events | Get Similar Chat Completion Events |\n| GET | /v1/observability/chat-completion-fields | Get Chat Completion Fields |\n| GET | /v1/observability/chat-completion-fields/{field_name}/options | Get Chat Completion Field Options |\n| POST | /v1/observability/chat-completion-fields/{field_name}/options-counts | Get Chat Completion Field Options Counts |\n| POST | /v1/observability/chat-completion-events/{event_id}/live-judging | Run Judge on an event based on the given options |\n| POST | /v1/observability/judges | Create a new judge |\n| GET | /v1/observability/judges | Get judges with optional filtering and search |\n| GET | /v1/observability/judges/{judge_id} | Get judge by id |\n| DELETE | /v1/observability/judges/{judge_id} | Delete a judge |\n| PUT | /v1/observability/judges/{judge_id} | Update a judge |\n| POST | /v1/observability/judges/{judge_id}/live-judging | Run a saved judge on a conversation |\n| POST | /v1/observability/campaigns | Create and start a new campaign |\n| GET | /v1/observability/campaigns | Get all campaigns |\n| GET | /v1/observability/campaigns/{campaign_id} | Get campaign by id |\n| DELETE | /v1/observability/campaigns/{campaign_id} | Delete a campaign |\n| GET | /v1/observability/campaigns/{campaign_id}/status | Get campaign status by campaign id |\n| GET | /v1/observability/campaigns/{campaign_id}/selected-events | Get event ids that were selected by the given campaign |\n| POST | /v1/observability/datasets | Create a new empty dataset |\n| GET | 
/v1/observability/datasets | List existing datasets |\n| GET | /v1/observability/datasets/{dataset_id} | Get dataset by id |\n| DELETE | /v1/observability/datasets/{dataset_id} | Delete a dataset |\n| PATCH | /v1/observability/datasets/{dataset_id} | Patch dataset |\n| GET | /v1/observability/datasets/{dataset_id}/records | List existing records in the dataset |\n| POST | /v1/observability/datasets/{dataset_id}/records | Add a conversation to the dataset |\n| POST | /v1/observability/datasets/{dataset_id}/imports/from-campaign | Populate the dataset with a campaign |\n| POST | /v1/observability/datasets/{dataset_id}/imports/from-explorer | Populate the dataset with samples from the explorer |\n| POST | /v1/observability/datasets/{dataset_id}/imports/from-file | Populate the dataset with samples from an uploaded file |\n| POST | /v1/observability/datasets/{dataset_id}/imports/from-playground | Populate the dataset with samples from the playground |\n| POST | /v1/observability/datasets/{dataset_id}/imports/from-dataset | Populate the dataset with samples from another dataset |\n| GET | /v1/observability/datasets/{dataset_id}/exports/to-jsonl | Export to the Files API and retrieve presigned URL to download the resulting JSONL file |\n| GET | /v1/observability/datasets/{dataset_id}/tasks/{task_id} | Get status of a dataset import task |\n| GET | /v1/observability/datasets/{dataset_id}/tasks | List import tasks for the given dataset |\n| GET | /v1/observability/dataset-records/{dataset_record_id} | Get the content of a given conversation from a dataset |\n| DELETE | /v1/observability/dataset-records/{dataset_record_id} | Delete a record from a dataset |\n| POST | /v1/observability/dataset-records/bulk-delete | Delete multiple records from datasets |\n| POST | /v1/observability/dataset-records/{dataset_record_id}/live-judging | Run Judge on a dataset record based on the given options |\n| PUT | /v1/observability/dataset-records/{dataset_record_id}/payload | Update a 
dataset record conversation payload |\n| PUT | /v1/observability/dataset-records/{dataset_record_id}/properties | Update conversation properties |\n\n### Workflows\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/workflows/executions/{execution_id} | Get Workflow Execution |\n| GET | /v1/workflows/executions/{execution_id}/history | Get Workflow Execution History |\n| POST | /v1/workflows/executions/{execution_id}/signals | Signal Workflow Execution |\n| POST | /v1/workflows/executions/{execution_id}/queries | Query Workflow Execution |\n| POST | /v1/workflows/executions/{execution_id}/terminate | Terminate Workflow Execution |\n| POST | /v1/workflows/executions/terminate | Batch Terminate Workflow Executions |\n| POST | /v1/workflows/executions/{execution_id}/cancel | Cancel Workflow Execution |\n| POST | /v1/workflows/executions/cancel | Batch Cancel Workflow Executions |\n| POST | /v1/workflows/executions/{execution_id}/reset | Reset Workflow |\n| POST | /v1/workflows/executions/{execution_id}/updates | Update Workflow Execution |\n| GET | /v1/workflows/executions/{execution_id}/trace/otel | Get Workflow Execution Trace Otel |\n| GET | /v1/workflows/executions/{execution_id}/trace/summary | Get Workflow Execution Trace Summary |\n| GET | /v1/workflows/executions/{execution_id}/trace/events | Get Workflow Execution Trace Events |\n| GET | /v1/workflows/executions/{execution_id}/stream | Stream |\n| GET | /v1/workflows/{workflow_name}/metrics | Get Workflow Metrics |\n| GET | /v1/workflows/runs | List Runs |\n| GET | /v1/workflows/runs/{run_id} | Get Run |\n| GET | /v1/workflows/runs/{run_id}/history | Get Run History |\n| GET | /v1/workflows/schedules | Get Schedules |\n| POST | /v1/workflows/schedules | Schedule Workflow |\n| DELETE | /v1/workflows/schedules/{schedule_id} | Unschedule Workflow |\n| GET | /v1/workflows/workers/whoami | Get Worker Info |\n| GET | /v1/workflows/events/stream | Get Stream Events |\n| GET | 
/v1/workflows/events/list | Get Workflow Events |\n| GET | /v1/workflows/deployments | List Deployments |\n| GET | /v1/workflows/deployments/{name} | Get Deployment |\n| GET | /v1/workflows/registrations | Get Workflow Registrations |\n| POST | /v1/workflows/{workflow_identifier}/execute | Execute Workflow |\n| POST | /v1/workflows/registrations/{workflow_registration_id}/execute | Execute Workflow Registration |\n| GET | /v1/workflows/{workflow_identifier} | Get Workflow |\n| PUT | /v1/workflows/{workflow_identifier} | Update Workflow |\n| GET | /v1/workflows/registrations/{workflow_registration_id} | Get Workflow Registration |\n| PUT | /v1/workflows/{workflow_identifier}/archive | Archive Workflow |\n| PUT | /v1/workflows/{workflow_identifier}/unarchive | Unarchive Workflow |\n\n## Common Questions\nMatch user requests to endpoints in references/api-spec.lap. Key patterns:\n- \"List all models?\" -> GET /v1/models\n- \"Get model details?\" -> GET /v1/models/{model_id}\n- \"Delete a model?\" -> DELETE /v1/models/{model_id}\n- \"Create a conversation?\" -> POST /v1/conversations\n- \"List all conversations?\" -> GET /v1/conversations\n- \"Get conversation details?\" -> GET /v1/conversations/{conversation_id}\n- \"Delete a conversation?\" -> DELETE /v1/conversations/{conversation_id}\n- \"Get conversation history?\" -> GET /v1/conversations/{conversation_id}/history\n- \"List conversation messages?\" -> GET /v1/conversations/{conversation_id}/messages\n- \"Restart a conversation?\" -> POST /v1/conversations/{conversation_id}/restart\n- \"Create an agent?\" -> POST /v1/agents\n- \"Search agents?\" -> GET /v1/agents\n- \"Get agent details?\" -> GET /v1/agents/{agent_id}\n- \"Partially update an agent?\" -> PATCH /v1/agents/{agent_id}\n- \"Delete an agent?\" -> DELETE /v1/agents/{agent_id}\n- \"List all versions?\" -> GET /v1/agents/{agent_id}/versions\n- \"Get version details?\" -> GET /v1/agents/{agent_id}/versions/{version}\n- \"List all aliases?\" -> GET /v1/agents/{agent_id}/aliases\n- 
\"Create a conversation (streaming)?\" -> POST /v1/conversations#stream\n- \"Restart a conversation (streaming)?\" -> POST /v1/conversations/{conversation_id}/restart#stream\n- \"Upload a file?\" -> POST /v1/files\n- \"Search files?\" -> GET /v1/files\n- \"Get file details?\" -> GET /v1/files/{file_id}\n- \"Delete a file?\" -> DELETE /v1/files/{file_id}\n- \"Download a file?\" -> GET /v1/files/{file_id}/content\n- \"Get a signed URL for a file?\" -> GET /v1/files/{file_id}/url\n- \"List fine-tuning jobs?\" -> GET /v1/fine_tuning/jobs\n- \"Create a fine-tuning job?\" -> POST /v1/fine_tuning/jobs\n- \"Get job details?\" -> GET /v1/fine_tuning/jobs/{job_id}\n- \"Cancel a fine-tuning job?\" -> POST /v1/fine_tuning/jobs/{job_id}/cancel\n- \"Start a fine-tuning job?\" -> POST /v1/fine_tuning/jobs/{job_id}/start\n- \"Update a fine-tuned model?\" -> PATCH /v1/fine_tuning/models/{model_id}\n- \"Archive a fine-tuned model?\" -> POST /v1/fine_tuning/models/{model_id}/archive\n- \"Create a chat completion?\" -> POST /v1/chat/completions\n- \"Create an embedding?\" -> POST /v1/embeddings\n- \"Create a moderation?\" -> POST /v1/moderations\n- \"Run OCR?\" -> POST /v1/ocr\n- \"Create a classification?\" -> POST /v1/classifications\n- \"Create a transcription?\" -> POST /v1/audio/transcriptions\n- \"Create a streaming transcription?\" -> POST /v1/audio/transcriptions#stream\n- \"Generate speech?\" -> POST /v1/audio/speech\n- \"List all voices?\" -> GET /v1/audio/voices\n- \"Create a voice?\" -> POST /v1/audio/voices\n- \"Get voice details?\" -> GET /v1/audio/voices/{voice_id}\n- \"Partially update a voice?\" -> PATCH /v1/audio/voices/{voice_id}\n- \"Delete a voice?\" -> DELETE /v1/audio/voices/{voice_id}\n- \"Get a voice sample?\" -> GET /v1/audio/voices/{voice_id}/sample\n- \"List all libraries?\" -> GET /v1/libraries\n- \"Create a library?\" -> POST /v1/libraries\n- \"Get library details?\" -> GET /v1/libraries/{library_id}\n- \"Delete a library?\" -> DELETE /v1/libraries/{library_id}\n- \"Update a library?\" -> PUT /v1/libraries/{library_id}\n- \"Search 
documents?\" -> GET /v1/libraries/{library_id}/documents\n- \"Upload a document?\" -> POST /v1/libraries/{library_id}/documents\n- \"Get document details?\" -> GET /v1/libraries/{library_id}/documents/{document_id}\n- \"Update a document?\" -> PUT /v1/libraries/{library_id}/documents/{document_id}\n- \"Delete a document?\" -> DELETE /v1/libraries/{library_id}/documents/{document_id}\n- \"Get a document's text content?\" -> GET /v1/libraries/{library_id}/documents/{document_id}/text_content\n- \"Get a document's processing status?\" -> GET /v1/libraries/{library_id}/documents/{document_id}/status\n- \"Get a document's signed URL?\" -> GET /v1/libraries/{library_id}/documents/{document_id}/signed-url\n- \"Get the extracted-text signed URL?\" -> GET /v1/libraries/{library_id}/documents/{document_id}/extracted-text-signed-url\n- \"Reprocess a document?\" -> POST /v1/libraries/{library_id}/documents/{document_id}/reprocess\n- \"List library access levels?\" -> GET /v1/libraries/{library_id}/share\n- \"Search chat completion events?\" -> POST /v1/observability/chat-completion-events/search\n- \"Search for event IDs?\" -> POST /v1/observability/chat-completion-events/search-ids\n- \"Get chat-completion-event details?\" -> GET /v1/observability/chat-completion-events/{event_id}\n- \"Find similar events?\" -> GET /v1/observability/chat-completion-events/{event_id}/similar-events\n- \"List chat completion fields?\" -> GET /v1/observability/chat-completion-fields\n- \"Get field options?\" -> GET /v1/observability/chat-completion-fields/{field_name}/options\n- \"Get field option counts?\" -> POST /v1/observability/chat-completion-fields/{field_name}/options-counts\n- \"Run a judge on an event?\" -> POST /v1/observability/chat-completion-events/{event_id}/live-judging\n- \"Create a judge?\" -> POST /v1/observability/judges\n- \"Search judges?\" -> GET /v1/observability/judges\n- \"Get judge details?\" -> GET /v1/observability/judges/{judge_id}\n- \"Delete a judge?\" -> DELETE /v1/observability/judges/{judge_id}\n- \"Update a judge?\" -> 
PUT /v1/observability/judges/{judge_id}\n- \"Create a campaign?\" -> POST /v1/observability/campaigns\n- \"Search campaigns?\" -> GET /v1/observability/campaigns\n- \"Get campaign details?\" -> GET /v1/observability/campaigns/{campaign_id}\n- \"Delete a campaign?\" -> DELETE /v1/observability/campaigns/{campaign_id}\n- \"Get a campaign's selected events?\" -> GET /v1/observability/campaigns/{campaign_id}/selected-events\n- \"Create a dataset?\" -> POST /v1/observability/datasets\n- \"Search datasets?\" -> GET /v1/observability/datasets\n- \"Get dataset details?\" -> GET /v1/observability/datasets/{dataset_id}\n- \"Delete a dataset?\" -> DELETE /v1/observability/datasets/{dataset_id}\n- \"Partially update a dataset?\" -> PATCH /v1/observability/datasets/{dataset_id}\n- \"List dataset records?\" -> GET /v1/observability/datasets/{dataset_id}/records\n- \"Add a record to a dataset?\" -> POST /v1/observability/datasets/{dataset_id}/records\n- \"Import from a campaign?\" -> POST /v1/observability/datasets/{dataset_id}/imports/from-campaign\n- \"Import from the explorer?\" -> POST /v1/observability/datasets/{dataset_id}/imports/from-explorer\n- \"Import from a file?\" -> POST /v1/observability/datasets/{dataset_id}/imports/from-file\n- \"Import from the playground?\" -> POST /v1/observability/datasets/{dataset_id}/imports/from-playground\n- \"Import from another dataset?\" -> POST /v1/observability/datasets/{dataset_id}/imports/from-dataset\n- \"Export a dataset to JSONL?\" -> GET /v1/observability/datasets/{dataset_id}/exports/to-jsonl\n- \"Get task details?\" -> GET /v1/observability/datasets/{dataset_id}/tasks/{task_id}\n- \"List dataset import tasks?\" -> GET /v1/observability/datasets/{dataset_id}/tasks\n- \"Get dataset-record details?\" -> GET /v1/observability/dataset-records/{dataset_record_id}\n- \"Delete a dataset-record?\" -> DELETE /v1/observability/dataset-records/{dataset_record_id}\n- \"Bulk-delete dataset records?\" -> POST /v1/observability/dataset-records/bulk-delete\n- \"Get execution details?\" -> GET 
/v1/workflows/executions/{execution_id}\n- \"Signal a workflow execution?\" -> POST /v1/workflows/executions/{execution_id}/signals\n- \"Query a workflow execution?\" -> POST /v1/workflows/executions/{execution_id}/queries\n- \"Terminate a workflow execution?\" -> POST /v1/workflows/executions/{execution_id}/terminate\n- \"Reset a workflow?\" -> POST /v1/workflows/executions/{execution_id}/reset\n- \"Update a workflow execution?\" -> POST /v1/workflows/executions/{execution_id}/updates\n- \"Get an OTel trace?\" -> GET /v1/workflows/executions/{execution_id}/trace/otel\n- \"Get a trace summary?\" -> GET /v1/workflows/executions/{execution_id}/trace/summary\n- \"Get trace events?\" -> GET /v1/workflows/executions/{execution_id}/trace/events\n- \"Stream a workflow execution?\" -> GET /v1/workflows/executions/{execution_id}/stream\n- \"Get workflow metrics?\" -> GET /v1/workflows/{workflow_name}/metrics\n- \"Search runs?\" -> GET /v1/workflows/runs\n- \"Get run details?\" -> GET /v1/workflows/runs/{run_id}\n- \"List all schedules?\" -> GET /v1/workflows/schedules\n- \"Schedule a workflow?\" -> POST /v1/workflows/schedules\n- \"Delete a schedule?\" -> DELETE /v1/workflows/schedules/{schedule_id}\n- \"Get worker info?\" -> GET /v1/workflows/workers/whoami\n- \"List workflow events?\" -> GET /v1/workflows/events/list\n- \"List all deployments?\" -> GET /v1/workflows/deployments\n- \"Get deployment details?\" -> GET /v1/workflows/deployments/{name}\n- \"List all registrations?\" -> GET /v1/workflows/registrations\n- \"Execute a workflow?\" -> POST /v1/workflows/{workflow_identifier}/execute\n- \"Get workflow details?\" -> GET /v1/workflows/{workflow_identifier}\n- \"Update a workflow?\" -> PUT /v1/workflows/{workflow_identifier}\n- \"Get registration details?\" -> GET /v1/workflows/registrations/{workflow_registration_id}\n- \"How to authenticate?\" -> See Auth section above\n\n## Response Tips\n- Check response schemas in references/api-spec.lap for field details\n- Paginated endpoints accept page and page_size parameters (see the spec for each endpoint)\n- Create/update endpoints 
return the modified resource on success\n- Error responses include status codes and descriptions in the spec\n\n## References\n- Full spec: See references/api-spec.lap for complete endpoint details, parameter tables, and response schemas\n\n> Generated from the official API spec by [LAP](https://lap.sh)\n","references/api-spec.lap":"@lap v0.3\n# Machine-readable API spec. Each @endpoint block is one API call.\n@api Mistral AI API\n@base https://api.mistral.ai\n@version 1.0.0\n@auth Bearer bearer\n@endpoints 71\n@hint download_for_search\n@toc models(3), conversations(10), agents(11), conversations#stream(1), files(6), fine_tuning(8), batch(4), chat(3), fim(1), embeddings(1), moderations(1), ocr(1), classifications(1), audio(2), libraries(18)\n\n@group models\n@endpoint GET /v1/models\n@desc List Models\n@returns(200) {object: str, data: [any]} # Successful Response\n\n@endpoint GET /v1/models/{model_id}\n@desc Retrieve Model\n@required {model_id: str # The ID of the model to retrieve.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/models/{model_id}\n@desc Delete Model\n@required {model_id: str # The ID of the model to delete.}\n@returns(200) {id: str, object: str, deleted: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group conversations\n@endpoint POST /v1/conversations\n@desc Create a conversation and append entries to it.\n@returns(200) {object: str, conversation_id: str, outputs: [any], usage: map{prompt_tokens: int, completion_tokens: int, total_tokens: int, connector_tokens: any, connectors: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/conversations\n@desc List all created conversations.\n@optional {page: int=0, page_size: int=100, metadata: any}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/conversations/{conversation_id}\n@desc Retrieve information about a conversation.\n@required {conversation_id: str # ID 
of the conversation from which we are fetching metadata.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/conversations/{conversation_id}\n@desc Delete a conversation.\n@required {conversation_id: str # ID of the conversation from which we are fetching metadata.}\n@returns(204) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/conversations/{conversation_id}\n@desc Append new entries to an existing conversation.\n@required {conversation_id: str # ID of the conversation to which we append entries.}\n@returns(200) {object: str, conversation_id: str, outputs: [any], usage: map{prompt_tokens: int, completion_tokens: int, total_tokens: int, connector_tokens: any, connectors: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/conversations/{conversation_id}/history\n@desc Retrieve all entries in a conversation.\n@required {conversation_id: str # ID of the conversation from which we are fetching entries.}\n@returns(200) {object: str, conversation_id: str, entries: [any]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/conversations/{conversation_id}/messages\n@desc Retrieve all messages in a conversation.\n@required {conversation_id: str # ID of the conversation from which we are fetching messages.}\n@returns(200) {object: str, conversation_id: str, messages: [any]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/conversations/{conversation_id}/restart\n@desc Restart a conversation starting from a given entry.\n@required {conversation_id: str # ID of the original conversation which is being restarted.}\n@returns(200) {object: str, conversation_id: str, outputs: [any], usage: map{prompt_tokens: int, completion_tokens: int, total_tokens: int, connector_tokens: any, connectors: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group agents\n@endpoint POST /v1/agents\n@desc Create an agent that 
can be used within a conversation.\n@required {model: str, name: str}\n@optional {instructions: any # Instruction prompt the model will follow during the conversation., tools: [any] # List of tools which are available to the model during the conversation., completion_args: map{stop: any, presence_penalty: any, frequency_penalty: any, temperature: any, top_p: any, max_tokens: any, random_seed: any, prediction: any, response_format: any, tool_choice: str} # White-listed arguments from the completion API, description: any, handoffs: any, metadata: any}\n@returns(200) {instructions: any, tools: [any], completion_args: map{stop: any, presence_penalty: any, frequency_penalty: any, temperature: any, top_p: any, max_tokens: any, random_seed: any, prediction: any, response_format: any, tool_choice: str}, model: str, name: str, description: any, handoffs: any, metadata: any, object: str, id: str, version: int, versions: [int], created_at: str(date-time), updated_at: str(date-time), deployment_chat: bool, source: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/agents\n@desc List agent entities.\n@optional {page: int=0 # Page number (0-indexed), page_size: int=20 # Number of agents per page, deployment_chat: any, sources: any, name: any, id: any, metadata: any}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/agents/{agent_id}\n@desc Retrieve an agent entity.\n@required {agent_id: str}\n@optional {agent_version: any}\n@returns(200) {instructions: any, tools: [any], completion_args: map{stop: any, presence_penalty: any, frequency_penalty: any, temperature: any, top_p: any, max_tokens: any, random_seed: any, prediction: any, response_format: any, tool_choice: str}, model: str, name: str, description: any, handoffs: any, metadata: any, object: str, id: str, version: int, versions: [int], created_at: str(date-time), updated_at: str(date-time), deployment_chat: bool, source: str} # Successful Response\n@errors 
{422: Validation Error}\n\n@endpoint PATCH /v1/agents/{agent_id}\n@desc Update an agent entity.\n@required {agent_id: str}\n@optional {instructions: any # Instruction prompt the model will follow during the conversation., tools: [any] # List of tools which are available to the model during the conversation., completion_args: map{stop: any, presence_penalty: any, frequency_penalty: any, temperature: any, top_p: any, max_tokens: any, random_seed: any, prediction: any, response_format: any, tool_choice: str} # White-listed arguments from the completion API, model: any, name: any, description: any, handoffs: any, deployment_chat: any, metadata: any}\n@returns(200) {instructions: any, tools: [any], completion_args: map{stop: any, presence_penalty: any, frequency_penalty: any, temperature: any, top_p: any, max_tokens: any, random_seed: any, prediction: any, response_format: any, tool_choice: str}, model: str, name: str, description: any, handoffs: any, metadata: any, object: str, id: str, version: int, versions: [int], created_at: str(date-time), updated_at: str(date-time), deployment_chat: bool, source: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/agents/{agent_id}\n@desc Delete an agent entity.\n@required {agent_id: str}\n@returns(204) Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/agents/{agent_id}/version\n@desc Update an agent version.\n@required {agent_id: str, version: int}\n@returns(200) {instructions: any, tools: [any], completion_args: map{stop: any, presence_penalty: any, frequency_penalty: any, temperature: any, top_p: any, max_tokens: any, random_seed: any, prediction: any, response_format: any, tool_choice: str}, model: str, name: str, description: any, handoffs: any, metadata: any, object: str, id: str, version: int, versions: [int], created_at: str(date-time), updated_at: str(date-time), deployment_chat: bool, source: str} # Successful Response\n@errors {422: Validation 
Error}\n\n@endpoint GET /v1/agents/{agent_id}/versions\n@desc List all versions of an agent.\n@required {agent_id: str}\n@optional {page: int=0 # Page number (0-indexed), page_size: int=20 # Number of versions per page}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/agents/{agent_id}/versions/{version}\n@desc Retrieve a specific version of an agent.\n@required {agent_id: str, version: str}\n@returns(200) {instructions: any, tools: [any], completion_args: map{stop: any, presence_penalty: any, frequency_penalty: any, temperature: any, top_p: any, max_tokens: any, random_seed: any, prediction: any, response_format: any, tool_choice: str}, model: str, name: str, description: any, handoffs: any, metadata: any, object: str, id: str, version: int, versions: [int], created_at: str(date-time), updated_at: str(date-time), deployment_chat: bool, source: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PUT /v1/agents/{agent_id}/aliases\n@desc Create or update an agent version alias.\n@required {agent_id: str, alias: str, version: int}\n@returns(200) {alias: str, version: int, created_at: str(date-time), updated_at: str(date-time)} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/agents/{agent_id}/aliases\n@desc List all aliases for an agent.\n@required {agent_id: str}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group conversations#stream\n@endpoint POST /v1/conversations#stream\n@desc Create a conversation and append entries to it.\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group conversations\n@endpoint POST /v1/conversations/{conversation_id}#stream\n@desc Append new entries to an existing conversation.\n@required {conversation_id: str # ID of the conversation to which we append entries.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST 
/v1/conversations/{conversation_id}/restart#stream\n@desc Restart a conversation starting from a given entry.\n@required {conversation_id: str # ID of the original conversation which is being restarted.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group files\n@endpoint POST /v1/files\n@desc Upload File\n@returns(200) {id: str(uuid), object: str, bytes: int, created_at: int, filename: str, purpose: str, sample_type: str, num_lines: any, mimetype: any, source: str, signature: any} # OK\n\n@endpoint GET /v1/files\n@desc List Files\n@optional {page: int=0, page_size: int=100, include_total: bool=True, sample_type: any, source: any, search: any, purpose: any, mimetypes: any}\n@returns(200) {data: [map], object: str, total: any} # OK\n\n@endpoint GET /v1/files/{file_id}\n@desc Retrieve File\n@required {file_id: str(uuid)}\n@returns(200) {id: str(uuid), object: str, bytes: int, created_at: int, filename: str, purpose: str, sample_type: str, num_lines: any, mimetype: any, source: str, signature: any, deleted: bool} # OK\n\n@endpoint DELETE /v1/files/{file_id}\n@desc Delete File\n@required {file_id: str(uuid)}\n@returns(200) {id: str(uuid), object: str, deleted: bool} # OK\n\n@endpoint GET /v1/files/{file_id}/content\n@desc Download File\n@required {file_id: str(uuid)}\n@returns(200) OK\n\n@endpoint GET /v1/files/{file_id}/url\n@desc Get Signed Url\n@required {file_id: str(uuid)}\n@optional {expiry: int=24 # Number of hours before the url becomes invalid. Defaults to 24h}\n@returns(200) {url: str} # OK\n\n@endgroup\n\n@group fine_tuning\n@endpoint GET /v1/fine_tuning/jobs\n@desc Get Fine Tuning Jobs\n@optional {page: int=0 # The page number of the results to be returned., page_size: int=100 # The number of items to return per page., model: any # The model name used for fine-tuning to filter on. When set, the other results are not displayed., created_after: any # The date/time to filter on. 
When set, the results for previous creation times are not displayed., created_before: any, created_by_me: bool=False # When set, only return results for jobs created by the API caller. Other results are not displayed., status: any # The current job state to filter on. When set, the other results are not displayed., wandb_project: any # The Weights and Biases project to filter on. When set, the other results are not displayed., wandb_name: any # The Weights and Biases run name to filter on. When set, the other results are not displayed., suffix: any # The model suffix to filter on. When set, the other results are not displayed.}\n@returns(200) {data: [any], object: str, total: int} # OK\n\n@endpoint POST /v1/fine_tuning/jobs\n@desc Create Fine Tuning Job\n@required {model: str(ministral-3b-latest/ministral-8b-latest/open-mistral-7b/open-mistral-nemo/mistral-small-latest/mistral-medium-latest/mistral-large-latest/pixtral-12b-latest/codestral-latest) # The name of the model to fine-tune., hyperparameters: any}\n@optional {dry_run: any # * If `true` the job is not spawned, instead the query returns a handful of useful metadata   for the user to perform sanity checks (see `LegacyJobMetadataOut` response). * Otherwise, the job is started and the query returns the job ID along with some of the   input parameters (see `JobOut` response)., training_files: [map{file_id!: str(uuid), weight: num}]=[], validation_files: any # A list containing the IDs of uploaded files that contain validation data. If you provide these files, the data is used to generate validation metrics periodically during fine-tuning. These metrics can be viewed in `checkpoints` when getting the status of a running fine-tuning job. The same data should not be present in both train and validation files., suffix: any # A string that will be added to your fine-tuning model name. 
For example, a suffix of \"my-great-model\" would produce a model name like `ft:open-mistral-7b:my-great-model:xxx...`, integrations: any # A list of integrations to enable for your fine-tuning job., auto_start: bool # This field will be required in a future release., invalid_sample_skip_percentage: num=0, job_type: any, repositories: any, classifier_targets: any}\n@returns(200) OK\n\n@endpoint GET /v1/fine_tuning/jobs/{job_id}\n@desc Get Fine Tuning Job\n@required {job_id: str(uuid) # The ID of the job to analyse.}\n@returns(200) OK\n\n@endpoint POST /v1/fine_tuning/jobs/{job_id}/cancel\n@desc Cancel Fine Tuning Job\n@required {job_id: str(uuid) # The ID of the job to cancel.}\n@returns(200) OK\n\n@endpoint POST /v1/fine_tuning/jobs/{job_id}/start\n@desc Start Fine Tuning Job\n@required {job_id: str(uuid)}\n@returns(200) OK\n\n@endpoint PATCH /v1/fine_tuning/models/{model_id}\n@desc Update Fine Tuned Model\n@required {model_id: str # The ID of the model to update.}\n@optional {name: any, description: any}\n@returns(200) OK\n\n@endpoint POST /v1/fine_tuning/models/{model_id}/archive\n@desc Archive Fine Tuned Model\n@required {model_id: str # The ID of the model to archive.}\n@returns(200) {id: str, object: str, archived: bool} # OK\n\n@endpoint DELETE /v1/fine_tuning/models/{model_id}/archive\n@desc Unarchive Fine Tuned Model\n@required {model_id: str # The ID of the model to unarchive.}\n@returns(200) {id: str, object: str, archived: bool} # OK\n\n@endgroup\n\n@group batch\n@endpoint GET /v1/batch/jobs\n@desc Get Batch Jobs\n@optional {page: int=0, page_size: int=100, model: any, agent_id: any, metadata: any, created_after: any, created_by_me: bool=False, status: any}\n@returns(200) {data: [map], object: str, total: int} # OK\n\n@endpoint POST /v1/batch/jobs\n@desc Create Batch Job\n@required {endpoint: 
str(/v1/chat/completions//v1/embeddings//v1/fim/completions//v1/moderations//v1/chat/moderations//v1/ocr//v1/classifications//v1/chat/classifications//v1/conversations//v1/audio/transcriptions)}\n@optional {input_files: any # The list of input files to be used for batch inference, these files should be `jsonl` files, containing the input data corresponding to the body of the request for the batch inference in a \"body\" field. An example of such a file is the following: ```json {\"custom_id\": \"0\", \"body\": {\"max_tokens\": 100, \"messages\": [{\"role\": \"user\", \"content\": \"What is the best French cheese?\"}]}} {\"custom_id\": \"1\", \"body\": {\"max_tokens\": 100, \"messages\": [{\"role\": \"user\", \"content\": \"What is the best French wine?\"}]}} ```, requests: any, model: any # The model to be used for batch inference., agent_id: any # In case you want to use a specific agent from the **deprecated** agents api for batch inference, you can specify the agent ID here., metadata: any # The metadata of your choice to be associated with the batch inference job., timeout_hours: int=24 # The timeout in hours for the batch inference job.}\n@returns(200) {id: str, object: str, input_files: [str(uuid)], metadata: any, endpoint: str, model: any, agent_id: any, output_file: any, error_file: any, errors: [map], outputs: any, status: str, created_at: int, total_requests: int, completed_requests: int, succeeded_requests: int, failed_requests: int, started_at: any, completed_at: any} # OK\n\n@endpoint GET /v1/batch/jobs/{job_id}\n@desc Get Batch Job\n@required {job_id: str(uuid)}\n@optional {inline: any}\n@returns(200) {id: str, object: str, input_files: [str(uuid)], metadata: any, endpoint: str, model: any, agent_id: any, output_file: any, error_file: any, errors: [map], outputs: any, status: str, created_at: int, total_requests: int, completed_requests: int, succeeded_requests: int, failed_requests: int, started_at: any, completed_at: any} # OK\n\n@endpoint POST 
/v1/batch/jobs/{job_id}/cancel\n@desc Cancel Batch Job\n@required {job_id: str(uuid)}\n@returns(200) {id: str, object: str, input_files: [str(uuid)], metadata: any, endpoint: str, model: any, agent_id: any, output_file: any, error_file: any, errors: [map], outputs: any, status: str, created_at: int, total_requests: int, completed_requests: int, succeeded_requests: int, failed_requests: int, started_at: any, completed_at: any} # OK\n\n@endgroup\n\n@group chat\n@endpoint POST /v1/chat/completions\n@desc Chat Completion\n@required {model: str # ID of the model to use. You can use the [List Available Models](/api/#tag/models/operation/list_models_v1_models_get) API to see all of your available models, or see our [Model overview](/models) for model descriptions., messages: [any] # The prompt(s) to generate completions for, encoded as a list of dict with role and content.}\n@optional {temperature: any # What sampling temperature to use, we recommend between 0.0 and 0.7. Higher values like 0.7 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. The default value varies depending on the model you are targeting. Call the `/models` endpoint to retrieve the appropriate value., top_p: num=1.0 # Nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both., max_tokens: any # The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length., stream: bool=False # Whether to stream back partial progress. If set, tokens will be sent as data-only server-side events as they become available, with the stream terminated by a data: [DONE] message. 
Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON., stop: any # Stop generation if this token is detected. Or if one of these tokens is detected when providing an array, random_seed: any # The seed to use for random sampling. If set, different calls will generate deterministic results., metadata: any, response_format: map{type: str, json_schema: any} # Specify the format that the model must output. By default it will use `{ \"type\": \"text\" }`. Setting to `{ \"type\": \"json_object\" }` enables JSON mode, which guarantees the message the model generates is in JSON. When using JSON mode you MUST also instruct the model to produce JSON yourself with a system or a user message. Setting to `{ \"type\": \"json_schema\" }` enables JSON schema mode, which guarantees the message the model generates is in JSON and follows the schema you provide., tools: any # A list of tools the model may call. Use this to provide a list of functions the model may generate JSON inputs for., tool_choice: any=auto # Controls which (if any) tool is called by the model. `none` means the model will not call any tool and instead generates a message. `auto` means the model can pick between generating a message or calling one or more tools. `any` or `required` means the model must call one or more tools. Specifying a particular tool via `{\"type\": \"function\", \"function\": {\"name\": \"my_function\"}}` forces the model to call that tool., presence_penalty: num=0.0 # The `presence_penalty` determines how much the model penalizes the repetition of words or phrases. A higher presence penalty encourages the model to use a wider variety of words and phrases, making the output more diverse and creative., frequency_penalty: num=0.0 # The `frequency_penalty` penalizes the repetition of words based on their frequency in the generated text. 
A higher frequency penalty discourages the model from repeating words that have already appeared frequently in the output, promoting diversity and reducing repetition., n: any # Number of completions to return for each request, input tokens are only billed once., prediction: map{type: str, content: str} # Enable users to specify an expected completion, optimizing response times by leveraging known or predictable content., parallel_tool_calls: bool=True # Whether to enable parallel function calling during tool use, when enabled the model can call multiple tools in parallel., prompt_mode: any # Allows toggling between the reasoning mode and no system prompt. When set to `reasoning` the system prompt for reasoning models will be used., safe_prompt: bool=False # Whether to inject a safety prompt before all conversations.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group fim\n@endpoint POST /v1/fim/completions\n@desc Fim Completion\n@required {model: str=codestral-2404 # ID of the model with FIM to use., prompt: str # The text/code to complete.}\n@optional {temperature: any # What sampling temperature to use, we recommend between 0.0 and 0.7. Higher values like 0.7 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. The default value varies depending on the model you are targeting. Call the `/models` endpoint to retrieve the appropriate value., top_p: num=1.0 # Nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both., max_tokens: any # The maximum number of tokens to generate in the completion. 
The token count of your prompt plus `max_tokens` cannot exceed the model's context length., stream: bool=False # Whether to stream back partial progress. If set, tokens will be sent as data-only server-side events as they become available, with the stream terminated by a data: [DONE] message. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON., stop: any # Stop generation if this token is detected. Or if one of these tokens is detected when providing an array, random_seed: any # The seed to use for random sampling. If set, different calls will generate deterministic results., metadata: any, suffix: any= # Optional text/code that adds more context for the model. When given a `prompt` and a `suffix` the model will fill what is between them. When `suffix` is not provided, the model will simply execute completion starting with `prompt`., min_tokens: any # The minimum number of tokens to generate in the completion.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group agents\n@endpoint POST /v1/agents/completions\n@desc Agents Completion\n@required {messages: [any] # The prompt(s) to generate completions for, encoded as a list of dict with role and content., agent_id: str # The ID of the agent to use for this completion.}\n@optional {max_tokens: any # The maximum number of tokens to generate in the completion. The token count of your prompt plus `max_tokens` cannot exceed the model's context length., stream: bool=False # Whether to stream back partial progress. If set, tokens will be sent as data-only server-side events as they become available, with the stream terminated by a data: [DONE] message. Otherwise, the server will hold the request open until the timeout or until completion, with the response containing the full result as JSON., stop: any # Stop generation if this token is detected. 
Or if one of these tokens is detected when providing an array, random_seed: any # The seed to use for random sampling. If set, different calls will generate deterministic results., metadata: any, response_format: map{type: str, json_schema: any} # Specify the format that the model must output. By default it will use `{ \"type\": \"text\" }`. Setting to `{ \"type\": \"json_object\" }` enables JSON mode, which guarantees the message the model generates is in JSON. When using JSON mode you MUST also instruct the model to produce JSON yourself with a system or a user message. Setting to `{ \"type\": \"json_schema\" }` enables JSON schema mode, which guarantees the message the model generates is in JSON and follows the schema you provide., tools: any, tool_choice: any=auto, presence_penalty: num=0.0 # The `presence_penalty` determines how much the model penalizes the repetition of words or phrases. A higher presence penalty encourages the model to use a wider variety of words and phrases, making the output more diverse and creative., frequency_penalty: num=0.0 # The `frequency_penalty` penalizes the repetition of words based on their frequency in the generated text. A higher frequency penalty discourages the model from repeating words that have already appeared frequently in the output, promoting diversity and reducing repetition., n: any # Number of completions to return for each request, input tokens are only billed once., prediction: map{type: str, content: str} # Enable users to specify an expected completion, optimizing response times by leveraging known or predictable content., parallel_tool_calls: bool=True, prompt_mode: any # Allows toggling between the reasoning mode and no system prompt. 
When set to `reasoning` the system prompt for reasoning models will be used.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group embeddings\n@endpoint POST /v1/embeddings\n@desc Embeddings\n@required {model: str # The ID of the model to be used for embedding., input: any # The text content to be embedded, can be a string or an array of strings for fast processing in bulk.}\n@optional {metadata: any, output_dimension: any # The dimension of the output embeddings when feature available. If not provided, a default output dimension will be used., output_dtype: str(float/int8/uint8/binary/ubinary), encoding_format: str(float/base64)}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group moderations\n@endpoint POST /v1/moderations\n@desc Moderations\n@required {model: str # ID of the model to use., input: any # Text to classify.}\n@optional {metadata: any}\n@returns(200) {id: str, model: str, results: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group chat\n@endpoint POST /v1/chat/moderations\n@desc Chat Moderations\n@required {input: any # Chat to classify, model: str}\n@returns(200) {id: str, model: str, results: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group ocr\n@endpoint POST /v1/ocr\n@desc OCR\n@required {model: any, document: any # Document to run OCR on}\n@optional {id: str, pages: any # Specific pages user wants to process in various formats: single number, range, or list of both. Starts from 0, include_image_base64: any # Include image URLs in response, image_limit: any # Max images to extract, image_min_size: any # Minimum height and width of image to extract, bbox_annotation_format: any # Structured output class for extracting useful information from each extracted bounding box / image from document. 
Only json_schema is valid for this field, document_annotation_format: any # Structured output class for extracting useful information from the entire document. Only json_schema is valid for this field, document_annotation_prompt: any # Optional prompt to guide the model in extracting structured output from the entire document. A document_annotation_format must be provided., table_format: any, extract_header: bool=False, extract_footer: bool=False}\n@returns(200) {pages: [map], model: str, document_annotation: any, usage_info: map{pages_processed: int, doc_size_bytes: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group classifications\n@endpoint POST /v1/classifications\n@desc Classifications\n@required {model: str # ID of the model to use., input: any # Text to classify.}\n@optional {metadata: any}\n@returns(200) {id: str, model: str, results: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group chat\n@endpoint POST /v1/chat/classifications\n@desc Chat Classifications\n@required {model: str, input: any # Chat to classify}\n@returns(200) {id: str, model: str, results: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group audio\n@endpoint POST /v1/audio/transcriptions\n@desc Create Transcription\n@returns(200) {model: str, text: str, language: any, segments: [map], usage: map{prompt_tokens: int, completion_tokens: int, total_tokens: int, prompt_audio_seconds: any}} # Successful Response\n\n@endpoint POST /v1/audio/transcriptions#stream\n@desc Create Streaming Transcription (SSE)\n@returns(200) Stream of transcription events\n\n@endgroup\n\n@group libraries\n@endpoint GET /v1/libraries\n@desc List all libraries you have access to.\n@returns(200) {data: [map]} # Successful Response\n\n@endpoint POST /v1/libraries\n@desc Create a new Library.\n@required {name: str}\n@optional {description: any, chunk_size: any}\n@returns(201) {id: str(uuid), name: str, created_at: 
str(date-time), updated_at: str(date-time), owner_id: any, owner_type: str, total_size: int, nb_documents: int, chunk_size: any, emoji: any, description: any, generated_description: any, explicit_user_members_count: any, explicit_workspace_members_count: any, org_sharing_role: any, generated_name: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/libraries/{library_id}\n@desc Detailed information about a specific Library.\n@required {library_id: str(uuid)}\n@returns(200) {id: str(uuid), name: str, created_at: str(date-time), updated_at: str(date-time), owner_id: any, owner_type: str, total_size: int, nb_documents: int, chunk_size: any, emoji: any, description: any, generated_description: any, explicit_user_members_count: any, explicit_workspace_members_count: any, org_sharing_role: any, generated_name: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/libraries/{library_id}\n@desc Delete a library and all of its documents.\n@required {library_id: str(uuid)}\n@returns(200) {id: str(uuid), name: str, created_at: str(date-time), updated_at: str(date-time), owner_id: any, owner_type: str, total_size: int, nb_documents: int, chunk_size: any, emoji: any, description: any, generated_description: any, explicit_user_members_count: any, explicit_workspace_members_count: any, org_sharing_role: any, generated_name: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PUT /v1/libraries/{library_id}\n@desc Update a library.\n@required {library_id: str(uuid)}\n@optional {name: any, description: any}\n@returns(200) {id: str(uuid), name: str, created_at: str(date-time), updated_at: str(date-time), owner_id: any, owner_type: str, total_size: int, nb_documents: int, chunk_size: any, emoji: any, description: any, generated_description: any, explicit_user_members_count: any, explicit_workspace_members_count: any, org_sharing_role: any, generated_name: any} # Successful Response\n@errors {422: 
Validation Error}\n\n@endpoint GET /v1/libraries/{library_id}/documents\n@desc List documents in a given library.\n@required {library_id: str(uuid)}\n@optional {search: any, page_size: int=100, page: int=0, filters_attributes: any, sort_by: str=created_at, sort_order: str=desc}\n@returns(200) {pagination: map{total_items: int, total_pages: int, current_page: int, page_size: int, has_more: bool}, data: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/libraries/{library_id}/documents\n@desc Upload a new document.\n@required {library_id: str(uuid)}\n@returns(201) {id: str(uuid), library_id: str(uuid), hash: any, mime_type: any, extension: any, size: any, name: str, summary: any, created_at: str(date-time), last_processed_at: any, number_of_pages: any, processing_status: str, uploaded_by_id: any, uploaded_by_type: str, tokens_processing_main_content: any, tokens_processing_summary: any, url: any, attributes: any, tokens_processing_total: int} # Upload successful, returns the created document's information.\n@returns(200) {id: str(uuid), library_id: str(uuid), hash: any, mime_type: any, extension: any, size: any, name: str, summary: any, created_at: str(date-time), last_processed_at: any, number_of_pages: any, processing_status: str, uploaded_by_id: any, uploaded_by_type: str, tokens_processing_main_content: any, tokens_processing_summary: any, url: any, attributes: any, tokens_processing_total: int} # A document with the same hash was found in this library. 
Returns the existing document.\n@errors {422: Validation Error}\n\n@endpoint GET /v1/libraries/{library_id}/documents/{document_id}\n@desc Retrieve the metadata of a specific document.\n@required {library_id: str(uuid), document_id: str(uuid)}\n@returns(200) {id: str(uuid), library_id: str(uuid), hash: any, mime_type: any, extension: any, size: any, name: str, summary: any, created_at: str(date-time), last_processed_at: any, number_of_pages: any, processing_status: str, uploaded_by_id: any, uploaded_by_type: str, tokens_processing_main_content: any, tokens_processing_summary: any, url: any, attributes: any, tokens_processing_total: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PUT /v1/libraries/{library_id}/documents/{document_id}\n@desc Update the metadata of a specific document.\n@required {library_id: str(uuid), document_id: str(uuid)}\n@optional {name: any, attributes: any}\n@returns(200) {id: str(uuid), library_id: str(uuid), hash: any, mime_type: any, extension: any, size: any, name: str, summary: any, created_at: str(date-time), last_processed_at: any, number_of_pages: any, processing_status: str, uploaded_by_id: any, uploaded_by_type: str, tokens_processing_main_content: any, tokens_processing_summary: any, url: any, attributes: any, tokens_processing_total: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/libraries/{library_id}/documents/{document_id}\n@desc Delete a document.\n@required {library_id: str(uuid), document_id: str(uuid)}\n@returns(204) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/libraries/{library_id}/documents/{document_id}/text_content\n@desc Retrieve the text content of a specific document.\n@required {library_id: str(uuid), document_id: str(uuid)}\n@returns(200) {text: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/libraries/{library_id}/documents/{document_id}/status\n@desc Retrieve the processing status of a 
specific document.\n@required {library_id: str(uuid), document_id: str(uuid)}\n@returns(200) {document_id: str(uuid), processing_status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/libraries/{library_id}/documents/{document_id}/signed-url\n@desc Retrieve the signed URL of a specific document.\n@required {library_id: str(uuid), document_id: str(uuid)}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/libraries/{library_id}/documents/{document_id}/extracted-text-signed-url\n@desc Retrieve the signed URL of text extracted from a given document.\n@required {library_id: str(uuid), document_id: str(uuid)}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/libraries/{library_id}/documents/{document_id}/reprocess\n@desc Reprocess a document.\n@required {library_id: str(uuid), document_id: str(uuid)}\n@returns(204) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/libraries/{library_id}/share\n@desc List all access levels for this library.\n@required {library_id: str(uuid)}\n@returns(200) {data: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PUT /v1/libraries/{library_id}/share\n@desc Create or update an access level.\n@required {library_id: str(uuid), level: str(Viewer/Editor), share_with_uuid: str(uuid) # The id of the entity (user, workspace or organization) to share with, share_with_type: str(User/Workspace/Org) # The type of entity, used to share a library.}\n@optional {org_id: any}\n@returns(200) {library_id: str(uuid), user_id: any, org_id: str(uuid), role: str, share_with_type: str, share_with_uuid: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/libraries/{library_id}/share\n@desc Delete an access level.\n@required {library_id: str(uuid), share_with_uuid: str(uuid) # The id of the entity (user, workspace or organization) to share with, share_with_type: 
str(User/Workspace/Org) # The type of entity, used to share a library.}\n@optional {org_id: any}\n@returns(200) {library_id: str(uuid), user_id: any, org_id: str(uuid), role: str, share_with_type: str, share_with_uuid: any} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@end\n"}}
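The chat group in the spec above documents `/v1/chat/completions` as taking a required `model` and `messages` list plus a whitelist of optional sampling parameters. A minimal sketch in Python (standard library only) of assembling and sending such a request, assuming the Bearer auth scheme and base URL from this skill; the `build_chat_request`/`post_chat` helper names and the client-side option whitelist are illustrative, not part of the API:

```python
# Sketch of a /v1/chat/completions request, following the parameter list
# in the spec above. The network call is guarded behind MISTRAL_API_KEY
# since it needs real credentials; the payload builder itself is pure.
import json
import os
import urllib.request

BASE_URL = "https://api.mistral.ai"

def build_chat_request(model, messages, **options):
    """Assemble a chat-completion payload; reject options the spec does not list."""
    allowed = {
        "temperature", "top_p", "max_tokens", "stream", "stop",
        "random_seed", "metadata", "response_format", "tools", "tool_choice",
        "presence_penalty", "frequency_penalty", "n", "prediction",
        "parallel_tool_calls", "prompt_mode", "safe_prompt",
    }
    unknown = set(options) - allowed
    if unknown:
        raise ValueError(f"unsupported options: {sorted(unknown)}")
    return {"model": model, "messages": messages, **options}

def post_chat(payload, api_key):
    """POST the payload with the Bearer auth header described in this skill."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request(
    "mistral-small-latest",
    [{"role": "user", "content": "What is the best French cheese?"}],
    temperature=0.2,  # spec recommends staying between 0.0 and 0.7
    response_format={"type": "json_object"},
)

if os.environ.get("MISTRAL_API_KEY"):  # only call out when a key is configured
    print(post_chat(payload, os.environ["MISTRAL_API_KEY"]))
```

The same builder pattern applies to `/v1/fim/completions` and `/v1/agents/completions`, which share most of these optional parameters; only the required fields differ (`prompt` for FIM, `agent_id` for agents).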