{"files":{"SKILL.md":"---\nname: elevenlabs-api-documentation\ndescription: \"ElevenLabs API skill. Use when working with the ElevenLabs API for history, sound-generation, audio-isolation, and related endpoint groups. Covers 274 endpoints.\"\nversion: 1.0.0\ngenerator: lapsh\n---\n\n# ElevenLabs API Documentation\nAPI version: 1.0\n\n## Auth\nAPI key sent in the xi-api-key request header\n\n## Base URL\nNot specified in the spec; the public ElevenLabs API is served from https://api.elevenlabs.io.\n\n## Setup\n1. Set your API key in the xi-api-key header\n2. GET /v1/history -- list generated items\n3. POST /v1/history/download -- create first download\n\n## Endpoints\n274 endpoints across 25 groups. See references/api-spec.lap for full details.\n\n### History\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/history | List Generated Items |\n| GET | /v1/history/{history_item_id} | Get History Item |\n| DELETE | /v1/history/{history_item_id} | Delete History Item |\n| GET | /v1/history/{history_item_id}/audio | Get Audio From History Item |\n| POST | /v1/history/download | Download History Items |\n\n### Sound-generation\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/sound-generation | Sound Generation |\n\n### Audio-isolation\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/audio-isolation | Audio Isolation |\n| POST | /v1/audio-isolation/stream | Audio Isolation Stream |\n\n### Voices\n| Method | Path | Description |\n|--------|------|-------------|\n| DELETE | /v1/voices/{voice_id}/samples/{sample_id} | Delete Sample |\n| GET | /v1/voices/{voice_id}/samples/{sample_id}/audio | Get Audio From Sample |\n| GET | /v1/voices | List Voices |\n| GET | /v2/voices | Get Voices V2 |\n| GET | /v1/voices/settings/default | Get Default Voice Settings 
|\n| GET | /v1/voices/{voice_id}/settings | Get Voice Settings |\n| GET | /v1/voices/{voice_id} | Get Voice |\n| DELETE | /v1/voices/{voice_id} | Delete Voice |\n| POST | /v1/voices/{voice_id}/settings/edit | Edit Voice Settings |\n| POST | /v1/voices/add | Add Voice |\n| POST | /v1/voices/{voice_id}/edit | Edit Voice |\n| POST | /v1/voices/add/{public_user_id}/{voice_id} | Add Shared Voice |\n| POST | /v1/voices/pvc | Create Pvc Voice |\n| POST | /v1/voices/pvc/{voice_id} | Edit Pvc Voice |\n| POST | /v1/voices/pvc/{voice_id}/samples | Add Samples To Pvc Voice |\n| POST | /v1/voices/pvc/{voice_id}/samples/{sample_id} | Update Pvc Voice Sample |\n| DELETE | /v1/voices/pvc/{voice_id}/samples/{sample_id} | Delete Pvc Voice Sample |\n| GET | /v1/voices/pvc/{voice_id}/samples/{sample_id}/audio | Retrieve Voice Sample Audio |\n| GET | /v1/voices/pvc/{voice_id}/samples/{sample_id}/waveform | Retrieve Voice Sample Visual Waveform |\n| GET | /v1/voices/pvc/{voice_id}/samples/{sample_id}/speakers | Retrieve Speaker Separation Status |\n| POST | /v1/voices/pvc/{voice_id}/samples/{sample_id}/separate-speakers | Start Speaker Separation |\n| GET | /v1/voices/pvc/{voice_id}/samples/{sample_id}/speakers/{speaker_id}/audio | Retrieve Separated Speaker Audio |\n| GET | /v1/voices/pvc/{voice_id}/captcha | Get Pvc Voice Captcha |\n| POST | /v1/voices/pvc/{voice_id}/captcha | Verify Pvc Voice Captcha |\n| POST | /v1/voices/pvc/{voice_id}/train | Run Pvc Training |\n| POST | /v1/voices/pvc/{voice_id}/verification | Request Manual Verification |\n\n### Text-to-speech\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/text-to-speech/{voice_id} | Text To Speech |\n| POST | /v1/text-to-speech/{voice_id}/with-timestamps | Text To Speech With Timestamps |\n| POST | /v1/text-to-speech/{voice_id}/stream | Text To Speech Streaming |\n| POST | /v1/text-to-speech/{voice_id}/stream/with-timestamps | Text To Speech Streaming With Timestamps |\n\n### Text-to-dialogue\n| 
Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/text-to-dialogue | Text To Dialogue (Multi-Voice) |\n| POST | /v1/text-to-dialogue/stream | Text To Dialogue (Multi-Voice) Streaming |\n| POST | /v1/text-to-dialogue/stream/with-timestamps | Text To Dialogue Streaming With Timestamps |\n| POST | /v1/text-to-dialogue/with-timestamps | Text To Dialogue With Timestamps |\n\n### Speech-to-speech\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/speech-to-speech/{voice_id} | Speech To Speech |\n| POST | /v1/speech-to-speech/{voice_id}/stream | Speech To Speech Streaming |\n\n### Text-to-voice\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/text-to-voice/create-previews | Generate A Voice Preview From Description |\n| POST | /v1/text-to-voice | Create A New Voice From Voice Preview |\n| POST | /v1/text-to-voice/design | Design A Voice. |\n| POST | /v1/text-to-voice/{voice_id}/remix | Remix A Voice. |\n| GET | /v1/text-to-voice/{generated_voice_id}/stream | Text To Voice Preview Streaming |\n\n### User\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/user/subscription | Get User Subscription Info |\n| GET | /v1/user | Get User Info |\n\n### Studio\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/studio/podcasts | Create Podcast |\n| POST | /v1/studio/projects/{project_id}/pronunciation-dictionaries | Create Pronunciation Dictionaries |\n| GET | /v1/studio/projects | List Studio Projects |\n| POST | /v1/studio/projects | Create Studio Project |\n| POST | /v1/studio/projects/{project_id} | Update Studio Project |\n| GET | /v1/studio/projects/{project_id} | Get Studio Project |\n| DELETE | /v1/studio/projects/{project_id} | Delete Studio Project |\n| POST | /v1/studio/projects/{project_id}/content | Update Studio Project Content |\n| POST | /v1/studio/projects/{project_id}/convert | Convert Studio Project |\n| GET | 
/v1/studio/projects/{project_id}/snapshots | List Studio Project Snapshots |\n| GET | /v1/studio/projects/{project_id}/snapshots/{project_snapshot_id} | Get Project Snapshot |\n| POST | /v1/studio/projects/{project_id}/snapshots/{project_snapshot_id}/stream | Stream Studio Project Audio |\n| POST | /v1/studio/projects/{project_id}/snapshots/{project_snapshot_id}/archive | Stream Archive With Studio Project Audio |\n| GET | /v1/studio/projects/{project_id}/chapters | List Chapters |\n| POST | /v1/studio/projects/{project_id}/chapters | Create Chapter |\n| GET | /v1/studio/projects/{project_id}/chapters/{chapter_id} | Get Chapter |\n| POST | /v1/studio/projects/{project_id}/chapters/{chapter_id} | Update Chapter |\n| DELETE | /v1/studio/projects/{project_id}/chapters/{chapter_id} | Delete Chapter |\n| POST | /v1/studio/projects/{project_id}/chapters/{chapter_id}/convert | Convert Chapter |\n| GET | /v1/studio/projects/{project_id}/chapters/{chapter_id}/snapshots | List Chapter Snapshots |\n| GET | /v1/studio/projects/{project_id}/chapters/{chapter_id}/snapshots/{chapter_snapshot_id} | Get Chapter Snapshot |\n| POST | /v1/studio/projects/{project_id}/chapters/{chapter_id}/snapshots/{chapter_snapshot_id}/stream | Stream Chapter Audio |\n| GET | /v1/studio/projects/{project_id}/muted-tracks | Get Project Muted Tracks |\n\n### Music\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/music/video-to-music | Video To Music |\n| POST | /v1/music/plan | Generate Composition Plan |\n| POST | /v1/music | Compose Music |\n| POST | /v1/music/detailed | Compose Music With A Detailed Response |\n| POST | /v1/music/stream | Stream Composed Music |\n| POST | /v1/music/upload | Upload Music |\n| POST | /v1/music/stem-separation | Stem Separation |\n\n### Dubbing\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/dubbing/resource/{dubbing_id} | Get The Dubbing Resource For An Id. 
|\n| POST | /v1/dubbing/resource/{dubbing_id}/language | Add A Language To The Resource |\n| POST | /v1/dubbing/resource/{dubbing_id}/speaker/{speaker_id}/segment | Create A Segment For The Speaker |\n| PATCH | /v1/dubbing/resource/{dubbing_id}/segment/{segment_id}/{language} | Modify A Single Segment |\n| POST | /v1/dubbing/resource/{dubbing_id}/migrate-segments | Move Segments Between Speakers |\n| DELETE | /v1/dubbing/resource/{dubbing_id}/segment/{segment_id} | Delete A Single Segment |\n| POST | /v1/dubbing/resource/{dubbing_id}/transcribe | Transcribe Segments |\n| POST | /v1/dubbing/resource/{dubbing_id}/translate | Translate All Or Some Segments And Languages |\n| POST | /v1/dubbing/resource/{dubbing_id}/dub | Dub All Or Some Segments And Languages |\n| PATCH | /v1/dubbing/resource/{dubbing_id}/speaker/{speaker_id} | Update Metadata For A Speaker |\n| POST | /v1/dubbing/resource/{dubbing_id}/speaker | Create A New Speaker |\n| GET | /v1/dubbing/resource/{dubbing_id}/speaker/{speaker_id}/similar-voices | Search The ElevenLabs Library For Voices Similar To A Speaker |\n| POST | /v1/dubbing/resource/{dubbing_id}/render/{language} | Render Audio Or Video For The Given Language |\n| GET | /v1/dubbing | List Dubs |\n| POST | /v1/dubbing | Dub A Video Or An Audio File |\n| GET | /v1/dubbing/{dubbing_id} | Get Dubbing |\n| DELETE | /v1/dubbing/{dubbing_id} | Delete Dubbing |\n| GET | /v1/dubbing/{dubbing_id}/audio/{language_code} | Get Dubbed File |\n| GET | /v1/dubbing/{dubbing_id}/transcript/{language_code} | Get Dubbed Transcript |\n| GET | /v1/dubbing/{dubbing_id}/transcripts/{language_code}/format/{format_type} | Retrieve A Transcript |\n\n### Models\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/models | Get Models |\n\n### Audio-native\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/audio-native | Create Audio Native Enabled Project 
|\n| GET | /v1/audio-native/{project_id}/settings | Get Audio Native Project Settings |\n| POST | /v1/audio-native/{project_id}/content | Update Audio-Native Project Content |\n| POST | /v1/audio-native/content | Update Audio-Native Content From Url |\n\n### Shared-voices\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/shared-voices | Get Voices |\n\n### Similar-voices\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/similar-voices | Get Similar Library Voices |\n\n### Usage\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/usage/character-stats | Get Characters Usage Metrics |\n\n### Pronunciation-dictionaries\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/pronunciation-dictionaries/add-from-file | Add A Pronunciation Dictionary |\n| POST | /v1/pronunciation-dictionaries/add-from-rules | Add A Pronunciation Dictionary |\n| PATCH | /v1/pronunciation-dictionaries/{pronunciation_dictionary_id} | Update Pronunciation Dictionary |\n| GET | /v1/pronunciation-dictionaries/{pronunciation_dictionary_id} | Get Metadata For A Pronunciation Dictionary |\n| POST | /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}/set-rules | Set Rules On The Pronunciation Dictionary |\n| POST | /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}/add-rules | Add Rules To The Pronunciation Dictionary |\n| POST | /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}/remove-rules | Remove Rules From The Pronunciation Dictionary |\n| GET | /v1/pronunciation-dictionaries/{dictionary_id}/{version_id}/download | Get A Pls File With A Pronunciation Dictionary Version Rules |\n| GET | /v1/pronunciation-dictionaries | Get Pronunciation Dictionaries |\n\n### Service-accounts\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/service-accounts/{service_account_user_id}/api-keys | Get Service Account Api Keys Route |\n| POST | 
/v1/service-accounts/{service_account_user_id}/api-keys | Create Service Account Api Key |\n| PATCH | /v1/service-accounts/{service_account_user_id}/api-keys/{api_key_id} | Edit Service Account Api Key |\n| DELETE | /v1/service-accounts/{service_account_user_id}/api-keys/{api_key_id} | Delete Service Account Api Key |\n| GET | /v1/service-accounts | Get Workspace Service Accounts |\n\n### Workspace\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/workspace/auth-connections | Create Workspace Auth Connection |\n| GET | /v1/workspace/auth-connections | Get Workspace Auth Connections |\n| DELETE | /v1/workspace/auth-connections/{auth_connection_id} | Delete Workspace Auth Connection |\n| GET | /v1/workspace/groups | Get All Groups |\n| GET | /v1/workspace/groups/search | Search User Groups |\n| POST | /v1/workspace/groups/{group_id}/members/remove | Delete Member From User Group |\n| POST | /v1/workspace/groups/{group_id}/members | Add Member To User Group |\n| POST | /v1/workspace/invites/add | Invite User |\n| POST | /v1/workspace/invites/add-bulk | Invite Multiple Users |\n| DELETE | /v1/workspace/invites | Delete Existing Invitation |\n| POST | /v1/workspace/members | Update Member |\n| GET | /v1/workspace/resources/{resource_id} | Get Resource |\n| POST | /v1/workspace/resources/{resource_id}/share | Share Workspace Resource |\n| POST | /v1/workspace/resources/{resource_id}/unshare | Unshare Workspace Resource |\n| GET | /v1/workspace/webhooks | List Workspace Webhooks |\n| POST | /v1/workspace/webhooks | Create Workspace Webhook |\n| PATCH | /v1/workspace/webhooks/{webhook_id} | Update Workspace Webhook |\n| DELETE | /v1/workspace/webhooks/{webhook_id} | Delete Workspace Webhook |\n\n### Speech-to-text\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/speech-to-text | Speech To Text |\n| GET | /v1/speech-to-text/transcripts/{transcription_id} | Get Transcript By Id |\n| DELETE | 
/v1/speech-to-text/transcripts/{transcription_id} | Delete Transcript By Id |\n\n### Single-use-token\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/single-use-token/{token_type} | Create Single Use Token |\n\n### Forced-alignment\n| Method | Path | Description |\n|--------|------|-------------|\n| POST | /v1/forced-alignment | Create Forced Alignment |\n\n### Convai\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /v1/convai/conversation/get-signed-url | Get Signed Url |\n| GET | /v1/convai/conversation/get_signed_url | Get Signed Url |\n| GET | /v1/convai/conversation/token | Get Webrtc Token |\n| POST | /v1/convai/twilio/outbound-call | Handle An Outbound Call Via Twilio |\n| POST | /v1/convai/twilio/register-call | Register A Twilio Call And Return Twiml |\n| POST | /v1/convai/whatsapp/outbound-call | Make An Outbound Call Via Whatsapp |\n| POST | /v1/convai/whatsapp/outbound-message | Send An Outbound Message Via Whatsapp |\n| POST | /v1/convai/agents/create | Create Agent |\n| GET | /v1/convai/agents/summaries | Get Agent Summaries |\n| GET | /v1/convai/agents/{agent_id} | Get Agent |\n| PATCH | /v1/convai/agents/{agent_id} | Patch Agent Settings |\n| DELETE | /v1/convai/agents/{agent_id} | Delete Agent |\n| GET | /v1/convai/agents/{agent_id}/widget | Get Agent Widget Config |\n| GET | /v1/convai/agents/{agent_id}/link | Get Shareable Agent Link |\n| POST | /v1/convai/agents/{agent_id}/avatar | Post Agent Avatar |\n| GET | /v1/convai/agents | List Agents |\n| GET | /v1/convai/agent/{agent_id}/knowledge-base/size | Returns The Size Of The Agent's Knowledge Base |\n| POST | /v1/convai/agent/{agent_id}/llm-usage/calculate | Calculate Expected Llm Usage For An Agent |\n| POST | /v1/convai/agents/{agent_id}/duplicate | Duplicate Agent |\n| POST | /v1/convai/agents/{agent_id}/simulate-conversation | Simulates A Conversation |\n| POST | /v1/convai/agents/{agent_id}/simulate-conversation/stream | 
Simulates A Conversation (Stream) |\n| POST | /v1/convai/agent-testing/create | Create Agent Response Test |\n| POST | /v1/convai/agent-testing/folders | Create Agent Test Folder |\n| GET | /v1/convai/agent-testing/folders/{folder_id} | Get Agent Test Folder By Id |\n| PATCH | /v1/convai/agent-testing/folders/{folder_id} | Update Agent Test Folder |\n| DELETE | /v1/convai/agent-testing/folders/{folder_id} | Delete Agent Test Folder |\n| POST | /v1/convai/agent-testing/bulk-move | Bulk Move Tests To Folder |\n| GET | /v1/convai/agent-testing/{test_id} | Get Agent Response Test By Id |\n| PUT | /v1/convai/agent-testing/{test_id} | Update Agent Response Test |\n| DELETE | /v1/convai/agent-testing/{test_id} | Delete Agent Response Test |\n| POST | /v1/convai/agent-testing/summaries | Get Agent Response Test Summaries By Ids |\n| GET | /v1/convai/agent-testing | List Agent Response Tests |\n| GET | /v1/convai/test-invocations | List Test Invocations |\n| POST | /v1/convai/agents/{agent_id}/run-tests | Run Tests On The Agent |\n| GET | /v1/convai/test-invocations/{test_invocation_id} | Get Test Invocation |\n| POST | /v1/convai/test-invocations/{test_invocation_id}/resubmit | Resubmit Tests |\n| GET | /v1/convai/conversations | Get Conversations |\n| GET | /v1/convai/users | Get Conversation Users |\n| GET | /v1/convai/conversations/{conversation_id} | Get Conversation Details |\n| DELETE | /v1/convai/conversations/{conversation_id} | Delete Conversation |\n| GET | /v1/convai/conversations/{conversation_id}/audio | Get Conversation Audio |\n| POST | /v1/convai/conversations/{conversation_id}/feedback | Send Conversation Feedback |\n| GET | /v1/convai/conversations/messages/text-search | Text Search Conversation Messages |\n| GET | /v1/convai/conversations/messages/smart-search | Smart Search Conversation Messages |\n| POST | /v1/convai/phone-numbers | Import Phone Number |\n| GET | /v1/convai/phone-numbers | List Phone Numbers |\n| GET | 
/v1/convai/phone-numbers/{phone_number_id} | Get Phone Number |\n| DELETE | /v1/convai/phone-numbers/{phone_number_id} | Delete Phone Number |\n| PATCH | /v1/convai/phone-numbers/{phone_number_id} | Update Phone Number |\n| POST | /v1/convai/llm-usage/calculate | Calculate Expected Llm Usage |\n| GET | /v1/convai/llm/list | List Available Llms |\n| POST | /v1/convai/conversations/{conversation_id}/files | Upload File |\n| DELETE | /v1/convai/conversations/{conversation_id}/files/{file_id} | Delete File Upload |\n| GET | /v1/convai/analytics/live-count | Get Live Count |\n| GET | /v1/convai/knowledge-base/summaries | Get Knowledge Base Summaries By Ids |\n| POST | /v1/convai/knowledge-base | Add To Knowledge Base |\n| GET | /v1/convai/knowledge-base | Get Knowledge Base List |\n| POST | /v1/convai/knowledge-base/url | Create Url Document |\n| POST | /v1/convai/knowledge-base/file | Create File Document |\n| POST | /v1/convai/knowledge-base/text | Create Text Document |\n| POST | /v1/convai/knowledge-base/folder | Create Folder |\n| PATCH | /v1/convai/knowledge-base/{documentation_id} | Update Document |\n| GET | /v1/convai/knowledge-base/{documentation_id} | Get Documentation From Knowledge Base |\n| DELETE | /v1/convai/knowledge-base/{documentation_id} | Delete Knowledge Base Document Or Folder |\n| POST | /v1/convai/knowledge-base/rag-index | Compute Rag Indexes In Batch |\n| GET | /v1/convai/knowledge-base/rag-index | Get Rag Index Overview. |\n| POST | /v1/convai/knowledge-base/{documentation_id}/refresh | Refresh Url Document Content |\n| POST | /v1/convai/knowledge-base/{documentation_id}/rag-index | Compute Rag Index. |\n| GET | /v1/convai/knowledge-base/{documentation_id}/rag-index | Get Rag Indexes Of The Specified Knowledgebase Document. |\n| DELETE | /v1/convai/knowledge-base/{documentation_id}/rag-index/{rag_index_id} | Delete Rag Index. 
|\n| GET | /v1/convai/knowledge-base/{documentation_id}/dependent-agents | Get Dependent Agents List |\n| GET | /v1/convai/knowledge-base/{documentation_id}/content | Get Document Content |\n| GET | /v1/convai/knowledge-base/{documentation_id}/source-file-url | Get Document Source File Url |\n| GET | /v1/convai/knowledge-base/{documentation_id}/chunk/{chunk_id} | Get Documentation Chunk From Knowledge Base |\n| POST | /v1/convai/knowledge-base/{document_id}/move | Move Entity To Folder |\n| POST | /v1/convai/knowledge-base/bulk-move | Bulk Move Entities To Folder |\n| POST | /v1/convai/tools | Add Tool |\n| GET | /v1/convai/tools | Get Tools |\n| GET | /v1/convai/tools/{tool_id} | Get Tool |\n| PATCH | /v1/convai/tools/{tool_id} | Update Tool |\n| DELETE | /v1/convai/tools/{tool_id} | Delete Tool |\n| GET | /v1/convai/tools/{tool_id}/dependent-agents | Get Dependent Agents List |\n| GET | /v1/convai/settings | Get Convai Settings |\n| PATCH | /v1/convai/settings | Update Convai Settings |\n| GET | /v1/convai/settings/dashboard | Get Convai Dashboard Settings |\n| PATCH | /v1/convai/settings/dashboard | Update Convai Dashboard Settings |\n| POST | /v1/convai/secrets | Create Convai Workspace Secret |\n| GET | /v1/convai/secrets | Get Convai Workspace Secrets |\n| DELETE | /v1/convai/secrets/{secret_id} | Delete Convai Workspace Secret |\n| PATCH | /v1/convai/secrets/{secret_id} | Update Convai Workspace Secret |\n| POST | /v1/convai/batch-calling/submit | Submit A Batch Call Request. |\n| GET | /v1/convai/batch-calling/workspace | Get All Batch Calls For A Workspace. |\n| GET | /v1/convai/batch-calling/{batch_id} | Get A Batch Call By Id. |\n| DELETE | /v1/convai/batch-calling/{batch_id} | Delete A Batch Call. |\n| POST | /v1/convai/batch-calling/{batch_id}/cancel | Cancel A Batch Call. |\n| POST | /v1/convai/batch-calling/{batch_id}/retry | Retry A Batch Call. 
|\n| POST | /v1/convai/sip-trunk/outbound-call | Handle An Outbound Call Via Sip Trunk |\n| POST | /v1/convai/mcp-servers | Create Mcp Server |\n| GET | /v1/convai/mcp-servers | List Mcp Servers |\n| GET | /v1/convai/mcp-servers/{mcp_server_id} | Get Mcp Server |\n| DELETE | /v1/convai/mcp-servers/{mcp_server_id} | Delete Mcp Server |\n| PATCH | /v1/convai/mcp-servers/{mcp_server_id} | Update Mcp Server Configuration |\n| GET | /v1/convai/mcp-servers/{mcp_server_id}/tools | List Mcp Server Tools |\n| PATCH | /v1/convai/mcp-servers/{mcp_server_id}/approval-policy | Update Mcp Server Approval Policy |\n| POST | /v1/convai/mcp-servers/{mcp_server_id}/tool-approvals | Create Mcp Server Tool Approval |\n| DELETE | /v1/convai/mcp-servers/{mcp_server_id}/tool-approvals/{tool_name} | Delete Mcp Server Tool Approval |\n| POST | /v1/convai/mcp-servers/{mcp_server_id}/tool-configs | Create Mcp Tool Configuration Override |\n| GET | /v1/convai/mcp-servers/{mcp_server_id}/tool-configs/{tool_name} | Get Mcp Tool Configuration Override |\n| PATCH | /v1/convai/mcp-servers/{mcp_server_id}/tool-configs/{tool_name} | Update Mcp Tool Configuration Override |\n| DELETE | /v1/convai/mcp-servers/{mcp_server_id}/tool-configs/{tool_name} | Delete Mcp Tool Configuration Override |\n| GET | /v1/convai/whatsapp-accounts/{phone_number_id} | Get Whatsapp Account |\n| PATCH | /v1/convai/whatsapp-accounts/{phone_number_id} | Update Whatsapp Account |\n| DELETE | /v1/convai/whatsapp-accounts/{phone_number_id} | Delete Whatsapp Account |\n| GET | /v1/convai/whatsapp-accounts | List Whatsapp Accounts |\n| POST | /v1/convai/agents/{agent_id}/branches | Create A New Branch |\n| GET | /v1/convai/agents/{agent_id}/branches | List Agent Branches |\n| GET | /v1/convai/agents/{agent_id}/branches/{branch_id} | Get Agent Branch |\n| PATCH | /v1/convai/agents/{agent_id}/branches/{branch_id} | Update Agent Branch |\n| POST | /v1/convai/agents/{agent_id}/branches/{source_branch_id}/merge | Merge A Branch Into A 
Target Branch |\n| POST | /v1/convai/agents/{agent_id}/deployments | Create Or Update Deployments |\n| POST | /v1/convai/agents/{agent_id}/drafts | Create Agent Draft |\n| DELETE | /v1/convai/agents/{agent_id}/drafts | Delete Agent Draft |\n| POST | /v1/convai/conversations/{conversation_id}/analysis/run | Run Conversation Analysis |\n| POST | /v1/convai/environment-variables | Create Environment Variable |\n| GET | /v1/convai/environment-variables | List Environment Variables |\n| GET | /v1/convai/environment-variables/{env_var_id} | Get Environment Variable |\n| PATCH | /v1/convai/environment-variables/{env_var_id} | Update Environment Variable |\n\n### Docs\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /docs | Redirect To Mintlify |\n\n## Common Questions\nMatch user requests to endpoints in references/api-spec.lap. Key patterns:\n- \"Search history?\" -> GET /v1/history\n- \"Get history details?\" -> GET /v1/history/{history_item_id}\n- \"Delete a history item?\" -> DELETE /v1/history/{history_item_id}\n- \"Get audio from a history item?\" -> GET /v1/history/{history_item_id}/audio\n- \"Create a download?\" -> POST /v1/history/download\n- \"Create a sound-generation?\" -> POST /v1/sound-generation\n- \"Create an audio-isolation?\" -> POST /v1/audio-isolation\n- \"Create a stream?\" -> POST /v1/audio-isolation/stream\n- \"Delete a sample?\" -> DELETE /v1/voices/{voice_id}/samples/{sample_id}\n- \"Create a with-timestamp?\" -> POST /v1/text-to-speech/{voice_id}/with-timestamps\n- \"Create a text-to-dialogue?\" -> POST /v1/text-to-dialogue\n- \"Create a create-preview?\" -> POST /v1/text-to-voice/create-previews\n- \"Create a text-to-voice?\" -> POST /v1/text-to-voice\n- \"Create a design?\" -> POST /v1/text-to-voice/design\n- \"Create a remix?\" -> POST /v1/text-to-voice/{voice_id}/remix\n- \"List all stream?\" -> GET /v1/text-to-voice/{generated_voice_id}/stream\n- \"List all subscription?\" -> GET /v1/user/subscription\n- \"List all user?\" -> GET 
/v1/user\n- \"List all voices?\" -> GET /v1/voices\n- \"Search voices?\" -> GET /v2/voices\n- \"List all default?\" -> GET /v1/voices/settings/default\n- \"List all settings?\" -> GET /v1/voices/{voice_id}/settings\n- \"Get voice details?\" -> GET /v1/voices/{voice_id}\n- \"Delete a voice?\" -> DELETE /v1/voices/{voice_id}\n- \"Edit voice settings?\" -> POST /v1/voices/{voice_id}/settings/edit\n- \"Add a voice?\" -> POST /v1/voices/add\n- \"Create a podcast?\" -> POST /v1/studio/podcasts\n- \"Create a video-to-music?\" -> POST /v1/music/video-to-music\n- \"Create a pronunciation-dictionary?\" -> POST /v1/studio/projects/{project_id}/pronunciation-dictionaries\n- \"List all projects?\" -> GET /v1/studio/projects\n- \"Create a project?\" -> POST /v1/studio/projects\n- \"Get project details?\" -> GET /v1/studio/projects/{project_id}\n- \"Delete a project?\" -> DELETE /v1/studio/projects/{project_id}\n- \"Create a content?\" -> POST /v1/studio/projects/{project_id}/content\n- \"Create a convert?\" -> POST /v1/studio/projects/{project_id}/convert\n- \"List all snapshots?\" -> GET /v1/studio/projects/{project_id}/snapshots\n- \"Get snapshot details?\" -> GET /v1/studio/projects/{project_id}/snapshots/{project_snapshot_id}\n- \"Create an archive?\" -> POST /v1/studio/projects/{project_id}/snapshots/{project_snapshot_id}/archive\n- \"List all chapters?\" -> GET /v1/studio/projects/{project_id}/chapters\n- \"Create a chapter?\" -> POST /v1/studio/projects/{project_id}/chapters\n- \"Get chapter details?\" -> GET /v1/studio/projects/{project_id}/chapters/{chapter_id}\n- \"Delete a chapter?\" -> DELETE /v1/studio/projects/{project_id}/chapters/{chapter_id}\n- \"List all muted-tracks?\" -> GET /v1/studio/projects/{project_id}/muted-tracks\n- \"Get resource details?\" -> GET /v1/dubbing/resource/{dubbing_id}\n- \"Create a language?\" -> POST /v1/dubbing/resource/{dubbing_id}/language\n- \"Create a segment?\" -> POST /v1/dubbing/resource/{dubbing_id}/speaker/{speaker_id}/segment\n- 
\"Partially update a segment?\" -> PATCH /v1/dubbing/resource/{dubbing_id}/segment/{segment_id}/{language}\n- \"Create a migrate-segment?\" -> POST /v1/dubbing/resource/{dubbing_id}/migrate-segments\n- \"Delete a segment?\" -> DELETE /v1/dubbing/resource/{dubbing_id}/segment/{segment_id}\n- \"Create a transcribe?\" -> POST /v1/dubbing/resource/{dubbing_id}/transcribe\n- \"Create a translate?\" -> POST /v1/dubbing/resource/{dubbing_id}/translate\n- \"Create a dub?\" -> POST /v1/dubbing/resource/{dubbing_id}/dub\n- \"Partially update a speaker?\" -> PATCH /v1/dubbing/resource/{dubbing_id}/speaker/{speaker_id}\n- \"Create a speaker?\" -> POST /v1/dubbing/resource/{dubbing_id}/speaker\n- \"List all similar-voices?\" -> GET /v1/dubbing/resource/{dubbing_id}/speaker/{speaker_id}/similar-voices\n- \"List all dubbing?\" -> GET /v1/dubbing\n- \"Create a dubbing?\" -> POST /v1/dubbing\n- \"Get dubbing details?\" -> GET /v1/dubbing/{dubbing_id}\n- \"Delete a dubbing?\" -> DELETE /v1/dubbing/{dubbing_id}\n- \"Get audio details?\" -> GET /v1/dubbing/{dubbing_id}/audio/{language_code}\n- \"Get transcript details?\" -> GET /v1/dubbing/{dubbing_id}/transcript/{language_code}\n- \"Get format details?\" -> GET /v1/dubbing/{dubbing_id}/transcripts/{language_code}/format/{format_type}\n- \"List all models?\" -> GET /v1/models\n- \"Create an audio-native project?\" -> POST /v1/audio-native\n- \"Search shared-voices?\" -> GET /v1/shared-voices\n- \"Create a similar-voice?\" -> POST /v1/similar-voices\n- \"List all character-stats?\" -> GET /v1/usage/character-stats\n- \"Add a dictionary from file?\" -> POST /v1/pronunciation-dictionaries/add-from-file\n- \"Add a dictionary from rules?\" -> POST /v1/pronunciation-dictionaries/add-from-rules\n- \"Partially update a pronunciation-dictionary?\" -> PATCH /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}\n- \"Get pronunciation-dictionary details?\" -> GET /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}\n- \"Create a set-rule?\" -> POST 
/v1/pronunciation-dictionaries/{pronunciation_dictionary_id}/set-rules\n- \"Add rules to a dictionary?\" -> POST /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}/add-rules\n- \"Remove rules from a dictionary?\" -> POST /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}/remove-rules\n- \"List all download?\" -> GET /v1/pronunciation-dictionaries/{dictionary_id}/{version_id}/download\n- \"List all pronunciation-dictionaries?\" -> GET /v1/pronunciation-dictionaries\n- \"List all api-keys?\" -> GET /v1/service-accounts/{service_account_user_id}/api-keys\n- \"Create an api-key?\" -> POST /v1/service-accounts/{service_account_user_id}/api-keys\n- \"Partially update an api-key?\" -> PATCH /v1/service-accounts/{service_account_user_id}/api-keys/{api_key_id}\n- \"Delete an api-key?\" -> DELETE /v1/service-accounts/{service_account_user_id}/api-keys/{api_key_id}\n- \"Create an auth-connection?\" -> POST /v1/workspace/auth-connections\n- \"List all auth-connections?\" -> GET /v1/workspace/auth-connections\n- \"Delete an auth-connection?\" -> DELETE /v1/workspace/auth-connections/{auth_connection_id}\n- \"List all service-accounts?\" -> GET /v1/service-accounts\n- \"List all groups?\" -> GET /v1/workspace/groups\n- \"List all search?\" -> GET /v1/workspace/groups/search\n- \"Remove a group member?\" -> POST /v1/workspace/groups/{group_id}/members/remove\n- \"Create a member?\" -> POST /v1/workspace/groups/{group_id}/members\n- \"Invite multiple users?\" -> POST /v1/workspace/invites/add-bulk\n- \"Create a share?\" -> POST /v1/workspace/resources/{resource_id}/share\n- \"Unshare a resource?\" -> POST /v1/workspace/resources/{resource_id}/unshare\n- \"List all webhooks?\" -> GET /v1/workspace/webhooks\n- \"Create a webhook?\" -> POST /v1/workspace/webhooks\n- \"Partially update a webhook?\" -> PATCH /v1/workspace/webhooks/{webhook_id}\n- \"Delete a webhook?\" -> DELETE /v1/workspace/webhooks/{webhook_id}\n- \"Create a speech-to-text?\" -> POST /v1/speech-to-text\n- \"Delete a transcript?\" 
-> DELETE /v1/speech-to-text/transcripts/{transcription_id}\n- \"Create a forced-alignment?\" -> POST /v1/forced-alignment\n- \"List all get-signed-url?\" -> GET /v1/convai/conversation/get-signed-url\n- \"List all get_signed_url?\" -> GET /v1/convai/conversation/get_signed_url\n- \"List all token?\" -> GET /v1/convai/conversation/token\n- \"Create an outbound-call?\" -> POST /v1/convai/twilio/outbound-call\n- \"Create a register-call?\" -> POST /v1/convai/twilio/register-call\n- \"Create an outbound-message?\" -> POST /v1/convai/whatsapp/outbound-message\n- \"Create an agent?\" -> POST /v1/convai/agents/create\n- \"List all summaries?\" -> GET /v1/convai/agents/summaries\n- \"Get agent details?\" -> GET /v1/convai/agents/{agent_id}\n- \"Partially update an agent?\" -> PATCH /v1/convai/agents/{agent_id}\n- \"Delete an agent?\" -> DELETE /v1/convai/agents/{agent_id}\n- \"List all widget?\" -> GET /v1/convai/agents/{agent_id}/widget\n- \"List all link?\" -> GET /v1/convai/agents/{agent_id}/link\n- \"Create an avatar?\" -> POST /v1/convai/agents/{agent_id}/avatar\n- \"Search agents?\" -> GET /v1/convai/agents\n- \"List all size?\" -> GET /v1/convai/agent/{agent_id}/knowledge-base/size\n- \"Create a calculate?\" -> POST /v1/convai/agent/{agent_id}/llm-usage/calculate\n- \"Create a duplicate?\" -> POST /v1/convai/agents/{agent_id}/duplicate\n- \"Create a simulate-conversation?\" -> POST /v1/convai/agents/{agent_id}/simulate-conversation\n- \"Create a folder?\" -> POST /v1/convai/agent-testing/folders\n- \"Get folder details?\" -> GET /v1/convai/agent-testing/folders/{folder_id}\n- \"Partially update a folder?\" -> PATCH /v1/convai/agent-testing/folders/{folder_id}\n- \"Delete a folder?\" -> DELETE /v1/convai/agent-testing/folders/{folder_id}\n- \"Create a bulk-move?\" -> POST /v1/convai/agent-testing/bulk-move\n- \"Get agent-testing details?\" -> GET /v1/convai/agent-testing/{test_id}\n- \"Update an agent-testing?\" -> PUT /v1/convai/agent-testing/{test_id}\n- \"Delete an 
agent-testing?\" -> DELETE /v1/convai/agent-testing/{test_id}\n- \"Create a summary?\" -> POST /v1/convai/agent-testing/summaries\n- \"Search agent-testing?\" -> GET /v1/convai/agent-testing\n- \"List all test-invocations?\" -> GET /v1/convai/test-invocations\n- \"Create a run-test?\" -> POST /v1/convai/agents/{agent_id}/run-tests\n- \"Get test-invocation details?\" -> GET /v1/convai/test-invocations/{test_invocation_id}\n- \"Create a resubmit?\" -> POST /v1/convai/test-invocations/{test_invocation_id}/resubmit\n- \"Search conversations?\" -> GET /v1/convai/conversations\n- \"Search users?\" -> GET /v1/convai/users\n- \"Get conversation details?\" -> GET /v1/convai/conversations/{conversation_id}\n- \"Delete a conversation?\" -> DELETE /v1/convai/conversations/{conversation_id}\n- \"Create a feedback?\" -> POST /v1/convai/conversations/{conversation_id}/feedback\n- \"List all text-search?\" -> GET /v1/convai/conversations/messages/text-search\n- \"List all smart-search?\" -> GET /v1/convai/conversations/messages/smart-search\n- \"Create a phone-number?\" -> POST /v1/convai/phone-numbers\n- \"List all phone-numbers?\" -> GET /v1/convai/phone-numbers\n- \"Get phone-number details?\" -> GET /v1/convai/phone-numbers/{phone_number_id}\n- \"Delete a phone-number?\" -> DELETE /v1/convai/phone-numbers/{phone_number_id}\n- \"Partially update a phone-number?\" -> PATCH /v1/convai/phone-numbers/{phone_number_id}\n- \"List all list?\" -> GET /v1/convai/llm/list\n- \"Create a file?\" -> POST /v1/convai/conversations/{conversation_id}/files\n- \"Delete a file?\" -> DELETE /v1/convai/conversations/{conversation_id}/files/{file_id}\n- \"List all live-count?\" -> GET /v1/convai/analytics/live-count\n- \"Create a knowledge-base?\" -> POST /v1/convai/knowledge-base\n- \"Search knowledge-base?\" -> GET /v1/convai/knowledge-base\n- \"Create a url?\" -> POST /v1/convai/knowledge-base/url\n- \"Create a text?\" -> POST /v1/convai/knowledge-base/text\n- \"Partially update a 
knowledge-base?\" -> PATCH /v1/convai/knowledge-base/{documentation_id}\n- \"Get knowledge-base details?\" -> GET /v1/convai/knowledge-base/{documentation_id}\n- \"Delete a knowledge-base?\" -> DELETE /v1/convai/knowledge-base/{documentation_id}\n- \"Create a rag-index?\" -> POST /v1/convai/knowledge-base/rag-index\n- \"List all rag-index?\" -> GET /v1/convai/knowledge-base/rag-index\n- \"Create a refresh?\" -> POST /v1/convai/knowledge-base/{documentation_id}/refresh\n- \"Delete a rag-index?\" -> DELETE /v1/convai/knowledge-base/{documentation_id}/rag-index/{rag_index_id}\n- \"List all dependent-agents?\" -> GET /v1/convai/knowledge-base/{documentation_id}/dependent-agents\n- \"List all content?\" -> GET /v1/convai/knowledge-base/{documentation_id}/content\n- \"List all source-file-url?\" -> GET /v1/convai/knowledge-base/{documentation_id}/source-file-url\n- \"Get chunk details?\" -> GET /v1/convai/knowledge-base/{documentation_id}/chunk/{chunk_id}\n- \"Create a move?\" -> POST /v1/convai/knowledge-base/{document_id}/move\n- \"Create a tool?\" -> POST /v1/convai/tools\n- \"Search tools?\" -> GET /v1/convai/tools\n- \"Get tool details?\" -> GET /v1/convai/tools/{tool_id}\n- \"Partially update a tool?\" -> PATCH /v1/convai/tools/{tool_id}\n- \"Delete a tool?\" -> DELETE /v1/convai/tools/{tool_id}\n- \"List all dashboard?\" -> GET /v1/convai/settings/dashboard\n- \"Create a secret?\" -> POST /v1/convai/secrets\n- \"List all secrets?\" -> GET /v1/convai/secrets\n- \"Delete a secret?\" -> DELETE /v1/convai/secrets/{secret_id}\n- \"Partially update a secret?\" -> PATCH /v1/convai/secrets/{secret_id}\n- \"Create a submit?\" -> POST /v1/convai/batch-calling/submit\n- \"List all workspace?\" -> GET /v1/convai/batch-calling/workspace\n- \"Get batch-calling details?\" -> GET /v1/convai/batch-calling/{batch_id}\n- \"Delete a batch-calling?\" -> DELETE /v1/convai/batch-calling/{batch_id}\n- \"Create a cancel?\" -> POST /v1/convai/batch-calling/{batch_id}/cancel\n- \"Create a 
retry?\" -> POST /v1/convai/batch-calling/{batch_id}/retry\n- \"Create a mcp-server?\" -> POST /v1/convai/mcp-servers\n- \"List all mcp-servers?\" -> GET /v1/convai/mcp-servers\n- \"Get mcp-server details?\" -> GET /v1/convai/mcp-servers/{mcp_server_id}\n- \"Delete a mcp-server?\" -> DELETE /v1/convai/mcp-servers/{mcp_server_id}\n- \"Partially update a mcp-server?\" -> PATCH /v1/convai/mcp-servers/{mcp_server_id}\n- \"List all tools?\" -> GET /v1/convai/mcp-servers/{mcp_server_id}/tools\n- \"Create a tool-approval?\" -> POST /v1/convai/mcp-servers/{mcp_server_id}/tool-approvals\n- \"Delete a tool-approval?\" -> DELETE /v1/convai/mcp-servers/{mcp_server_id}/tool-approvals/{tool_name}\n- \"Create a tool-config?\" -> POST /v1/convai/mcp-servers/{mcp_server_id}/tool-configs\n- \"Get tool-config details?\" -> GET /v1/convai/mcp-servers/{mcp_server_id}/tool-configs/{tool_name}\n- \"Partially update a tool-config?\" -> PATCH /v1/convai/mcp-servers/{mcp_server_id}/tool-configs/{tool_name}\n- \"Delete a tool-config?\" -> DELETE /v1/convai/mcp-servers/{mcp_server_id}/tool-configs/{tool_name}\n- \"Get whatsapp-account details?\" -> GET /v1/convai/whatsapp-accounts/{phone_number_id}\n- \"Partially update a whatsapp-account?\" -> PATCH /v1/convai/whatsapp-accounts/{phone_number_id}\n- \"Delete a whatsapp-account?\" -> DELETE /v1/convai/whatsapp-accounts/{phone_number_id}\n- \"List all whatsapp-accounts?\" -> GET /v1/convai/whatsapp-accounts\n- \"Create a branche?\" -> POST /v1/convai/agents/{agent_id}/branches\n- \"List all branches?\" -> GET /v1/convai/agents/{agent_id}/branches\n- \"Get branche details?\" -> GET /v1/convai/agents/{agent_id}/branches/{branch_id}\n- \"Partially update a branche?\" -> PATCH /v1/convai/agents/{agent_id}/branches/{branch_id}\n- \"Create a merge?\" -> POST /v1/convai/agents/{agent_id}/branches/{source_branch_id}/merge\n- \"Create a deployment?\" -> POST /v1/convai/agents/{agent_id}/deployments\n- \"Create a draft?\" -> POST 
/v1/convai/agents/{agent_id}/drafts\n- \"Run conversation analysis?\" -> POST /v1/convai/conversations/{conversation_id}/analysis/run\n- \"Create an environment-variable?\" -> POST /v1/convai/environment-variables\n- \"List all environment-variables?\" -> GET /v1/convai/environment-variables\n- \"Get environment-variable details?\" -> GET /v1/convai/environment-variables/{env_var_id}\n- \"Partially update an environment-variable?\" -> PATCH /v1/convai/environment-variables/{env_var_id}\n- \"Create a plan?\" -> POST /v1/music/plan\n- \"Create music?\" -> POST /v1/music\n- \"Create detailed music?\" -> POST /v1/music/detailed\n- \"Create an upload?\" -> POST /v1/music/upload\n- \"Create a stem-separation?\" -> POST /v1/music/stem-separation\n- \"Create a PVC voice?\" -> POST /v1/voices/pvc\n- \"Add samples to a PVC voice?\" -> POST /v1/voices/pvc/{voice_id}/samples\n- \"Retrieve a voice sample waveform?\" -> GET /v1/voices/pvc/{voice_id}/samples/{sample_id}/waveform\n- \"Retrieve speaker separation status?\" -> GET /v1/voices/pvc/{voice_id}/samples/{sample_id}/speakers\n- \"Start speaker separation?\" -> POST /v1/voices/pvc/{voice_id}/samples/{sample_id}/separate-speakers\n- \"Get PVC voice captcha?\" -> GET /v1/voices/pvc/{voice_id}/captcha\n- \"Verify PVC voice captcha?\" -> POST /v1/voices/pvc/{voice_id}/captcha\n- \"Run PVC training?\" -> POST /v1/voices/pvc/{voice_id}/train\n- \"Request manual verification?\" -> POST /v1/voices/pvc/{voice_id}/verification\n- \"List all docs?\" -> GET /docs\n- \"How to authenticate?\" -> See Auth section above\n\n## Response Tips\n- Check response schemas in references/api-spec.lap for field details\n- Paginated endpoints accept limit/offset or cursor parameters\n- Create/update endpoints return the modified resource on success\n- Error responses include status codes and descriptions in the spec\n\n## References\n- Full spec: See references/api-spec.lap for complete endpoint details, parameter tables, and response schemas\n\n> Generated from the official API spec by 
[LAP](https://lap.sh)\n","references/api-spec.lap":"@lap v0.3\n# Machine-readable API spec. Each @endpoint block is one API call.\n@api ElevenLabs API Documentation\n@version 1.0\n@auth ApiKey xi-api-key in header\n@common_fields {xi-api-key: any # Your API key. This is required by most endpoints to access our API programmatically. You can view your xi-api-key using the 'Profile' tab on the website.}\n@endpoints 268\n@hint download_for_search\n@toc history(5), sound-generation(1), audio-isolation(2), voices(26), text-to-speech(4), text-to-dialogue(4), speech-to-speech(2), text-to-voice(5), user(2), studio(23), music(7), dubbing(20), models(1), audio-native(4), shared-voices(1), similar-voices(1), usage(1), pronunciation-dictionaries(9), service-accounts(5), workspace(18), speech-to-text(3), single-use-token(1), forced-alignment(1), convai(121), docs(1)\n\n@group history\n@endpoint GET /v1/history\n@desc List Generated Items\n@optional {page_size: int=100 # How many history items to return at maximum. Cannot exceed 1000, defaults to 100., start_after_history_item_id: any # After which ID to start fetching, use this parameter to paginate across a large collection of history items. 
In case this parameter is not provided, history items will be fetched starting from the most recently created one, ordered descending by their creation date., voice_id: any # Voice ID to be filtered for, you can use GET https://api.elevenlabs.io/v1/voices to receive a list of voices and their IDs., model_id: any # Model ID to filter history items by., date_before_unix: any # Unix timestamp to filter history items before this date (exclusive)., date_after_unix: any # Unix timestamp to filter history items after this date (inclusive)., sort_direction: any=desc # Sort direction for the results., search: any # Search term used for filtering., source: any # Source of the generated history item.}\n@returns(200) {history: [map], last_history_item_id: any, has_more: bool, scanned_until: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/history/{history_item_id}\n@desc Get History Item\n@required {history_item_id: str # History item ID to be used, you can use GET https://api.elevenlabs.io/v1/history to receive a list of history items and their IDs.}\n@returns(200) {history_item_id: str, request_id: any, voice_id: any, model_id: any, voice_name: any, voice_category: any, text: any, date_unix: int, character_count_change_from: int, character_count_change_to: int, content_type: str, state: str, settings: any, feedback: any, share_link_id: any, source: any, alignments: any, dialogue: any, output_format: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/history/{history_item_id}\n@desc Delete History Item\n@required {history_item_id: str # History item ID to be used, you can use GET https://api.elevenlabs.io/v1/history to receive a list of history items and their IDs.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/history/{history_item_id}/audio\n@desc Get Audio From History Item\n@required {history_item_id: str # History item ID to be used, you can use GET 
https://api.elevenlabs.io/v1/history to receive a list of history items and their IDs.}\n@returns(200) The audio file of the history item.\n@errors {422: Validation Error}\n\n@endpoint POST /v1/history/download\n@desc Download History Items\n@required {history_item_ids: [str] # A list of history items to download, you can get IDs of history items and other metadata using the GET https://api.elevenlabs.io/v1/history endpoint.}\n@optional {output_format: any # Output format to transcode the audio file, can be wav or default.}\n@returns(200) The requested audio file, or a zip file containing multiple audio files when multiple history items are requested.\n@errors {400: Invalid request, 422: Validation Error}\n\n@endgroup\n\n@group sound-generation\n@endpoint POST /v1/sound-generation\n@desc Sound Generation\n@required {text: str # The text that will get converted into a sound effect.}\n@optional {output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32 kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., loop: bool=false # Whether to create a sound effect that loops smoothly. Only available for the 'eleven_text_to_sound_v2' model., duration_seconds: any # The duration of the sound which will be generated in seconds. Must be at least 0.5 and at most 30. If set to None we will guess the optimal duration using the prompt. 
Defaults to None., prompt_influence: any=0.3 # A higher prompt influence makes your generation follow the prompt more closely while also making generations less variable. Must be a value between 0 and 1. Defaults to 0.3., model_id: str=eleven_text_to_sound_v2 # The model ID to use for the sound generation.}\n@returns(200) The generated sound effect as an MP3 file\n@errors {422: Validation Error}\n\n@endgroup\n\n@group audio-isolation\n@endpoint POST /v1/audio-isolation\n@desc Audio Isolation\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/audio-isolation/stream\n@desc Audio Isolation Stream\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group voices\n@endpoint DELETE /v1/voices/{voice_id}/samples/{sample_id}\n@desc Delete Sample\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices., sample_id: str # Sample ID to be used, you can use GET https://api.elevenlabs.io/v1/voices/{voice_id} to list all the available samples for a voice.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/voices/{voice_id}/samples/{sample_id}/audio\n@desc Get Audio From Sample\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices., sample_id: str # Sample ID to be used, you can use GET https://api.elevenlabs.io/v1/voices/{voice_id} to list all the available samples for a voice.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group text-to-speech\n@endpoint POST /v1/text-to-speech/{voice_id}\n@desc Text To Speech\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices., text: str # The text that will get converted into speech.}\n@optional {enable_logging: bool=true # When enable_logging is set to 
false, zero retention mode will be used for the request. This will mean history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers., optimize_streaming_latency: any # You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values: 0 - default mode (no latency optimizations) 1 - normal latency optimizations (about 50% of possible latency improvement of option 3) 2 - strong latency optimizations (about 75% of possible latency improvement of option 3) 3 - max latency optimizations 4 - max latency optimizations, but also with text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates). Defaults to None., output_format: str(alaw_8000/mp3_22050_32/mp3_24000_48/mp3_44100_128/mp3_44100_192/mp3_44100_32/mp3_44100_64/mp3_44100_96/opus_48000_128/opus_48000_192/opus_48000_32/opus_48000_64/opus_48000_96/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/pcm_8000/ulaw_8000/wav_16000/wav_22050/wav_24000/wav_32000/wav_44100/wav_48000/wav_8000)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32 kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM and WAV formats with 44.1kHz sample rate require you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., model_id: str=eleven_multilingual_v2 # Identifier of the model that will be used, you can query them using GET /v1/models. The model needs to have support for text to speech, you can check this using the can_do_text_to_speech property., language_code: any # Language code (ISO 639-1) used to enforce a language for the model and text normalization. 
If the model does not support the provided language code, an error will be returned., voice_settings: any # Voice settings overriding stored settings for the given voice. They are applied only on the given request., pronunciation_dictionary_locators: any # A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request., seed: any # If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295., previous_text: any # The text that came before the text of the current request. Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation., next_text: any # The text that comes after the text of the current request. Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation., previous_request_ids: any # A list of request_id of the samples that were generated before this generation. Can be used to improve the speech's continuity when splitting up a large task into multiple requests. The results will be best when the same model is used across the generations. In case both previous_text and previous_request_ids are sent, previous_text will be ignored. A maximum of 3 request_ids can be sent., next_request_ids: any # A list of request_id of the samples that come after this generation. next_request_ids is especially useful for maintaining the speech's continuity when regenerating a sample that has had some audio quality issues. 
For example, if you have generated 3 speech clips, and you want to improve clip 2, passing the request id of clip 3 as a next_request_id (and that of clip 1 as a previous_request_id) will help maintain natural flow in the combined speech. The results will be best when the same model is used across the generations. In case both next_text and next_request_ids are sent, next_text will be ignored. A maximum of 3 request_ids can be sent., use_pvc_as_ivc: bool=false # If true, we won't use the PVC version of the voice for the generation but the IVC version. This is a temporary workaround for higher latency in PVC versions., apply_text_normalization: str(auto/on/off)=auto # This parameter controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization will always be applied, while with 'off', it will be skipped., apply_language_text_normalization: bool=false # This parameter controls language text normalization. This helps with proper pronunciation of text in some supported languages. WARNING: This parameter can heavily increase the latency of the request. Currently only supported for Japanese.}\n@returns(200) The generated audio file\n@errors {422: Validation Error}\n\n@endpoint POST /v1/text-to-speech/{voice_id}/with-timestamps\n@desc Text To Speech With Timestamps\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices., text: str # The text that will get converted into speech.}\n@optional {enable_logging: bool=true # When enable_logging is set to false, zero retention mode will be used for the request. This will mean history features are unavailable for this request, including request stitching. 
Zero retention mode may only be used by enterprise customers., optimize_streaming_latency: any # You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values: 0 - default mode (no latency optimizations) 1 - normal latency optimizations (about 50% of possible latency improvement of option 3) 2 - strong latency optimizations (about 75% of possible latency improvement of option 3) 3 - max latency optimizations 4 - max latency optimizations, but also with text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates). Defaults to None., output_format: str(alaw_8000/mp3_22050_32/mp3_24000_48/mp3_44100_128/mp3_44100_192/mp3_44100_32/mp3_44100_64/mp3_44100_96/opus_48000_128/opus_48000_192/opus_48000_32/opus_48000_64/opus_48000_96/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/pcm_8000/ulaw_8000/wav_16000/wav_22050/wav_24000/wav_32000/wav_44100/wav_48000/wav_8000)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32 kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM and WAV formats with 44.1kHz sample rate require you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., model_id: str=eleven_multilingual_v2 # Identifier of the model that will be used, you can query them using GET /v1/models. The model needs to have support for text to speech, you can check this using the can_do_text_to_speech property., language_code: any # Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error will be returned., voice_settings: any # Voice settings overriding stored settings for the given voice. 
They are applied only on the given request., pronunciation_dictionary_locators: [map{pronunciation_dictionary_id!: str, version_id: any}] # A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request., seed: any # If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295., previous_text: any # The text that came before the text of the current request. Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation., next_text: any # The text that comes after the text of the current request. Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation., previous_request_ids: [str] # A list of request_id of the samples that were generated before this generation. Can be used to improve the speech's continuity when splitting up a large task into multiple requests. The results will be best when the same model is used across the generations. In case both previous_text and previous_request_ids are sent, previous_text will be ignored. A maximum of 3 request_ids can be sent., next_request_ids: [str] # A list of request_id of the samples that come after this generation. next_request_ids is especially useful for maintaining the speech's continuity when regenerating a sample that has had some audio quality issues. For example, if you have generated 3 speech clips, and you want to improve clip 2, passing the request id of clip 3 as a next_request_id (and that of clip 1 as a previous_request_id) will help maintain natural flow in the combined speech. 
The results will be best when the same model is used across the generations. In case both next_text and next_request_ids are sent, next_text will be ignored. A maximum of 3 request_ids can be sent., use_pvc_as_ivc: bool=false # If true, we won't use the PVC version of the voice for the generation but the IVC version. This is a temporary workaround for higher latency in PVC versions., apply_text_normalization: str(auto/on/off)=auto # This parameter controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization will always be applied, while with 'off', it will be skipped., apply_language_text_normalization: bool=false # This parameter controls language text normalization. This helps with proper pronunciation of text in some supported languages. WARNING: This parameter can heavily increase the latency of the request. Currently only supported for Japanese.}\n@returns(200) {audio_base64: str, alignment: any, normalized_alignment: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/text-to-speech/{voice_id}/stream\n@desc Text To Speech Streaming\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices., text: str # The text that will get converted into speech.}\n@optional {enable_logging: bool=true # When enable_logging is set to false, zero retention mode will be used for the request. This will mean history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers., optimize_streaming_latency: any # You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. 
Possible values: 0 - default mode (no latency optimizations) 1 - normal latency optimizations (about 50% of possible latency improvement of option 3) 2 - strong latency optimizations (about 75% of possible latency improvement of option 3) 3 - max latency optimizations 4 - max latency optimizations, but also with text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates). Defaults to None., output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32 kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., model_id: str=eleven_multilingual_v2 # Identifier of the model that will be used, you can query them using GET /v1/models. The model needs to have support for text to speech, you can check this using the can_do_text_to_speech property., language_code: any # Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error will be returned., voice_settings: any # Voice settings overriding stored settings for the given voice. They are applied only on the given request., pronunciation_dictionary_locators: any # A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. 
You may have up to 3 locators per request., seed: any # If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295., previous_text: any # The text that came before the text of the current request. Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation., next_text: any # The text that comes after the text of the current request. Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation., previous_request_ids: any # A list of request_id of the samples that were generated before this generation. Can be used to improve the speech's continuity when splitting up a large task into multiple requests. The results will be best when the same model is used across the generations. In case both previous_text and previous_request_ids are sent, previous_text will be ignored. A maximum of 3 request_ids can be sent., next_request_ids: any # A list of request_id of the samples that come after this generation. next_request_ids is especially useful for maintaining the speech's continuity when regenerating a sample that has had some audio quality issues. For example, if you have generated 3 speech clips, and you want to improve clip 2, passing the request id of clip 3 as a next_request_id (and that of clip 1 as a previous_request_id) will help maintain natural flow in the combined speech. The results will be best when the same model is used across the generations. In case both next_text and next_request_ids are sent, next_text will be ignored. A maximum of 3 request_ids can be sent., use_pvc_as_ivc: bool=false # If true, we won't use the PVC version of the voice for the generation but the IVC version. 
This is a temporary workaround for higher latency in PVC versions., apply_text_normalization: str(auto/on/off)=auto # This parameter controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization will always be applied, while with 'off', it will be skipped., apply_language_text_normalization: bool=false # This parameter controls language text normalization. This helps with proper pronunciation of text in some supported languages. WARNING: This parameter can heavily increase the latency of the request. Currently only supported for Japanese.}\n@returns(200) Streaming audio data\n@errors {422: Validation Error}\n\n@endpoint POST /v1/text-to-speech/{voice_id}/stream/with-timestamps\n@desc Text To Speech Streaming With Timestamps\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices., text: str # The text that will get converted into speech.}\n@optional {enable_logging: bool=true # When enable_logging is set to false, zero retention mode will be used for the request. This will mean history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers., optimize_streaming_latency: any # You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values: 0 - default mode (no latency optimizations) 1 - normal latency optimizations (about 50% of possible latency improvement of option 3) 2 - strong latency optimizations (about 75% of possible latency improvement of option 3) 3 - max latency optimizations 4 - max latency optimizations, but also with text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates). 
Defaults to None., output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., model_id: str=eleven_multilingual_v2 # Identifier of the model that will be used, you can query them using GET /v1/models. The model needs to have support for text to speech, you can check this using the can_do_text_to_speech property., language_code: any # Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error will be returned., voice_settings: any # Voice settings overriding stored settings for the given voice. They are applied only on the given request., pronunciation_dictionary_locators: any # A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request., seed: any # If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295., previous_text: any # The text that came before the text of the current request.
Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation., next_text: any # The text that comes after the text of the current request. Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation., previous_request_ids: any # A list of request_id of the samples that were generated before this generation. Can be used to improve the speech's continuity when splitting up a large task into multiple requests. The results will be best when the same model is used across the generations. If both previous_text and previous_request_ids are sent, previous_text will be ignored. A maximum of 3 request_ids can be sent., next_request_ids: any # A list of request_id of the samples that come after this generation. next_request_ids is especially useful for maintaining the speech's continuity when regenerating a sample that has had some audio quality issues. For example, if you have generated 3 speech clips, and you want to improve clip 2, passing the request id of clip 3 as a next_request_id (and that of clip 1 as a previous_request_id) will help maintain natural flow in the combined speech. The results will be best when the same model is used across the generations. If both next_text and next_request_ids are sent, next_text will be ignored. A maximum of 3 request_ids can be sent., use_pvc_as_ivc: bool=false # If true, we won't use the PVC version of the voice for the generation but the IVC version. This is a temporary workaround for higher latency in PVC versions., apply_text_normalization: str(auto/on/off)=auto # This parameter controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers).
With 'on', text normalization will always be applied, while with 'off', it will be skipped., apply_language_text_normalization: bool=false # This parameter controls language text normalization. This helps with proper pronunciation of text in some supported languages. WARNING: This parameter can heavily increase the latency of the request. Currently only supported for Japanese.}\n@returns(200) {audio_base64: str, alignment: any, normalized_alignment: any} # Stream of transcription chunks\n@errors {422: Validation Error}\n\n@endgroup\n\n@group text-to-dialogue\n@endpoint POST /v1/text-to-dialogue\n@desc Text To Dialogue (Multi-Voice)\n@required {inputs: [map{text!: str, voice_id!: str}] # A list of dialogue inputs, each containing text and a voice ID which will be converted into speech. The maximum number of unique voice IDs is 10.}\n@optional {output_format: any(alaw_8000/mp3_22050_32/mp3_24000_48/mp3_44100_128/mp3_44100_192/mp3_44100_32/mp3_44100_64/mp3_44100_96/opus_48000_128/opus_48000_192/opus_48000_32/opus_48000_64/opus_48000_96/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/pcm_8000/ulaw_8000/wav_16000/wav_22050/wav_24000/wav_32000/wav_44100/wav_48000/wav_8000)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM and WAV formats with 44.1kHz sample rate require you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., model_id: str=eleven_v3 # Identifier of the model that will be used, you can query them using GET /v1/models.
The model needs to have support for text to speech, you can check this using the can_do_text_to_speech property., language_code: any # Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error will be returned., settings: any # Settings controlling the dialogue generation., pronunciation_dictionary_locators: any # A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request., seed: any # If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295., apply_text_normalization: str(auto/on/off)=auto # This parameter controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization will always be applied, while with 'off', it will be skipped.}\n@returns(200) The generated audio file\n@errors {422: Validation Error}\n\n@endpoint POST /v1/text-to-dialogue/stream\n@desc Text To Dialogue (Multi-Voice) Streaming\n@required {inputs: [map{text!: str, voice_id!: str}] # A list of dialogue inputs, each containing text and a voice ID which will be converted into speech. The maximum number of unique voice IDs is 10.}\n@optional {output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate.
So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., model_id: str=eleven_v3 # Identifier of the model that will be used, you can query them using GET /v1/models. The model needs to have support for text to speech, you can check this using the can_do_text_to_speech property., language_code: any # Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error will be returned., settings: any # Settings controlling the dialogue generation., pronunciation_dictionary_locators: any # A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request., seed: any # If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295., apply_text_normalization: str(auto/on/off)=auto # This parameter controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization will always be applied, while with 'off', it will be skipped.}\n@returns(200) Streaming audio data\n@errors {422: Validation Error}\n\n@endpoint POST /v1/text-to-dialogue/stream/with-timestamps\n@desc Text To Dialogue Streaming With Timestamps\n@required {inputs: [map{text!: str, voice_id!: str}] # A list of dialogue inputs, each containing text and a voice ID which will be converted into speech.
The maximum number of unique voice IDs is 10.}\n@optional {output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., model_id: str=eleven_v3 # Identifier of the model that will be used, you can query them using GET /v1/models. The model needs to have support for text to speech, you can check this using the can_do_text_to_speech property., language_code: any # Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error will be returned., settings: any # Settings controlling the dialogue generation., pronunciation_dictionary_locators: any # A list of pronunciation dictionary locators (id, version_id) to be applied to the text. They will be applied in order. You may have up to 3 locators per request., seed: any # If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295., apply_text_normalization: str(auto/on/off)=auto # This parameter controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers).
With 'on', text normalization will always be applied, while with 'off', it will be skipped.}\n@returns(200) {audio_base64: str, alignment: any, normalized_alignment: any, voice_segments: [map]} # Stream of transcription chunks\n@errors {422: Validation Error}\n\n@endpoint POST /v1/text-to-dialogue/with-timestamps\n@desc Text To Dialogue With Timestamps\n@required {inputs: [map{text!: str, voice_id!: str}] # A list of dialogue inputs, each containing text and a voice ID which will be converted into speech. The maximum number of unique voice IDs is 10.}\n@optional {output_format: any(alaw_8000/mp3_22050_32/mp3_24000_48/mp3_44100_128/mp3_44100_192/mp3_44100_32/mp3_44100_64/mp3_44100_96/opus_48000_128/opus_48000_192/opus_48000_32/opus_48000_64/opus_48000_96/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/pcm_8000/ulaw_8000/wav_16000/wav_22050/wav_24000/wav_32000/wav_44100/wav_48000/wav_8000)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM and WAV formats with 44.1kHz sample rate require you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., model_id: str=eleven_v3 # Identifier of the model that will be used, you can query them using GET /v1/models. The model needs to have support for text to speech, you can check this using the can_do_text_to_speech property., language_code: any # Language code (ISO 639-1) used to enforce a language for the model and text normalization. If the model does not support the provided language code, an error will be returned., settings: any # Settings controlling the dialogue generation., pronunciation_dictionary_locators: any # A list of pronunciation dictionary locators (id, version_id) to be applied to the text.
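The dialogue `inputs` array used by the text-to-dialogue endpoints pairs each line of text with a voice_id, with at most 10 unique voice IDs per request. A small hypothetical helper can build and validate that payload (the voice ids below are placeholders, not real ids):

```python
def build_dialogue_inputs(turns):
    """Turn (voice_id, text) pairs into the `inputs` array for
    POST /v1/text-to-dialogue, enforcing the 10-unique-voice limit."""
    inputs = [{"text": text, "voice_id": vid} for vid, text in turns]
    if len({item["voice_id"] for item in inputs}) > 10:
        raise ValueError("text-to-dialogue allows at most 10 unique voice ids")
    return inputs

# Example: a two-speaker exchange (VOICE_A / VOICE_B are placeholder ids).
payload = {
    "inputs": build_dialogue_inputs([
        ("VOICE_A", "Did you finish the report?"),
        ("VOICE_B", "Almost. I'll send it tonight."),
        ("VOICE_A", "Great, thanks."),
    ]),
    "model_id": "eleven_v3",
    "output_format": "mp3_44100_128",
}
```

The same `inputs` shape is shared by the plain, streaming, and with-timestamps dialogue endpoints; only the response shape differs.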
They will be applied in order. You may have up to 3 locators per request., seed: any # If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed. Must be an integer between 0 and 4294967295., apply_text_normalization: str(auto/on/off)=auto # This parameter controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization will always be applied, while with 'off', it will be skipped.}\n@returns(200) {audio_base64: str, alignment: any, normalized_alignment: any, voice_segments: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group speech-to-speech\n@endpoint POST /v1/speech-to-speech/{voice_id}\n@desc Speech To Speech\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@optional {enable_logging: bool=true # When enable_logging is set to false, zero retention mode will be used for the request. This means history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers., optimize_streaming_latency: any # You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values: 0 - default mode (no latency optimizations); 1 - normal latency optimizations (about 50% of the possible latency improvement of option 3); 2 - strong latency optimizations (about 75% of the possible latency improvement of option 3); 3 - max latency optimizations; 4 - max latency optimizations, but also with the text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates).
Defaults to None., output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.}\n@returns(200) The generated audio file\n@errors {422: Validation Error}\n\n@endpoint POST /v1/speech-to-speech/{voice_id}/stream\n@desc Speech To Speech Streaming\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@optional {enable_logging: bool=true # When enable_logging is set to false, zero retention mode will be used for the request. This means history features are unavailable for this request, including request stitching. Zero retention mode may only be used by enterprise customers., optimize_streaming_latency: any # You can turn on latency optimizations at some cost of quality. The best possible final latency varies by model. Possible values: 0 - default mode (no latency optimizations); 1 - normal latency optimizations (about 50% of the possible latency improvement of option 3); 2 - strong latency optimizations (about 75% of the possible latency improvement of option 3); 3 - max latency optimizations; 4 - max latency optimizations, but also with the text normalizer turned off for even more latency savings (best latency, but can mispronounce e.g. numbers and dates).
Defaults to None., output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.}\n@returns(200) Streaming audio data\n@errors {422: Validation Error}\n\n@endgroup\n\n@group text-to-voice\n@endpoint POST /v1/text-to-voice/create-previews\n@desc Generate A Voice Preview From Description\n@required {voice_description: str # Description to use for the created voice.}\n@optional {output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_192 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above.
Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., text: any # Text to generate, text length must be between 100 and 1000., auto_generate_text: bool=false # Whether to automatically generate a text suitable for the voice description., loudness: num=0.5 # Controls the volume level of the generated voice. -1 is quietest, 1 is loudest, 0 corresponds to roughly -24 LUFS., quality: num=0.9 # Higher quality results in better voice output but less variety., seed: any # Random number that controls the voice generation. Same seed with same inputs produces same voice., guidance_scale: num=5 # Controls how closely the AI follows the prompt. Lower numbers give the AI more freedom to be creative, while higher numbers force it to stick more to the prompt. High numbers can cause the voice to sound artificial or robotic. We recommend using longer, more detailed prompts at a lower guidance scale., should_enhance: bool=false # Whether to enhance the voice description using AI to add more detail and improve voice generation quality. When enabled, the system will automatically expand simple prompts into more detailed voice descriptions. Defaults to False.}\n@returns(200) {previews: [map], text: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/text-to-voice\n@desc Create A New Voice From Voice Preview\n@required {voice_name: str # Name to use for the created voice., voice_description: str # Description to use for the created voice., generated_voice_id: str # The generated_voice_id to create; call POST /v1/text-to-voice/create-previews and fetch the generated_voice_id from the response header if you don't have one yet.}\n@optional {labels: any # Optional metadata to add to the created voice. Defaults to None., played_not_selected_voice_ids: any # List of voice ids that the user has played but not selected.
Used for RLHF.}\n@returns(200) {voice_id: str, name: str, samples: any, category: str, fine_tuning: any, labels: map, description: any, preview_url: any, available_for_tiers: [str], settings: any, sharing: any, high_quality_base_model_ids: [str], verified_languages: any, collection_ids: any, safety_control: any, voice_verification: any, permission_on_resource: any, is_owner: any, is_legacy: bool, is_mixed: bool, favorited_at_unix: any, created_at_unix: any, is_bookmarked: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/text-to-voice/design\n@desc Design A Voice.\n@required {voice_description: str # Description to use for the created voice.}\n@optional {output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_192 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., model_id: str(eleven_multilingual_ttv_v2/eleven_ttv_v3)=eleven_multilingual_ttv_v2 # Model to use for the voice generation. Possible values: eleven_multilingual_ttv_v2, eleven_ttv_v3., text: any # Text to generate, text length must be between 100 and 1000., auto_generate_text: bool=false # Whether to automatically generate a text suitable for the voice description., loudness: num=0.5 # Controls the volume level of the generated voice. -1 is quietest, 1 is loudest, 0 corresponds to roughly -24 LUFS., seed: any # Random number that controls the voice generation.
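The text-to-voice flow above is two steps: generate previews, then promote one preview to a permanent voice via its generated_voice_id. The pure helpers below sketch the glue between the two calls; the spec says the id is reported in a response header, and it is *assumed* here that each entry of the returned `previews` array also carries a `generated_voice_id` key.

```python
def pick_generated_voice_id(previews_response):
    """Pull the first generated_voice_id out of a create-previews response body.
    Assumption: each `previews` entry carries a `generated_voice_id` key
    (the id is also reported in a response header, per the spec above)."""
    for preview in previews_response.get("previews", []):
        if "generated_voice_id" in preview:
            return preview["generated_voice_id"]
    return None

def build_create_voice_body(name, description, generated_voice_id):
    """Body for POST /v1/text-to-voice, which promotes a preview to a voice."""
    return {
        "voice_name": name,
        "voice_description": description,
        "generated_voice_id": generated_voice_id,
    }
```

In practice you would POST /v1/text-to-voice/create-previews, feed its parsed JSON to pick_generated_voice_id, and then POST the body from build_create_voice_body to /v1/text-to-voice.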
Same seed with same inputs produces same voice., guidance_scale: num=5 # Controls how closely the AI follows the prompt. Lower numbers give the AI more freedom to be creative, while higher numbers force it to stick more to the prompt. High numbers can cause the voice to sound artificial or robotic. We recommend using longer, more detailed prompts at a lower guidance scale., stream_previews: bool=false # Determines whether the Text to Voice previews should be included in the response. If true, only the generated IDs will be returned, which can then be streamed via the /v1/text-to-voice/:generated_voice_id/stream endpoint., should_enhance: bool=false # Whether to enhance the voice description using AI to add more detail and improve voice generation quality. When enabled, the system will automatically expand simple prompts into more detailed voice descriptions. Defaults to False., remixing_session_id: any # The remixing session id., remixing_session_iteration_id: any # The id of the remixing session iteration that these generations should be attached to. If not provided, a new iteration will be created., quality: any # Higher quality results in better voice output but less variety., reference_audio_base64: any # Reference audio to use for the voice generation. The audio should be base64 encoded. Only supported when using the eleven_ttv_v3 model., prompt_strength: any # Controls the balance of prompt versus reference audio when generating voice samples. 0 means almost no prompt influence, 1 means almost no reference audio influence.
Only supported when using the eleven_ttv_v3 model.}\n@returns(200) {previews: [map], text: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/text-to-voice/{voice_id}/remix\n@desc Remix A Voice.\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices., voice_description: str # Description of the changes to make to the voice.}\n@optional {output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_192 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., text: any # Text to generate, text length must be between 100 and 1000., auto_generate_text: bool=false # Whether to automatically generate a text suitable for the voice description., loudness: num=0.5 # Controls the volume level of the generated voice. -1 is quietest, 1 is loudest, 0 corresponds to roughly -24 LUFS., seed: any # Random number that controls the voice generation. Same seed with same inputs produces same voice., guidance_scale: num=2 # Controls how closely the AI follows the prompt. Lower numbers give the AI more freedom to be creative, while higher numbers force it to stick more to the prompt. High numbers can cause the voice to sound artificial or robotic.
We recommend using longer, more detailed prompts at a lower guidance scale., stream_previews: bool=false # Determines whether the Text to Voice previews should be included in the response. If true, only the generated IDs will be returned, which can then be streamed via the /v1/text-to-voice/:generated_voice_id/stream endpoint., remixing_session_id: any # The remixing session id., remixing_session_iteration_id: any # The id of the remixing session iteration that these generations should be attached to. If not provided, a new iteration will be created., prompt_strength: any # Controls the balance of prompt versus reference audio when generating voice samples. 0 means almost no prompt influence, 1 means almost no reference audio influence. Only supported when using the eleven_ttv_v3 model.}\n@returns(200) {previews: [map], text: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/text-to-voice/{generated_voice_id}/stream\n@desc Text To Voice Preview Streaming\n@required {generated_voice_id: str # The generated_voice_id to stream.}\n@returns(200) Streaming audio data\n@errors {422: Validation Error}\n\n@endgroup\n\n@group user\n@endpoint GET /v1/user/subscription\n@desc Get User Subscription Info\n@returns(200) {tier: str, character_count: int, character_limit: int, max_character_limit_extension: any, can_extend_character_limit: bool, allowed_to_extend_character_limit: bool, next_character_count_reset_unix: any, voice_slots_used: int, professional_voice_slots_used: int, voice_limit: int, max_voice_add_edits: any, voice_add_edit_counter: int, professional_voice_limit: int, can_extend_voice_limit: bool, can_use_instant_voice_cloning: bool, can_use_professional_voice_cloning: bool, currency: any, status: str, billing_period: any, character_refresh_period: any, next_invoice: any, open_invoices: [map], has_open_invoices: bool, pending_change: any, has_used_starter_coupon_on_account: bool, has_used_creator_coupon_on_account: bool} # Successful 
Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/user\n@desc Get User Info\n@returns(200) {user_id: str, subscription: map{tier: str, character_count: int, character_limit: int, max_character_limit_extension: any, can_extend_character_limit: bool, allowed_to_extend_character_limit: bool, next_character_count_reset_unix: any, voice_slots_used: int, professional_voice_slots_used: int, voice_limit: int, max_voice_add_edits: any, voice_add_edit_counter: int, professional_voice_limit: int, can_extend_voice_limit: bool, can_use_instant_voice_cloning: bool, can_use_professional_voice_cloning: bool, currency: any, status: str, billing_period: any, character_refresh_period: any}, is_new_user: bool, xi_api_key: any, can_use_delayed_payment_methods: bool, is_onboarding_completed: bool, is_onboarding_checklist_completed: bool, show_compliance_terms: bool, first_name: any, is_api_key_hashed: bool, xi_api_key_preview: any, referral_link_code: any, partnerstack_partner_default_link: any, created_at: int, seat_type: str} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group voices\n@endpoint GET /v1/voices\n@desc List Voices\n@optional {show_legacy: any=false # If set to true, legacy premade voices will be included in responses from /v1/voices}\n@returns(200) {voices: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v2/voices\n@desc Get Voices V2\n@optional {next_page_token: any # The next page token to use for pagination. Returned from the previous request. Use this in combination with the has_more flag for reliable pagination., page_size: int=10 # How many voices to return at maximum. Cannot exceed 100; defaults to 10. Page 0 may include more voices due to default voices being included., search: any # Search term to filter voices by. Searches in name, description, labels, category., sort: any # Which field to sort by, one of 'created_at_unix' or 'name'.
'created_at_unix' may not be available for older voices., sort_direction: any # Which direction to sort the voices in. 'asc' or 'desc'., voice_type: any # Type of the voice to filter by. One of 'personal', 'community', 'default', 'workspace', 'non-default', 'saved'. 'non-default' is equal to all but 'default'. 'saved' is equal to non-default, but includes default voices if they have been added to a collection., category: any # Category of the voice to filter by. One of 'premade', 'cloned', 'generated', 'professional'., fine_tuning_state: any # State of the voice's fine tuning to filter by. Applicable only to professional voice clones. One of 'draft', 'not_verified', 'not_started', 'queued', 'fine_tuning', 'fine_tuned', 'failed', 'delayed'., collection_id: any # Collection ID to filter voices by., include_total_count: bool=true # Whether to include the total count of voices found in the response. NOTE: The total_count value is a live snapshot and may change between requests as users create, modify, or delete voices. For pagination, rely on the has_more flag instead. Only enable this when you actually need the total count (e.g., for display purposes), as it incurs a performance cost., voice_ids: any # Voice IDs to lookup by.
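The GET /v2/voices pagination described above (has_more plus next_page_token, rather than total_count) can be walked with a small generator. This is a sketch that takes the HTTP call as an injected callable, so the pagination logic itself carries no network or auth assumptions:

```python
def iter_voices(fetch_page, page_size=100):
    """Iterate over every voice from GET /v2/voices.

    `fetch_page(params)` is any callable that performs the HTTP GET with the
    given query params and returns the parsed JSON body. Pagination follows
    the spec's advice: loop on `has_more` and thread `next_page_token`
    through, instead of relying on the live-snapshot total_count.
    """
    params = {"page_size": page_size}
    while True:
        page = fetch_page(dict(params))
        yield from page.get("voices", [])
        if not page.get("has_more"):
            break
        params["next_page_token"] = page["next_page_token"]
```

In practice, fetch_page would be a thin wrapper that GETs /v2/voices with the xi-api-key header and calls `.json()` on the response.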
Maximum 100 voice IDs.}\n@returns(200) {voices: [map], has_more: bool, total_count: int, next_page_token: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/voices/settings/default\n@desc Get Default Voice Settings.\n@returns(200) {stability: any, use_speaker_boost: any, similarity_boost: any, style: any, speed: any} # Successful Response\n\n@endpoint GET /v1/voices/{voice_id}/settings\n@desc Get Voice Settings\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@returns(200) {stability: any, use_speaker_boost: any, similarity_boost: any, style: any, speed: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/voices/{voice_id}\n@desc Get Voice\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@optional {with_settings: bool=true # This parameter is now deprecated. It is ignored and will be removed in a future version.}\n@returns(200) {voice_id: str, name: str, samples: any, category: str, fine_tuning: any, labels: map, description: any, preview_url: any, available_for_tiers: [str], settings: any, sharing: any, high_quality_base_model_ids: [str], verified_languages: any, collection_ids: any, safety_control: any, voice_verification: any, permission_on_resource: any, is_owner: any, is_legacy: bool, is_mixed: bool, favorited_at_unix: any, created_at_unix: any, is_bookmarked: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/voices/{voice_id}\n@desc Delete Voice\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/voices/{voice_id}/settings/edit\n@desc Edit Voice Settings\n@required {voice_id: str # Voice ID to be used, you can use 
https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@optional {stability: any=0.5 # Determines how stable the voice is and the randomness between each generation. Lower values introduce broader emotional range for the voice. Higher values can result in a monotonous voice with limited emotion., use_speaker_boost: any=true # This setting boosts the similarity to the original speaker. Using this setting requires a slightly higher computational load, which in turn increases latency., similarity_boost: any=0.75 # Determines how closely the AI should adhere to the original voice when attempting to replicate it., style: any=0 # Determines the style exaggeration of the voice. This setting attempts to amplify the style of the original speaker. It does consume additional computational resources and might increase latency if set to anything other than 0., speed: any=1 # Adjusts the speed of the voice. A value of 1.0 is the default speed, while values less than 1.0 slow down the speech, and values greater than 1.0 speed it up.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/voices/add\n@desc Add Voice\n@returns(200) {voice_id: str, requires_verification: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/voices/{voice_id}/edit\n@desc Edit Voice\n@required {voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/voices/add/{public_user_id}/{voice_id}\n@desc Add Shared Voice\n@required {public_user_id: str # Public user ID used to publicly identify ElevenLabs users., voice_id: str # Voice ID to be used, you can use https://api.elevenlabs.io/v1/voices to list all the available voices., new_name: str # The name that identifies this voice. 
This will be displayed in the dropdown of the website.}\n@optional {bookmarked: bool=true}\n@returns(200) {voice_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group studio\n@endpoint POST /v1/studio/podcasts\n@desc Create Podcast\n@required {model_id: str # The ID of the model to be used for this Studio project, you can query GET /v1/models to list all available models., mode: any # The type of podcast to generate. Can be 'conversation', an interaction between two voices, or 'bulletin', a monologue., source: any # The source content for the Podcast.}\n@optional {safety-identifier: any # Used for moderation. Your workspace must be allowlisted to use this feature., quality_preset: str(standard/high/highest/ultra/ultra_lossless)=standard # Output quality of the generated audio. Must be one of: 'standard' - standard output format, 128kbps with 44.1kHz sample rate. 'high' - high quality output format, 192kbps with 44.1kHz sample rate and major improvements on our side. 'ultra' - ultra quality output format, 192kbps with 44.1kHz sample rate and highest improvements on our side. 'ultra_lossless' - ultra quality output format, 705.6kbps with 44.1kHz sample rate and highest improvements on our side in a fully lossless format., duration_scale: str(short/default/long)=default # Duration of the generated podcast. Must be one of: short - produces podcasts shorter than 3 minutes. default - produces podcasts roughly between 3-7 minutes. long - produces podcasts longer than 7 minutes., language: any # An optional language of the Studio project. 
Two-letter language code (ISO 639-1)., intro: any # The intro text that will always be added to the beginning of the podcast., outro: any # The outro text that will always be added to the end of the podcast., instructions_prompt: any # Additional instructions prompt for the podcast generation used to adjust the podcast's style and tone., highlights: any # A brief summary or highlights of the Studio project's content, providing key points or themes. This should be between 10 and 70 characters., callback_url: any # A url that will be called by our service when the Studio project is converted. Request will contain a json blob containing the status of the conversion     Messages:     1. When project was converted successfully:     {       type: \"project_conversion_status\",       event_timestamp: 1234567890,       data: {         request_id: \"1234567890\",         project_id: \"21m00Tcm4TlvDq8ikWAM\",         conversion_status: \"success\",         project_snapshot_id: \"22m00Tcm4TlvDq8ikMAT\", error_details: None,       }     }     2. When project conversion failed:     {       type: \"project_conversion_status\",       event_timestamp: 1234567890,       data: {         request_id: \"1234567890\",         project_id: \"21m00Tcm4TlvDq8ikWAM\",         conversion_status: \"error\", project_snapshot_id: None,         error_details: \"Error details if conversion failed\"       }     }      3. When chapter was converted successfully:     {       type: \"chapter_conversion_status\",       event_timestamp: 1234567890,       data: {         request_id: \"1234567890\",         project_id: \"21m00Tcm4TlvDq8ikWAM\",         chapter_id: \"22m00Tcm4TlvDq8ikMAT\",         conversion_status: \"success\",         chapter_snapshot_id: \"23m00Tcm4TlvDq8ikMAV\", error_details: None,       }     }     4. 
When chapter conversion failed:     {       type: \"chapter_conversion_status\",       event_timestamp: 1234567890,       data: {         request_id: \"1234567890\",         project_id: \"21m00Tcm4TlvDq8ikWAM\",         chapter_id: \"22m00Tcm4TlvDq8ikMAT\",         conversion_status: \"error\", chapter_snapshot_id: None,         error_details: \"Error details if conversion failed\"       }     }, apply_text_normalization: any # This parameter controls text normalization with four modes: 'auto', 'on', 'apply_english' and 'off'.     When set to 'auto', the system will automatically decide whether to apply text normalization     (e.g., spelling out numbers). With 'on', text normalization will always be applied, while     with 'off', it will be skipped. 'apply_english' is the same as 'on' but will assume that text is in English.}\n@returns(200) {project: map{project_id: str, name: str, create_date_unix: int, created_by_user_id: any, default_title_voice_id: str, default_paragraph_voice_id: str, default_model_id: str, last_conversion_date_unix: any, can_be_downloaded: bool, title: any, author: any, description: any, genres: any, cover_image_url: any, target_audience: any, language: any, content_type: any, original_publication_date: any, mature_content: any, isbn_number: any, volume_normalization: bool, state: str, access_level: str, fiction: any, quality_check_on: bool, quality_check_on_when_bulk_convert: bool, creation_meta: any, source_type: any, chapters_enabled: any, captions_enabled: any, caption_style: any, caption_style_template_overrides: any, public_share_id: any, aspect_ratio: any, agent_settings: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group music\n@endpoint POST /v1/music/video-to-music\n@desc Video To Music\n@optional {output_format: 
str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.}\n@returns(200) Generated audio file matching the video. Content-Type and file extension depend on the output_format parameter (default mp3).\n@errors {403: Subscription required., 422: Validation error (e.g. invalid or missing videos).}\n\n@endgroup\n\n@group studio\n@endpoint POST /v1/studio/projects/{project_id}/pronunciation-dictionaries\n@desc Create Pronunciation Dictionaries\n@required {project_id: str # The ID of the Studio project., pronunciation_dictionary_locators: [map{pronunciation_dictionary_id!: str, version_id!: any}] # A list of pronunciation dictionary locators (pronunciation_dictionary_id, version_id) encoded as a list of JSON strings for pronunciation dictionaries to be applied to the text. A list of json encoded strings is required as adding projects may occur through formData as opposed to jsonBody. 
To specify multiple dictionaries use multiple --form lines in your curl, such as --form 'pronunciation_dictionary_locators=\"{\\\"pronunciation_dictionary_id\\\":\\\"Vmd4Zor6fplcA7WrINey\\\",\\\"version_id\\\":\\\"hRPaxjlTdR7wFMhV4w0b\\\"}\"' --form 'pronunciation_dictionary_locators=\"{\\\"pronunciation_dictionary_id\\\":\\\"JzWtcGQMJ6bnlWwyMo7e\\\",\\\"version_id\\\":\\\"lbmwxiLu4q6txYxgdZqn\\\"}\"'.}\n@optional {invalidate_affected_text: bool=true # This will automatically mark text in this project for reconversion when the new dictionary applies or the old one no longer does.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/studio/projects\n@desc List Studio Projects\n@returns(200) {projects: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/studio/projects\n@desc Create Studio Project\n@returns(200) {project: map{project_id: str, name: str, create_date_unix: int, created_by_user_id: any, default_title_voice_id: str, default_paragraph_voice_id: str, default_model_id: str, last_conversion_date_unix: any, can_be_downloaded: bool, title: any, author: any, description: any, genres: any, cover_image_url: any, target_audience: any, language: any, content_type: any, original_publication_date: any, mature_content: any, isbn_number: any, volume_normalization: bool, state: str, access_level: str, fiction: any, quality_check_on: bool, quality_check_on_when_bulk_convert: bool, creation_meta: any, source_type: any, chapters_enabled: any, captions_enabled: any, caption_style: any, caption_style_template_overrides: any, public_share_id: any, aspect_ratio: any, agent_settings: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/studio/projects/{project_id}\n@desc Update Studio Project\n@required {project_id: str # The ID of the Studio project., name: str # The name of the Studio project, used for identification only., default_title_voice_id: str # The 
voice_id that corresponds to the default voice used for new titles., default_paragraph_voice_id: str # The voice_id that corresponds to the default voice used for new paragraphs.}\n@optional {title: any # An optional title of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download., author: any # An optional name of the author of the Studio project, this will be added as metadata to the mp3 file on Studio project or chapter download., isbn_number: any # An optional ISBN number of the Studio project you want to create, this will be added as metadata to the mp3 file on Studio project or chapter download., volume_normalization: bool=false # When the Studio project is downloaded, should the returned audio have postprocessing in order to make it compliant with audiobook normalized volume requirements.}\n@returns(200) {project: map{project_id: str, name: str, create_date_unix: int, created_by_user_id: any, default_title_voice_id: str, default_paragraph_voice_id: str, default_model_id: str, last_conversion_date_unix: any, can_be_downloaded: bool, title: any, author: any, description: any, genres: any, cover_image_url: any, target_audience: any, language: any, content_type: any, original_publication_date: any, mature_content: any, isbn_number: any, volume_normalization: bool, state: str, access_level: str, fiction: any, quality_check_on: bool, quality_check_on_when_bulk_convert: bool, creation_meta: any, source_type: any, chapters_enabled: any, captions_enabled: any, caption_style: any, caption_style_template_overrides: any, public_share_id: any, aspect_ratio: any, agent_settings: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/studio/projects/{project_id}\n@desc Get Studio Project\n@required {project_id: str # The ID of the Studio project.}\n@optional {share_id: any # The share ID of the project.}\n@returns(200) {project_id: str, name: str, create_date_unix: int, 
created_by_user_id: any, default_title_voice_id: str, default_paragraph_voice_id: str, default_model_id: str, last_conversion_date_unix: any, can_be_downloaded: bool, title: any, author: any, description: any, genres: any, cover_image_url: any, target_audience: any, language: any, content_type: any, original_publication_date: any, mature_content: any, isbn_number: any, volume_normalization: bool, state: str, access_level: str, fiction: any, quality_check_on: bool, quality_check_on_when_bulk_convert: bool, creation_meta: any, source_type: any, chapters_enabled: any, captions_enabled: any, caption_style: any, caption_style_template_overrides: any, public_share_id: any, aspect_ratio: any, agent_settings: any, quality_preset: str, chapters: [map], pronunciation_dictionary_versions: [map], pronunciation_dictionary_locators: [map], apply_text_normalization: str, experimental: map, assets: [any], voices: [map], base_voices: any, publishing_read: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/studio/projects/{project_id}\n@desc Delete Studio Project\n@required {project_id: str # The ID of the Studio project.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/studio/projects/{project_id}/content\n@desc Update Studio Project Content\n@required {project_id: str # The ID of the Studio project.}\n@returns(200) {project: map{project_id: str, name: str, create_date_unix: int, created_by_user_id: any, default_title_voice_id: str, default_paragraph_voice_id: str, default_model_id: str, last_conversion_date_unix: any, can_be_downloaded: bool, title: any, author: any, description: any, genres: any, cover_image_url: any, target_audience: any, language: any, content_type: any, original_publication_date: any, mature_content: any, isbn_number: any, volume_normalization: bool, state: str, access_level: str, fiction: any, quality_check_on: bool, quality_check_on_when_bulk_convert: bool, 
creation_meta: any, source_type: any, chapters_enabled: any, captions_enabled: any, caption_style: any, caption_style_template_overrides: any, public_share_id: any, aspect_ratio: any, agent_settings: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/studio/projects/{project_id}/convert\n@desc Convert Studio Project\n@required {project_id: str # The ID of the Studio project.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/studio/projects/{project_id}/snapshots\n@desc List Studio Project Snapshots\n@required {project_id: str # The ID of the Studio project.}\n@returns(200) {snapshots: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/studio/projects/{project_id}/snapshots/{project_snapshot_id}\n@desc Get Project Snapshot\n@required {project_id: str # The ID of the Studio project., project_snapshot_id: str # The ID of the Studio project snapshot.}\n@returns(200) {project_snapshot_id: str, project_id: str, created_at_unix: int, name: str, audio_upload: any, zip_upload: any, character_alignments: [map], audio_duration_secs: num} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/studio/projects/{project_id}/snapshots/{project_snapshot_id}/stream\n@desc Stream Studio Project Audio\n@required {project_id: str # The ID of the Studio project., project_snapshot_id: str # The ID of the Studio project snapshot.}\n@optional {convert_to_mpeg: bool=false # Whether to convert the audio to mpeg format.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/studio/projects/{project_id}/snapshots/{project_snapshot_id}/archive\n@desc Stream Archive With Studio Project Audio\n@required {project_id: str # The ID of the Studio project., project_snapshot_id: str # The ID of the Studio project snapshot.}\n@returns(200) Streaming archive data\n@errors {422: Validation Error}\n\n@endpoint GET 
/v1/studio/projects/{project_id}/chapters\n@desc List Chapters\n@required {project_id: str # The ID of the Studio project.}\n@returns(200) {chapters: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/studio/projects/{project_id}/chapters\n@desc Create Chapter\n@required {project_id: str # The ID of the Studio project., name: str # The name of the chapter, used for identification only.}\n@optional {from_url: any # An optional URL from which we will extract content to initialize the Studio project. If this is set, 'from_document' and 'from_content' must be null. If none of 'from_url', 'from_document', or 'from_content' is provided, we will initialize the Studio project as blank.}\n@returns(200) {chapter: map{chapter_id: str, name: str, last_conversion_date_unix: any, conversion_progress: any, can_be_downloaded: bool, state: str, has_video: any, has_visual_content: any, voice_ids: any, statistics: any, last_conversion_error: any, content: map{blocks: [map]}}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/studio/projects/{project_id}/chapters/{chapter_id}\n@desc Get Chapter\n@required {project_id: str # The ID of the Studio project., chapter_id: str # The ID of the chapter.}\n@returns(200) {chapter_id: str, name: str, last_conversion_date_unix: any, conversion_progress: any, can_be_downloaded: bool, state: str, has_video: any, has_visual_content: any, voice_ids: any, statistics: any, last_conversion_error: any, content: map{blocks: [map]}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/studio/projects/{project_id}/chapters/{chapter_id}\n@desc Update Chapter\n@required {project_id: str # The ID of the Studio project., chapter_id: str # The ID of the chapter.}\n@optional {name: any # The name of the chapter, used for identification only., content: any # The chapter content to use.}\n@returns(200) {chapter: map{chapter_id: str, name: str, last_conversion_date_unix: any, 
conversion_progress: any, can_be_downloaded: bool, state: str, has_video: any, has_visual_content: any, voice_ids: any, statistics: any, last_conversion_error: any, content: map{blocks: [map]}}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/studio/projects/{project_id}/chapters/{chapter_id}\n@desc Delete Chapter\n@required {project_id: str # The ID of the Studio project., chapter_id: str # The ID of the chapter.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/studio/projects/{project_id}/chapters/{chapter_id}/convert\n@desc Convert Chapter\n@required {project_id: str # The ID of the Studio project., chapter_id: str # The ID of the chapter.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/studio/projects/{project_id}/chapters/{chapter_id}/snapshots\n@desc List Chapter Snapshots\n@required {project_id: str # The ID of the Studio project., chapter_id: str # The ID of the chapter.}\n@returns(200) {snapshots: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/studio/projects/{project_id}/chapters/{chapter_id}/snapshots/{chapter_snapshot_id}\n@desc Get Chapter Snapshot\n@required {project_id: str # The ID of the Studio project., chapter_id: str # The ID of the chapter., chapter_snapshot_id: str # The ID of the chapter snapshot.}\n@returns(200) {chapter_snapshot_id: str, project_id: str, chapter_id: str, created_at_unix: int, name: str, character_alignments: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/studio/projects/{project_id}/chapters/{chapter_id}/snapshots/{chapter_snapshot_id}/stream\n@desc Stream Chapter Audio\n@required {project_id: str # The ID of the Studio project., chapter_id: str # The ID of the chapter., chapter_snapshot_id: str # The ID of the chapter snapshot.}\n@optional {convert_to_mpeg: bool=false # Whether to convert the audio to mpeg 
format.}\n@returns(200) Streaming audio data\n@errors {422: Validation Error}\n\n@endpoint GET /v1/studio/projects/{project_id}/muted-tracks\n@desc Get Project Muted Tracks\n@required {project_id: str # The ID of the Studio project.}\n@returns(200) {chapter_ids: [str]} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group dubbing\n@endpoint GET /v1/dubbing/resource/{dubbing_id}\n@desc Get The Dubbing Resource For An Id.\n@required {dubbing_id: str # ID of the dubbing project.}\n@returns(200) {id: str, version: int, source_language: str, target_languages: [str], input: map{src: str, content_type: str, bucket_name: str, random_path_slug: str, duration_secs: num, is_audio: bool, url: str}, background: any, foreground: any, speaker_tracks: map, speaker_segments: map, renders: map} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/dubbing/resource/{dubbing_id}/language\n@desc Add A Language To The Resource\n@required {dubbing_id: str # ID of the dubbing project., language: any # The target language.}\n@returns(201) {version: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/dubbing/resource/{dubbing_id}/speaker/{speaker_id}/segment\n@desc Create A Segment For The Speaker\n@required {dubbing_id: str # ID of the dubbing project., speaker_id: str # ID of the speaker., start_time: num, end_time: num}\n@optional {text: any, translations: any}\n@returns(201) {version: int, new_segment: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/dubbing/resource/{dubbing_id}/segment/{segment_id}/{language}\n@desc Modify A Single Segment\n@required {dubbing_id: str # ID of the dubbing project., segment_id: str # ID of the segment., language: str # ID of the language.}\n@optional {start_time: any, end_time: any, text: any}\n@returns(200) {version: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST 
/v1/dubbing/resource/{dubbing_id}/migrate-segments\n@desc Move Segments Between Speakers\n@required {dubbing_id: str # ID of the dubbing project., segment_ids: [str], speaker_id: str}\n@returns(200) {version: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/dubbing/resource/{dubbing_id}/segment/{segment_id}\n@desc Deletes A Single Segment\n@required {dubbing_id: str # ID of the dubbing project., segment_id: str # ID of the segment}\n@returns(200) {version: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/dubbing/resource/{dubbing_id}/transcribe\n@desc Transcribes Segments\n@required {dubbing_id: str # ID of the dubbing project., segments: [str] # Transcribe this specific list of segments.}\n@returns(200) {version: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/dubbing/resource/{dubbing_id}/translate\n@desc Translates All Or Some Segments And Languages\n@required {dubbing_id: str # ID of the dubbing project., segments: [str] # Translate only this list of segments., languages: any # Translate only these languages for each segment.}\n@returns(200) {version: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/dubbing/resource/{dubbing_id}/dub\n@desc Dubs All Or Some Segments And Languages\n@required {dubbing_id: str # ID of the dubbing project., segments: [str] # Dub only this list of segments., languages: any # Dub only these languages for each segment.}\n@returns(200) {version: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/dubbing/resource/{dubbing_id}/speaker/{speaker_id}\n@desc Update Metadata For A Speaker\n@required {dubbing_id: str # ID of the dubbing project., speaker_id: str # ID of the speaker.}\n@optional {speaker_name: any # Name to attribute to this speaker., voice_id: any # Either the identifier of a voice from the ElevenLabs voice library, or one of ['track-clone', 'clip-clone']., 
voice_stability: any # For models that support it, the voice stability value to use. This will default to 0.65, with a valid range of [0.0, 1.0]., voice_similarity: any # For models that support it, the voice similarity value to use. This will default to 1.0, with a valid range of [0.0, 1.0]., voice_style: any # For models that support it, the voice style value to use. This will default to 1.0, with a valid range of [0.0, 1.0]., languages: any # Languages to apply these changes to. If empty, will apply to all languages.}\n@returns(200) {version: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/dubbing/resource/{dubbing_id}/speaker\n@desc Create A New Speaker\n@required {dubbing_id: str # ID of the dubbing project.}\n@optional {speaker_name: any # Name to attribute to this speaker., voice_id: any # Either the identifier of a voice from the ElevenLabs voice library, or one of ['track-clone', 'clip-clone']., voice_stability: any # For models that support it, the voice stability value to use. This will default to 0.65, with a valid range of [0.0, 1.0]., voice_similarity: any # For models that support it, the voice similarity value to use. This will default to 1.0, with a valid range of [0.0, 1.0]., voice_style: any # For models that support it, the voice style value to use. 
This will default to 1.0, with a valid range of [0.0, 1.0].}\n@returns(200) {version: int, speaker_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/dubbing/resource/{dubbing_id}/speaker/{speaker_id}/similar-voices\n@desc Search The Elevenlabs Library For Voices Similar To A Speaker.\n@required {dubbing_id: str # ID of the dubbing project., speaker_id: str # ID of the speaker.}\n@returns(200) {voices: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/dubbing/resource/{dubbing_id}/render/{language}\n@desc Render Audio Or Video For The Given Language\n@required {dubbing_id: str # ID of the dubbing project., language: str # The target language code to render, e.g. 'es'. To render the source track use 'original'., render_type: str(mp4/aac/mp3/wav/aaf/tracks_zip/clips_zip)}\n@optional {normalize_volume: any=false # Whether to normalize the volume of the rendered audio.}\n@returns(200) {version: int, render_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/dubbing\n@desc List Dubs\n@optional {cursor: any # Used for fetching next page. Cursor is returned in the response., page_size: int=100 # How many dubs to return at maximum. 
Cannot exceed 200, defaults to 100., dubbing_status: str(dubbing/dubbed/failed) # What state the dub is currently in., filter_by_creator: str(personal/others/all)=all # Filters who created the resources being listed, whether it was the user running the request or someone else who shared the resource with them., order_by: str=created_at # The field to use for ordering results from this query., order_direction: str(DESCENDING/ASCENDING)=DESCENDING # The order direction to use for results from this query.}\n@returns(200) {dubs: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/dubbing\n@desc Dub A Video Or An Audio File\n@returns(200) {dubbing_id: str, expected_duration_sec: num} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/dubbing/{dubbing_id}\n@desc Get Dubbing\n@required {dubbing_id: str # ID of the dubbing project.}\n@returns(200) {dubbing_id: str, name: str, status: str, source_language: any, target_languages: [str], editable: bool, created_at: str(date-time), media_metadata: any, error: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/dubbing/{dubbing_id}\n@desc Delete Dubbing\n@required {dubbing_id: str # ID of the dubbing project.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/dubbing/{dubbing_id}/audio/{language_code}\n@desc Get Dubbed File\n@required {dubbing_id: str # ID of the dubbing project., language_code: str # ID of the language.}\n@returns(200) The dubbed audio or video file\n@errors {403: Permission denied, 404: Dubbing not found, 422: Validation Error, 425: Dubbing not ready}\n\n@endpoint GET /v1/dubbing/{dubbing_id}/transcript/{language_code}\n@desc Get Dubbed Transcript\n@required {dubbing_id: str # ID of the dubbing project., language_code: str # ISO 639 language code to retrieve the transcript for. 
Use 'source' to fetch the transcript of the original media.}\n@optional {format_type: str(srt/webvtt/json)=srt # Format to return transcript in. For subtitles use either 'srt' or 'webvtt', and for a full transcript use 'json'. The 'json' format is not yet supported for Dubbing Studio.}\n@returns(200) Successful Response\n@errors {403: Anonymous users cannot use this function, 404: Dubbing or transcript not found, 422: Validation Error, 425: Dubbing not ready}\n\n@endpoint GET /v1/dubbing/{dubbing_id}/transcripts/{language_code}/format/{format_type}\n@desc Retrieve A Transcript\n@required {dubbing_id: str # ID of the dubbing project., language_code: str # ISO 639 language code to retrieve the transcript for. Use 'source' to fetch the transcript of the original media., format_type: str(srt/webvtt/json) # Format to return transcript in. For subtitles use either 'srt' or 'webvtt', and for a full transcript use 'json'. The 'json' format is not yet supported for Dubbing Studio.}\n@returns(200) {transcript_format: str, srt: any, webvtt: any, json: any} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group models\n@endpoint GET /v1/models\n@desc Get Models\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group audio-native\n@endpoint POST /v1/audio-native\n@desc Creates Audio Native Enabled Project.\n@returns(200) {project_id: str, converting: bool, html_snippet: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/audio-native/{project_id}/settings\n@desc Get Audio Native Project Settings\n@required {project_id: str # The ID of the Studio project.}\n@returns(200) {enabled: bool, snapshot_id: any, settings: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/audio-native/{project_id}/content\n@desc Update Audio-Native Project Content\n@required {project_id: str # The ID of the Studio project.}\n@returns(200) {project_id: str, converting: bool, publishing: 
bool, html_snippet: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/audio-native/content\n@desc Update Audio-Native Content From Url\n@required {url: str # URL of the page to extract content from.}\n@optional {author: any # Author used in the player and inserted at the start of the uploaded article. If not provided, the default author set in the Player settings is used., title: any # Title used in the player and inserted at the top of the uploaded article. If not provided, the default title set in the Player settings is used.}\n@returns(200) {project_id: str, converting: bool, publishing: bool, html_snippet: str} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group shared-voices\n@endpoint GET /v1/shared-voices\n@desc Get Voices\n@optional {page_size: int=30 # How many shared voices to return at maximum. Cannot exceed 100, defaults to 30., category: any(professional/famous/high_quality) # Voice category used for filtering, gender: any # Gender used for filtering, age: any # Age used for filtering, accent: any # Accent used for filtering, language: any # Language used for filtering, locale: any # Locale used for filtering, search: any # Search term used for filtering, use_cases: any # Use-case used for filtering, descriptives: any # Search term used for filtering, featured: bool=false # Filter featured voices, min_notice_period_days: any # Filter voices with a minimum notice period of the given number of days., include_custom_rates: any # Include/exclude voices with custom rates, include_live_moderated: any # Include/exclude voices that are live moderated, reader_app_enabled: bool=false # Filter voices that are enabled for the reader app, owner_id: any # Filter voices by public owner ID, sort: any # Sort criteria, page: int=0}\n@returns(200) {voices: [map], has_more: bool, last_sort_id: any} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group similar-voices\n@endpoint POST 
/v1/similar-voices\n@desc Get Similar Library Voices\n@returns(200) {voices: [map], has_more: bool, last_sort_id: any} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group usage\n@endpoint GET /v1/usage/character-stats\n@desc Get Characters Usage Metrics\n@required {start_unix: int # UTC Unix timestamp for the start of the usage window, in milliseconds. To include the first day of the window, the timestamp should be at 00:00:00 of that day., end_unix: int # UTC Unix timestamp for the end of the usage window, in milliseconds. To include the last day of the window, the timestamp should be at 23:59:59 of that day.}\n@optional {include_workspace_metrics: bool=false # Whether or not to include the statistics of the entire workspace., breakdown_type: str=none # How to break down the information. Cannot be \"user\" if include_workspace_metrics is False., aggregation_interval: str=day # How to aggregate usage data over time. Can be \"hour\", \"day\", \"week\", \"month\", or \"cumulative\"., aggregation_bucket_size: any # Aggregation bucket size in seconds. Overrides the aggregation interval., metric: str=credits # Which metric to aggregate.}\n@returns(200) {time: [int], usage: map} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group pronunciation-dictionaries\n@endpoint POST /v1/pronunciation-dictionaries/add-from-file\n@desc Add A Pronunciation Dictionary\n@returns(200) {id: str, name: str, created_by: str, creation_time_unix: int, version_id: str, version_rules_num: int, description: any, permission_on_resource: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/pronunciation-dictionaries/add-from-rules\n@desc Add A Pronunciation Dictionary\n@required {rules: [any] # List of pronunciation rules. 
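GET /v1/usage/character-stats above takes start_unix and end_unix as millisecond UTC timestamps pinned to 00:00:00 and 23:59:59 day boundaries. A minimal sketch of computing such a window (the helper name and example dates are illustrative, not part of the API):

```python
from datetime import datetime, timezone

def usage_window_ms(first_day: datetime, last_day: datetime) -> tuple[int, int]:
    # start_unix: 00:00:00 UTC on the first day; end_unix: 23:59:59 UTC on the
    # last day -- both in milliseconds, as the parameter descriptions require.
    start = first_day.replace(hour=0, minute=0, second=0, microsecond=0, tzinfo=timezone.utc)
    end = last_day.replace(hour=23, minute=59, second=59, microsecond=0, tzinfo=timezone.utc)
    return int(start.timestamp()) * 1000, int(end.timestamp()) * 1000

# Example: the whole of January 2024.
start_unix, end_unix = usage_window_ms(datetime(2024, 1, 1), datetime(2024, 1, 31))
```

Pass the two values as query parameters; note from the spec above that breakdown_type cannot be "user" unless include_workspace_metrics is true.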
Rule can be either:     an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }     or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }, name: str # The name of the pronunciation dictionary, used for identification only.}\n@optional {description: any # A description of the pronunciation dictionary, used for identification only., workspace_access: any # Should be one of 'admin', 'editor' or 'viewer'. If not provided, defaults to no access.}\n@returns(200) {id: str, name: str, created_by: str, creation_time_unix: int, version_id: str, version_rules_num: int, description: any, permission_on_resource: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}\n@desc Update Pronunciation Dictionary\n@required {pronunciation_dictionary_id: str # The id of the pronunciation dictionary}\n@optional {archived: bool # Whether to archive the pronunciation dictionary., name: str # The name of the pronunciation dictionary, used for identification only.}\n@returns(200) {id: str, latest_version_id: str, latest_version_rules_num: int, name: str, permission_on_resource: any, created_by: str, creation_time_unix: int, archived_time_unix: any, description: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}\n@desc Get Metadata For A Pronunciation Dictionary\n@required {pronunciation_dictionary_id: str # The id of the pronunciation dictionary}\n@returns(200) {id: str, latest_version_id: str, latest_version_rules_num: int, name: str, permission_on_resource: any, created_by: str, creation_time_unix: int, archived_time_unix: any, description: any, rules: [any]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}/set-rules\n@desc Set Rules On The Pronunciation 
Dictionary\n@required {pronunciation_dictionary_id: str # The id of the pronunciation dictionary, rules: [any] # List of pronunciation rules. Rule can be either:     an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }     or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }}\n@returns(200) {id: str, version_id: str, version_rules_num: int} # Successfully set rules on the pronunciation dictionary\n@errors {422: Validation Error}\n\n@endpoint POST /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}/add-rules\n@desc Add Rules To The Pronunciation Dictionary\n@required {pronunciation_dictionary_id: str # The id of the pronunciation dictionary, rules: [any] # List of pronunciation rules. Rule can be either:     an alias rule: {'string_to_replace': 'a', 'type': 'alias', 'alias': 'b', }     or a phoneme rule: {'string_to_replace': 'a', 'type': 'phoneme', 'phoneme': 'b', 'alphabet': 'ipa' }}\n@returns(200) {id: str, version_id: str, version_rules_num: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/pronunciation-dictionaries/{pronunciation_dictionary_id}/remove-rules\n@desc Remove Rules From The Pronunciation Dictionary\n@required {pronunciation_dictionary_id: str # The id of the pronunciation dictionary, rule_strings: [str] # List of strings to remove from the pronunciation dictionary.}\n@returns(200) {id: str, version_id: str, version_rules_num: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/pronunciation-dictionaries/{dictionary_id}/{version_id}/download\n@desc Get A Pls File With A Pronunciation Dictionary Version Rules\n@required {dictionary_id: str # The id of the pronunciation dictionary, version_id: str # The id of the pronunciation dictionary version}\n@returns(200) The PLS file containing pronunciation dictionary rules\n@errors {422: Validation Error}\n\n@endpoint GET /v1/pronunciation-dictionaries\n@desc Get 
Pronunciation Dictionaries\n@optional {cursor: any # Used for fetching next page. Cursor is returned in the response., page_size: int=30 # How many pronunciation dictionaries to return at maximum. Cannot exceed 100, defaults to 30., sort: any=creation_time_unix # Which field to sort by, one of 'created_at_unix' or 'name'., sort_direction: any=DESCENDING # Which direction to sort the pronunciation dictionaries in. 'ascending' or 'descending'.}\n@returns(200) {pronunciation_dictionaries: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group service-accounts\n@endpoint GET /v1/service-accounts/{service_account_user_id}/api-keys\n@desc Get Service Account Api Keys Route\n@required {service_account_user_id: str}\n@returns(200) {api-keys: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/service-accounts/{service_account_user_id}/api-keys\n@desc Create Service Account Api Key\n@required {service_account_user_id: str, name: str, permissions: any # The permissions of the XI API.}\n@optional {character_limit: any # The character limit of the XI API key. If provided, this will limit the usage of this api key to n characters per month where n is the chosen value. Requests that incur charges will fail after reaching this monthly limit.}\n@returns(200) {xi-api-key: str, key_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/service-accounts/{service_account_user_id}/api-keys/{api_key_id}\n@desc Edit Service Account Api Key\n@required {service_account_user_id: str, api_key_id: str, is_enabled: bool # Whether to enable or disable the API key., name: str # The name of the XI API key to use (used for identification purposes only)., permissions: any # The permissions of the XI API.}\n@optional {character_limit: any # The character limit of the XI API key. If provided, this will limit the usage of this api key to n characters per month where n is the chosen value. 
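The add-from-rules, set-rules and add-rules endpoints in the pronunciation-dictionaries group above all accept the same two rule shapes. A sketch of assembling a request body from those shapes (the dictionary name and example words are made up; nothing here is sent over the network):

```python
def alias_rule(string_to_replace: str, alias: str) -> dict:
    # Replace a matched string with an alternative spelling.
    return {"string_to_replace": string_to_replace, "type": "alias", "alias": alias}

def phoneme_rule(string_to_replace: str, phoneme: str, alphabet: str = "ipa") -> dict:
    # Replace a matched string with an explicit phoneme in the given alphabet.
    return {"string_to_replace": string_to_replace, "type": "phoneme",
            "phoneme": phoneme, "alphabet": alphabet}

body = {
    "name": "demo-dictionary",  # used for identification only, per the spec
    "rules": [
        alias_rule("ElevenLabs", "Eleven Labs"),
        phoneme_rule("tomato", "təˈmɑːtoʊ"),
    ],
}
```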
Requests that incur charges will fail after reaching this monthly limit.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/service-accounts/{service_account_user_id}/api-keys/{api_key_id}\n@desc Delete Service Account Api Key\n@required {service_account_user_id: str, api_key_id: str}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group workspace\n@endpoint POST /v1/workspace/auth-connections\n@desc Create Workspace Auth Connection\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/workspace/auth-connections\n@desc Get Workspace Auth Connections\n@returns(200) {auth_connections: [any]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/workspace/auth-connections/{auth_connection_id}\n@desc Delete Workspace Auth Connection\n@required {auth_connection_id: str}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group service-accounts\n@endpoint GET /v1/service-accounts\n@desc Get Workspace Service Accounts\n@returns(200) {service-accounts: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group workspace\n@endpoint GET /v1/workspace/groups\n@desc Get All Groups\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/workspace/groups/search\n@desc Search User Groups\n@required {name: str # Name of the target group.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/workspace/groups/{group_id}/members/remove\n@desc Delete Member From User Group\n@required {group_id: str # The ID of the target group., email: str # The email of the target workspace member.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/workspace/groups/{group_id}/members\n@desc Add Member To User Group\n@required {group_id: str # The ID of the target group., email: str 
# The email of the target workspace member.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/workspace/invites/add\n@desc Invite User\n@required {email: str # The email of the customer}\n@optional {workspace_permission: any # The workspace permission of the user. This is deprecated, use `seat_type` instead., seat_type: any # The seat type of the user, group_ids: any # The group ids of the user}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/workspace/invites/add-bulk\n@desc Invite Multiple Users\n@required {emails: [str] # The emails of the customers}\n@optional {seat_type: any # The seat type of the user, group_ids: any # The group ids of the user}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/workspace/invites\n@desc Delete Existing Invitation\n@required {email: str # The email of the customer}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/workspace/members\n@desc Update Member\n@required {email: str # Email of the target user.}\n@optional {is_locked: any # Whether to lock or unlock the user account., workspace_role: any # The workspace role of the user. 
This is deprecated, use `workspace_seat_type` instead., workspace_seat_type: any # The workspace seat type}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/workspace/resources/{resource_id}\n@desc Get Resource\n@required {resource_id: str # The ID of the target resource., resource_type: str # Resource type of the target resource.}\n@returns(200) {resource_id: str, resource_name: any, resource_type: str, creator_user_id: any, anonymous_access_level_override: any, role_to_group_ids: map, share_options: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/workspace/resources/{resource_id}/share\n@desc Share Workspace Resource\n@required {resource_id: str # The ID of the target resource., role: str(admin/editor/commenter/viewer) # Role to update the target principal with., resource_type: str(voice/voice_collection/pronunciation_dictionary/dubbing/project/convai_agents/convai_knowledge_base_documents/convai_tools/convai_settings/convai_secrets/workspace_auth_connections/convai_phone_numbers/convai_mcp_servers/convai_api_integration_connections/convai_api_integration_trigger_connections/convai_batch_calls/convai_agent_response_tests/convai_test_suite_invocations/convai_crawl_jobs/convai_crawl_tasks/convai_whatsapp_accounts/convai_agent_versions/convai_agent_branches/convai_agent_versions_deployments/convai_memory_entries/convai_coaching_proposals/dashboard/dashboard_configuration/convai_agent_drafts/resource_locators/assets/content_generations/content_templates/songs/avatars/avatar_video_generations) # Resource types that can be shared in the workspace. The name always needs to match the collection names}\n@optional {user_email: any # The email of the user or service account., group_id: any # The ID of the target group. To target the permissions principals have by default on this resource, use the value 'default'., workspace_api_key_id: any # The ID of the target workspace API key. 
This isn't the same as the key itself that you would pass in the header for authentication. Workspace admins can find this in the workspace settings UI.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/workspace/resources/{resource_id}/unshare\n@desc Unshare Workspace Resource\n@required {resource_id: str # The ID of the target resource., resource_type: str(voice/voice_collection/pronunciation_dictionary/dubbing/project/convai_agents/convai_knowledge_base_documents/convai_tools/convai_settings/convai_secrets/workspace_auth_connections/convai_phone_numbers/convai_mcp_servers/convai_api_integration_connections/convai_api_integration_trigger_connections/convai_batch_calls/convai_agent_response_tests/convai_test_suite_invocations/convai_crawl_jobs/convai_crawl_tasks/convai_whatsapp_accounts/convai_agent_versions/convai_agent_branches/convai_agent_versions_deployments/convai_memory_entries/convai_coaching_proposals/dashboard/dashboard_configuration/convai_agent_drafts/resource_locators/assets/content_generations/content_templates/songs/avatars/avatar_video_generations) # Resource types that can be shared in the workspace. The name always needs to match the collection names}\n@optional {user_email: any # The email of the user or service account., group_id: any # The ID of the target group. To target the permissions principals have by default on this resource, use the value 'default'., workspace_api_key_id: any # The ID of the target workspace API key. This isn't the same as the key itself that you would pass in the header for authentication. 
Workspace admins can find this in the workspace settings UI.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/workspace/webhooks\n@desc List Workspace Webhooks\n@optional {include_usages: bool=false # Whether to include active usages of the webhook, only usable by admins}\n@returns(200) {webhooks: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/workspace/webhooks\n@desc Create Workspace Webhook\n@required {settings: map{auth_type!: str, name!: str, webhook_url!: str} # Settings for creating an HMAC-authenticated webhook}\n@returns(200) {webhook_id: str, webhook_secret: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/workspace/webhooks/{webhook_id}\n@desc Update Workspace Webhook\n@required {webhook_id: str # The unique ID for the webhook, is_disabled: bool # Whether to disable or enable the webhook, name: str # The display name of the webhook (used for display purposes only).}\n@optional {retry_enabled: any # Whether to enable automatic retries for transient failures (5xx, 429, timeout)}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/workspace/webhooks/{webhook_id}\n@desc Delete Workspace Webhook\n@required {webhook_id: str # The unique ID for the webhook}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group speech-to-text\n@endpoint POST /v1/speech-to-text\n@desc Speech To Text\n@optional {enable_logging: bool=true # When enable_logging is set to false zero retention mode will be used for the request. This will mean log and transcript storage features are unavailable for this request. 
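POST /v1/workspace/webhooks above creates an HMAC-authenticated webhook and returns a webhook_secret. A receiver-side sketch of verifying a delivery, assuming the signature is a hex HMAC-SHA256 of the raw request body — the exact header name and signature encoding are assumptions, so check the webhook payload documentation before relying on this:

```python
import hashlib
import hmac

def verify_webhook(secret: str, raw_body: bytes, signature_hex: str) -> bool:
    # Recompute HMAC-SHA256 over the raw body and compare in constant time.
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

# Local demonstration with a hypothetical secret and payload:
secret = "whsec_demo"
payload = b'{"event": "post_call"}'
sig = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
```

Always verify against the raw bytes of the request body, not a re-serialized JSON object, since any re-encoding can change the digest.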
Zero retention mode may only be used by enterprise customers.}\n@returns(200) Synchronous transcription result\n@returns(202) Asynchronous request accepted\n@errors {422: Validation Error}\n\n@endpoint GET /v1/speech-to-text/transcripts/{transcription_id}\n@desc Get Transcript By Id\n@required {transcription_id: str # The unique ID of the transcript to retrieve}\n@returns(200) The transcript data\n@errors {401: Authentication required, 404: Transcript not found, 422: Validation Error}\n\n@endpoint DELETE /v1/speech-to-text/transcripts/{transcription_id}\n@desc Delete Transcript By Id\n@required {transcription_id: str # The unique ID of the transcript to delete}\n@returns(200) Delete completed successfully.\n@errors {401: Authentication required, 422: Validation Error}\n\n@endgroup\n\n@group single-use-token\n@endpoint POST /v1/single-use-token/{token_type}\n@desc Create Single Use Token\n@required {token_type: str}\n@returns(200) {token: str} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group forced-alignment\n@endpoint POST /v1/forced-alignment\n@desc Create Forced Alignment\n@returns(200) {characters: [map], words: [map], loss: num} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group convai\n@endpoint GET /v1/convai/conversation/get-signed-url\n@desc Get Signed Url\n@required {agent_id: str # The id of the agent you're taking the action on.}\n@optional {include_conversation_id: bool=false # Whether to include a conversation_id with the response. If included, the conversation_signature cannot be used again., branch_id: any # The ID of the branch to use, environment: any # The environment to use for resolving environment variables (e.g. 'production', 'staging'). 
Defaults to 'production'.}\n@returns(200) {signed_url: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/conversation/get_signed_url\n@desc Get Signed Url\n@required {agent_id: str # The id of the agent you're taking the action on.}\n@optional {include_conversation_id: bool=false # Whether to include a conversation_id with the response. If included, the conversation_signature cannot be used again., branch_id: any # The ID of the branch to use, environment: any # The environment to use for resolving environment variables (e.g. 'production', 'staging'). Defaults to 'production'.}\n@returns(200) {signed_url: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/conversation/token\n@desc Get Webrtc Token\n@required {agent_id: str # The id of the agent you're taking the action on.}\n@optional {participant_name: any # Optional custom participant name. If not provided, user ID will be used, branch_id: any # The ID of the branch to use, environment: any # The environment to use for resolving environment variables (e.g. 'production', 'staging'). 
Defaults to 'production'.}\n@returns(200) {token: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/twilio/outbound-call\n@desc Handle An Outbound Call Via Twilio\n@required {agent_id: str, agent_phone_number_id: str, to_number: str}\n@optional {conversation_initiation_client_data: any, call_recording_enabled: any # Whether to let Twilio record the call., telephony_call_config: map{ringing_timeout_secs: int}}\n@returns(200) {success: bool, message: str, conversation_id: any, callSid: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/twilio/register-call\n@desc Register A Twilio Call And Return Twiml\n@required {agent_id: str, from_number: str, to_number: str}\n@optional {direction: str(inbound/outbound)=inbound, conversation_initiation_client_data: any}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/whatsapp/outbound-call\n@desc Make An Outbound Call Via Whatsapp\n@required {whatsapp_phone_number_id: str, whatsapp_user_id: str, whatsapp_call_permission_request_template_name: str, whatsapp_call_permission_request_template_language_code: str, agent_id: str}\n@optional {conversation_initiation_client_data: any}\n@returns(200) {success: bool, message: str, conversation_id: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/whatsapp/outbound-message\n@desc Send An Outbound Message Via Whatsapp\n@required {whatsapp_phone_number_id: str, whatsapp_user_id: str, template_name: str, template_language_code: str, template_params: [any], agent_id: str}\n@optional {conversation_initiation_client_data: any}\n@returns(200) {conversation_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agents/create\n@desc Create Agent\n@required {conversation_config: map{asr: map, turn: map, tts: map, conversation: map, language_presets: map, vad: map, agent: map}}\n@optional 
{enable_versioning: bool=false # Enable versioning for the agent, platform_settings: any # Platform settings for the agent are all settings that aren't related to the conversation orchestration and content., workflow: map{edges: map, nodes: map, prevent_subagent_loops: bool}, name: any # A name to make the agent easier to find, tags: any # Tags to help classify and filter the agent}\n@returns(200) {agent_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/agents/summaries\n@desc Get Agent Summaries\n@required {agent_ids: [str] # List of agent IDs to fetch summaries for}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/agents/{agent_id}\n@desc Get Agent\n@required {agent_id: str # The id of an agent. This is returned on agent creation.}\n@optional {version_id: any # The ID of the agent version to use, branch_id: any # The ID of the branch to use}\n@returns(200) {agent_id: str, name: str, conversation_config: map{asr: map{quality: str, provider: str, user_input_audio_format: str, keywords: [str]}, turn: map{turn_timeout: num, initial_wait_time: any, silence_end_call_timeout: num, soft_timeout_config: map{timeout_seconds: num, message: str, use_llm_generated_message: bool}, mode: str, turn_eagerness: str, spelling_patience: str, speculative_turn: bool}, tts: map{model_id: str, voice_id: str, supported_voices: [map], expressive_mode: bool, suggested_audio_tags: [map], agent_output_audio_format: str, optimize_streaming_latency: int, stability: num, speed: num, similarity_boost: num, text_normalisation_type: str, pronunciation_dictionary_locators: [map]}, conversation: map{text_only: bool, max_duration_seconds: int, client_events: [str], file_input: map{enabled: bool, max_files_per_conversation: int}, monitoring_enabled: bool, monitoring_events: [str]}, language_presets: map, vad: map{background_voice_detection: bool}, agent: map{first_message: str, language: str, hinglish_mode: bool, 
dynamic_variables: map{dynamic_variable_placeholders: map}, disable_first_message_interruptions: bool, max_conversation_duration_message: str, prompt: map{prompt: str, llm: str, reasoning_effort: any, thinking_budget: any, temperature: num, max_tokens: int, tool_ids: [str], built_in_tools: map, mcp_server_ids: [str], native_mcp_server_ids: [str], knowledge_base: [map], custom_llm: any, ignore_default_personality: any, rag: map, timezone: any, backup_llm_config: any, cascade_timeout_seconds: num, tools: [any]}}}, metadata: map{created_at_unix_secs: int, updated_at_unix_secs: int}, platform_settings: map{evaluation: map{criteria: [map]}, widget: map{variant: str, placement: str, expandable: str, avatar: any, feedback_mode: str, end_feedback: any, bg_color: str, text_color: str, btn_color: str, btn_text_color: str, border_color: str, focus_color: str, border_radius: any, btn_radius: any, action_text: any, start_call_text: any, end_call_text: any, expand_text: any, listening_text: any, speaking_text: any, shareable_page_text: any, shareable_page_show_terms: bool, terms_text: any, terms_html: any, terms_key: any, show_avatar_when_collapsed: any, disable_banner: bool, override_link: any, markdown_link_allowed_hosts: [map], markdown_link_include_www: bool, markdown_link_allow_http: bool, mic_muting_enabled: bool, transcript_enabled: bool, text_input_enabled: bool, conversation_mode_toggle_enabled: bool, default_expanded: bool, always_expanded: bool, dismissible: bool, show_agent_status: bool, show_conversation_id: bool, strip_audio_tags: bool, syntax_highlight_theme: any, text_contents: map{main_label: any, start_call: any, start_chat: any, new_call: any, end_call: any, mute_microphone: any, change_language: any, collapse: any, expand: any, copied: any, accept_terms: any, dismiss_terms: any, listening_status: any, speaking_status: any, connecting_status: any, chatting_status: any, input_label: any, input_placeholder: any, input_placeholder_text_only: any, 
input_placeholder_new_conversation: any, user_ended_conversation: any, agent_ended_conversation: any, conversation_id: any, error_occurred: any, copy_id: any, initiate_feedback: any, request_follow_up_feedback: any, thanks_for_feedback: any, thanks_for_feedback_details: any, follow_up_feedback_placeholder: any, submit: any, go_back: any, send_message: any, text_mode: any, voice_mode: any, switched_to_text_mode: any, switched_to_voice_mode: any, copy: any, download: any, wrap: any, agent_working: any, agent_done: any, agent_error: any}, styles: map{base: any, base_hover: any, base_active: any, base_border: any, base_subtle: any, base_primary: any, base_error: any, accent: any, accent_hover: any, accent_active: any, accent_border: any, accent_subtle: any, accent_primary: any, overlay_padding: any, button_radius: any, input_radius: any, bubble_radius: any, sheet_radius: any, compact_sheet_radius: any, dropdown_sheet_radius: any}, language_selector: bool, supports_text_only: bool, custom_avatar_path: any, language_presets: map}, data_collection: map, overrides: map{conversation_config_override: map{turn: map, tts: map, conversation: map, agent: map}, custom_llm_extra_body: bool, enable_conversation_initiation_client_data_from_webhook: bool}, workspace_overrides: map{conversation_initiation_client_data_webhook: any, webhooks: map{post_call_webhook_id: any, events: [str], send_audio: any}}, testing: map{attached_tests: [map]}, archived: bool, guardrails: map{version: str, focus: map{is_enabled: bool}, prompt_injection: map{is_enabled: bool}, content: map{execution_mode: str, config: map, trigger_action: any}, moderation: any, custom: map{config: map}}, summary_language: any, auth: map{enable_auth: bool, allowlist: [map], require_origin_header: bool, shareable_token: any}, call_limits: map{agent_concurrency_limit: int, daily_limit: int, bursting_enabled: bool}, privacy: map{record_voice: bool, retention_days: int, delete_transcript_and_pii: bool, delete_audio: bool, 
apply_to_existing_conversations: bool, zero_retention_mode: bool, conversation_history_redaction: map{enabled: bool, entities: [str]}}, safety: map{is_blocked_ivc: bool, is_blocked_non_ivc: bool, ignore_safety_evaluation: bool}}, phone_numbers: [any], whatsapp_accounts: [map], workflow: map{edges: map, nodes: map, prevent_subagent_loops: bool}, access_info: any, tags: [str], version_id: any, branch_id: any, main_branch_id: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/convai/agents/{agent_id}\n@desc Patches An Agent Settings\n@required {agent_id: str # The id of an agent. This is returned on agent creation.}\n@optional {enable_versioning_if_not_enabled: bool=false # Enable versioning for the agent, if not already enabled, branch_id: any # The ID of the branch to use, conversation_config: any # Conversation configuration for an agent, platform_settings: any # Platform settings for the agent are all settings that aren't related to the conversation orchestration and content., workflow: map{edges: map, nodes: map, prevent_subagent_loops: bool}, name: any # A name to make the agent easier to find, tags: any # Tags to help classify and filter the agent, version_description: any # Description for this version when publishing changes (only applicable for versioned agents)}\n@returns(200) {agent_id: str, name: str, conversation_config: map{asr: map{quality: str, provider: str, user_input_audio_format: str, keywords: [str]}, turn: map{turn_timeout: num, initial_wait_time: any, silence_end_call_timeout: num, soft_timeout_config: map{timeout_seconds: num, message: str, use_llm_generated_message: bool}, mode: str, turn_eagerness: str, spelling_patience: str, speculative_turn: bool}, tts: map{model_id: str, voice_id: str, supported_voices: [map], expressive_mode: bool, suggested_audio_tags: [map], agent_output_audio_format: str, optimize_streaming_latency: int, stability: num, speed: num, similarity_boost: num, text_normalisation_type: str, 
pronunciation_dictionary_locators: [map]}, conversation: map{text_only: bool, max_duration_seconds: int, client_events: [str], file_input: map{enabled: bool, max_files_per_conversation: int}, monitoring_enabled: bool, monitoring_events: [str]}, language_presets: map, vad: map{background_voice_detection: bool}, agent: map{first_message: str, language: str, hinglish_mode: bool, dynamic_variables: map{dynamic_variable_placeholders: map}, disable_first_message_interruptions: bool, max_conversation_duration_message: str, prompt: map{prompt: str, llm: str, reasoning_effort: any, thinking_budget: any, temperature: num, max_tokens: int, tool_ids: [str], built_in_tools: map, mcp_server_ids: [str], native_mcp_server_ids: [str], knowledge_base: [map], custom_llm: any, ignore_default_personality: any, rag: map, timezone: any, backup_llm_config: any, cascade_timeout_seconds: num, tools: [any]}}}, metadata: map{created_at_unix_secs: int, updated_at_unix_secs: int}, platform_settings: map{evaluation: map{criteria: [map]}, widget: map{variant: str, placement: str, expandable: str, avatar: any, feedback_mode: str, end_feedback: any, bg_color: str, text_color: str, btn_color: str, btn_text_color: str, border_color: str, focus_color: str, border_radius: any, btn_radius: any, action_text: any, start_call_text: any, end_call_text: any, expand_text: any, listening_text: any, speaking_text: any, shareable_page_text: any, shareable_page_show_terms: bool, terms_text: any, terms_html: any, terms_key: any, show_avatar_when_collapsed: any, disable_banner: bool, override_link: any, markdown_link_allowed_hosts: [map], markdown_link_include_www: bool, markdown_link_allow_http: bool, mic_muting_enabled: bool, transcript_enabled: bool, text_input_enabled: bool, conversation_mode_toggle_enabled: bool, default_expanded: bool, always_expanded: bool, dismissible: bool, show_agent_status: bool, show_conversation_id: bool, strip_audio_tags: bool, syntax_highlight_theme: any, text_contents: 
map{main_label: any, start_call: any, start_chat: any, new_call: any, end_call: any, mute_microphone: any, change_language: any, collapse: any, expand: any, copied: any, accept_terms: any, dismiss_terms: any, listening_status: any, speaking_status: any, connecting_status: any, chatting_status: any, input_label: any, input_placeholder: any, input_placeholder_text_only: any, input_placeholder_new_conversation: any, user_ended_conversation: any, agent_ended_conversation: any, conversation_id: any, error_occurred: any, copy_id: any, initiate_feedback: any, request_follow_up_feedback: any, thanks_for_feedback: any, thanks_for_feedback_details: any, follow_up_feedback_placeholder: any, submit: any, go_back: any, send_message: any, text_mode: any, voice_mode: any, switched_to_text_mode: any, switched_to_voice_mode: any, copy: any, download: any, wrap: any, agent_working: any, agent_done: any, agent_error: any}, styles: map{base: any, base_hover: any, base_active: any, base_border: any, base_subtle: any, base_primary: any, base_error: any, accent: any, accent_hover: any, accent_active: any, accent_border: any, accent_subtle: any, accent_primary: any, overlay_padding: any, button_radius: any, input_radius: any, bubble_radius: any, sheet_radius: any, compact_sheet_radius: any, dropdown_sheet_radius: any}, language_selector: bool, supports_text_only: bool, custom_avatar_path: any, language_presets: map}, data_collection: map, overrides: map{conversation_config_override: map{turn: map, tts: map, conversation: map, agent: map}, custom_llm_extra_body: bool, enable_conversation_initiation_client_data_from_webhook: bool}, workspace_overrides: map{conversation_initiation_client_data_webhook: any, webhooks: map{post_call_webhook_id: any, events: [str], send_audio: any}}, testing: map{attached_tests: [map]}, archived: bool, guardrails: map{version: str, focus: map{is_enabled: bool}, prompt_injection: map{is_enabled: bool}, content: map{execution_mode: str, config: map, 
trigger_action: any}, moderation: any, custom: map{config: map}}, summary_language: any, auth: map{enable_auth: bool, allowlist: [map], require_origin_header: bool, shareable_token: any}, call_limits: map{agent_concurrency_limit: int, daily_limit: int, bursting_enabled: bool}, privacy: map{record_voice: bool, retention_days: int, delete_transcript_and_pii: bool, delete_audio: bool, apply_to_existing_conversations: bool, zero_retention_mode: bool, conversation_history_redaction: map{enabled: bool, entities: [str]}}, safety: map{is_blocked_ivc: bool, is_blocked_non_ivc: bool, ignore_safety_evaluation: bool}}, phone_numbers: [any], whatsapp_accounts: [map], workflow: map{edges: map, nodes: map, prevent_subagent_loops: bool}, access_info: any, tags: [str], version_id: any, branch_id: any, main_branch_id: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/agents/{agent_id}\n@desc Delete Agent\n@required {agent_id: str # The id of an agent. This is returned on agent creation.}\n@returns(200) Successful Response\n@returns(204) Agent successfully deleted\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/agents/{agent_id}/widget\n@desc Get Agent Widget Config\n@required {agent_id: str # The id of an agent. This is returned on agent creation.}\n@optional {conversation_signature: any # An expiring token that enables a websocket conversation to start. 
These can be generated for an agent using the /v1/convai/conversation/get-signed-url endpoint}\n@returns(200) {agent_id: str, widget_config: map{variant: str, placement: str, expandable: str, avatar: any, feedback_mode: str, end_feedback: any, bg_color: str, text_color: str, btn_color: str, btn_text_color: str, border_color: str, focus_color: str, border_radius: any, btn_radius: any, action_text: any, start_call_text: any, end_call_text: any, expand_text: any, listening_text: any, speaking_text: any, shareable_page_text: any, shareable_page_show_terms: bool, terms_text: any, terms_html: any, terms_key: any, show_avatar_when_collapsed: any, disable_banner: bool, override_link: any, markdown_link_allowed_hosts: [map], markdown_link_include_www: bool, markdown_link_allow_http: bool, mic_muting_enabled: bool, transcript_enabled: bool, text_input_enabled: bool, conversation_mode_toggle_enabled: bool, default_expanded: bool, always_expanded: bool, dismissible: bool, show_agent_status: bool, show_conversation_id: bool, strip_audio_tags: bool, syntax_highlight_theme: any, text_contents: map{main_label: any, start_call: any, start_chat: any, new_call: any, end_call: any, mute_microphone: any, change_language: any, collapse: any, expand: any, copied: any, accept_terms: any, dismiss_terms: any, listening_status: any, speaking_status: any, connecting_status: any, chatting_status: any, input_label: any, input_placeholder: any, input_placeholder_text_only: any, input_placeholder_new_conversation: any, user_ended_conversation: any, agent_ended_conversation: any, conversation_id: any, error_occurred: any, copy_id: any, initiate_feedback: any, request_follow_up_feedback: any, thanks_for_feedback: any, thanks_for_feedback_details: any, follow_up_feedback_placeholder: any, submit: any, go_back: any, send_message: any, text_mode: any, voice_mode: any, switched_to_text_mode: any, switched_to_voice_mode: any, copy: any, download: any, wrap: any, agent_working: any, agent_done: any, 
agent_error: any}, styles: map{base: any, base_hover: any, base_active: any, base_border: any, base_subtle: any, base_primary: any, base_error: any, accent: any, accent_hover: any, accent_active: any, accent_border: any, accent_subtle: any, accent_primary: any, overlay_padding: any, button_radius: any, input_radius: any, bubble_radius: any, sheet_radius: any, compact_sheet_radius: any, dropdown_sheet_radius: any}, language: str, supported_language_overrides: any, language_presets: map, text_only: bool, supports_text_only: bool, first_message: any, use_rtc: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/agents/{agent_id}/link\n@desc Get Shareable Agent Link\n@required {agent_id: str # The id of an agent. This is returned on agent creation.}\n@returns(200) {agent_id: str, token: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agents/{agent_id}/avatar\n@desc Post Agent Avatar\n@required {agent_id: str # The id of an agent. This is returned on agent creation.}\n@returns(200) {agent_id: str, avatar_url: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/agents\n@desc List Agents\n@optional {page_size: int=30 # How many Agents to return at maximum. Can not exceed 100, defaults to 30., search: any # Search by agent name., archived: any=false # Filter agents by archived status, show_only_owned_agents: bool=false # If set to true, the endpoint will omit any agents that were shared with you by someone else and include only the ones you own. Deprecated: use created_by_user_id instead., created_by_user_id: any # Filter agents by creator user ID. When set, only agents created by this user are returned. Takes precedence over show_only_owned_agents. Use '@me' to refer to the authenticated user., sort_direction: str=desc # The direction to sort the results, sort_by: any # The field to sort the results by, cursor: any # Used for fetching next page. 
Cursor is returned in the response.}\n@returns(200) {agents: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/agent/{agent_id}/knowledge-base/size\n@desc Returns The Size Of The Agent's Knowledge Base\n@required {agent_id: str}\n@returns(200) {number_of_pages: num} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agent/{agent_id}/llm-usage/calculate\n@desc Calculate Expected Llm Usage For An Agent\n@required {agent_id: str}\n@optional {prompt_length: any # Length of the prompt in characters., number_of_pages: any # Pages of content in PDF documents or URLs in the agent's knowledge base., rag_enabled: any # Whether RAG is enabled.}\n@returns(200) {llm_prices: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agents/{agent_id}/duplicate\n@desc Duplicate Agent\n@required {agent_id: str # The id of an agent. This is returned on agent creation.}\n@optional {name: any # A name to make the agent easier to find}\n@returns(200) {agent_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agents/{agent_id}/simulate-conversation\n@desc Simulates A Conversation\n@required {agent_id: str # The id of an agent. 
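The listing endpoints in this file all page the same way: pass the returned `next_cursor` back as `cursor` until `has_more` is false. A minimal sketch of that loop against List Agents; the `https://api.elevenlabs.io` base URL is an assumption, since this spec leaves the base URL unspecified:

```python
import json
import urllib.parse
import urllib.request

# Assumption: the public ElevenLabs base URL; this spec does not pin one.
BASE_URL = "https://api.elevenlabs.io"

def build_list_agents_request(api_key, page_size=30, cursor=None):
    """Build a GET /v1/convai/agents request; page_size cannot exceed 100 per the spec."""
    if not 1 <= page_size <= 100:
        raise ValueError("page_size must be between 1 and 100")
    params = {"page_size": str(page_size)}
    if cursor is not None:
        params["cursor"] = cursor
    url = f"{BASE_URL}/v1/convai/agents?{urllib.parse.urlencode(params)}"
    return urllib.request.Request(url, headers={"xi-api-key": api_key})

def iter_agents(api_key):
    """Yield every agent, feeding next_cursor back until has_more is false."""
    cursor = None
    while True:
        req = build_list_agents_request(api_key, cursor=cursor)
        with urllib.request.urlopen(req) as resp:
            page = json.load(resp)
        yield from page["agents"]
        if not page.get("has_more"):
            return
        cursor = page["next_cursor"]
```

The same loop works for the other cursor-paginated endpoints here by swapping the path and the result key (`conversations`, `documents`, `tools`, ...).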
This is returned on agent creation., simulation_specification: map{simulated_user_config!: map, tool_mock_config: map, partial_conversation_history: [map], dynamic_variables: map} # A specification that will be used to simulate a conversation between an agent and an AI user.}\n@optional {extra_evaluation_criteria: any # A list of evaluation criteria to test, new_turns_limit: int=10000 # Maximum number of new turns to generate in the conversation simulation}\n@returns(200) {simulated_conversation: [map], analysis: map{evaluation_criteria_results: map, data_collection_results: map, evaluation_criteria_results_list: [map], data_collection_results_list: [map], call_successful: str, transcript_summary: str, call_summary_title: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agents/{agent_id}/simulate-conversation/stream\n@desc Simulates A Conversation (Stream)\n@required {agent_id: str # The id of an agent. This is returned on agent creation., simulation_specification: map{simulated_user_config!: map, tool_mock_config: map, partial_conversation_history: [map], dynamic_variables: map} # A specification that will be used to simulate a conversation between an agent and an AI user.}\n@optional {extra_evaluation_criteria: any # A list of evaluation criteria to test, new_turns_limit: int=10000 # Maximum number of new turns to generate in the conversation simulation}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agent-testing/create\n@desc Create Agent Response Test\n@returns(200) {id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/agent-testing/{test_id}\n@desc Get Agent Response Test By Id\n@required {test_id: str # The id of a chat response test. 
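Both simulate-conversation endpoints above take the same body: inside `simulation_specification`, only `simulated_user_config` is required. A sketch of assembling it; the spec treats the config maps as opaque, so any keys inside them (like `persona` below) are illustrative, not from this spec:

```python
import json

def build_simulation_payload(simulated_user_config, tool_mock_config=None,
                             partial_conversation_history=None,
                             extra_evaluation_criteria=None, new_turns_limit=10000):
    """Assemble the body for POST /v1/convai/agents/{agent_id}/simulate-conversation.

    simulated_user_config is the only required field of simulation_specification;
    its internal shape is an opaque map in this spec.
    """
    spec = {"simulated_user_config": simulated_user_config}
    if tool_mock_config is not None:
        spec["tool_mock_config"] = tool_mock_config
    if partial_conversation_history is not None:
        spec["partial_conversation_history"] = partial_conversation_history
    body = {"simulation_specification": spec, "new_turns_limit": new_turns_limit}
    if extra_evaluation_criteria is not None:
        body["extra_evaluation_criteria"] = extra_evaluation_criteria
    return json.dumps(body)
```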
This is returned on test creation.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint PUT /v1/convai/agent-testing/{test_id}\n@desc Update Agent Response Test\n@required {test_id: str # The id of a chat response test. This is returned on test creation.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/agent-testing/{test_id}\n@desc Delete Agent Response Test\n@required {test_id: str # The id of a chat response test. This is returned on test creation.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agent-testing/summaries\n@desc Get Agent Response Test Summaries By Ids\n@required {test_ids: [str] # List of test IDs to fetch. No duplicates allowed.}\n@returns(200) {tests: map} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/agent-testing\n@desc List Agent Response Tests\n@optional {cursor: any # Used for fetching next page. Cursor is returned in the response., page_size: int=30 # How many Tests to return at maximum. Can not exceed 100, defaults to 30., search: any # Search query to filter tests by name., parent_folder_id: any # Filter by parent folder ID. Use 'root' to get items in the root folder., types: any # If present, the endpoint will return only tests/folders of the given types., include_folders: any # Deprecated. Use the `types` query param and include `folder` instead., sort_mode: str(default/folders_first)=default # Sort mode for listing tests. Use 'folders_first' to place folders before tests.}\n@returns(200) {tests: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/test-invocations\n@desc List Test Invocations\n@required {agent_id: str # Filter by agent ID}\n@optional {page_size: int=30 # How many Tests to return at maximum. Can not exceed 100, defaults to 30., cursor: any # Used for fetching next page. 
Cursor is returned in the response.}\n@returns(200) {meta: map{total: any, page: any, page_size: any}, results: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agents/{agent_id}/run-tests\n@desc Run Tests On The Agent\n@required {agent_id: str # The id of an agent. This is returned on agent creation., tests: [map{test_id!: str, workflow_node_id: any, root_folder_id: any, root_folder_name: any}] # List of tests to run on the agent}\n@optional {agent_config_override: any # Configuration overrides to use for testing. If not provided, the agent's default configuration will be used., branch_id: any # ID of the branch to run the tests on. If not provided, the tests will be run on the agent default configuration.}\n@returns(200) {id: str, agent_id: any, branch_id: any, created_at: int, folder_id: any, test_runs: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/test-invocations/{test_invocation_id}\n@desc Get Test Invocation\n@required {test_invocation_id: str # The id of a test invocation. This is returned when tests are run.}\n@returns(200) {id: str, agent_id: any, branch_id: any, created_at: int, folder_id: any, test_runs: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/test-invocations/{test_invocation_id}/resubmit\n@desc Resubmit Tests\n@required {test_invocation_id: str # The id of a test invocation. This is returned when tests are run., test_run_ids: [str] # List of test run IDs to resubmit, agent_id: str # Agent ID to resubmit tests for}\n@optional {agent_config_override: any # Configuration overrides to use for testing. If not provided, the agent's default configuration will be used., branch_id: any # ID of the branch to run the tests on. 
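Run Tests On The Agent takes a list of `{test_id, ...}` maps plus optional overrides. A minimal body builder; the field names come from the spec above, the values are illustrative:

```python
import json

def build_run_tests_body(test_ids, branch_id=None, agent_config_override=None):
    """Assemble the body for POST /v1/convai/agents/{agent_id}/run-tests.

    Each entry in tests needs at least test_id; workflow_node_id and the
    root_folder_* fields are optional per the spec.
    """
    body = {"tests": [{"test_id": t} for t in test_ids]}
    if branch_id is not None:
        body["branch_id"] = branch_id
    if agent_config_override is not None:
        body["agent_config_override"] = agent_config_override
    return json.dumps(body)
```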
If not provided, the tests will be run on the agent's default configuration.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/conversations\n@desc Get Conversations\n@optional {cursor: any # Used for fetching next page. Cursor is returned in the response., agent_id: any # The id of the agent you're taking the action on., call_successful: any # The result of the success evaluation, call_start_before_unix: any # Unix timestamp (in seconds) to filter conversations up to this start date., call_start_after_unix: any # Unix timestamp (in seconds) to filter conversations after this start date., call_duration_min_secs: any # Minimum call duration in seconds., call_duration_max_secs: any # Maximum call duration in seconds., rating_max: any # Maximum overall rating (1-5)., rating_min: any # Minimum overall rating (1-5)., has_feedback_comment: any # Filter conversations with user feedback comments., user_id: any # Filter conversations by the user ID who initiated them., evaluation_params: any # Evaluation filters. Repeat param. Format: criteria_id:result. Example: eval=value_framing:success, data_collection_params: any # Data collection filters. Repeat param. Format: id:op:value where op is one of eq|neq|gt|gte|lt|lte|in|exists|missing. For in, pipe-delimit values., tool_names: any # Filter conversations by tool names used during the call., tool_names_successful: any # Filter conversations by tool names that had successful calls., tool_names_errored: any # Filter conversations by tool names that had errored calls., main_languages: any # Filter conversations by detected main language (language code)., page_size: int=30 # How many conversations to return at maximum. 
Can not exceed 100, defaults to 30., summary_mode: str(exclude/include)=exclude # Whether to include transcript summaries in the response., search: any # Full-text or fuzzy search over transcript messages, conversation_initiation_source: any, branch_id: any # Filter conversations by branch ID.}\n@returns(200) {conversations: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/users\n@desc Get Conversation Users\n@optional {agent_id: any # The id of the agent you're taking the action on., branch_id: any # Filter conversations by branch ID., call_start_before_unix: any # Unix timestamp (in seconds) to filter conversations up to this start date., call_start_after_unix: any # Unix timestamp (in seconds) to filter conversations after this start date., search: any # Search/filter by user ID (exact match)., page_size: int=30 # How many users to return at maximum. Defaults to 30., sort_by: str=last_contact_unix_secs # The field to sort the results by. Defaults to last_contact_unix_secs., cursor: any # Used for fetching next page. 
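The `evaluation_params` and `data_collection_params` filters on Get Conversations use small string mini-formats: `criteria_id:result` and `id:op:value`, with `in` taking pipe-delimited values. Hedged helpers for building them; how `exists`/`missing` handle the value segment is an assumption, since the spec only gives the general shape:

```python
def evaluation_param(criteria_id, result):
    """Format one evaluation filter as criteria_id:result (e.g. value_framing:success)."""
    return f"{criteria_id}:{result}"

VALID_OPS = {"eq", "neq", "gt", "gte", "lt", "lte", "in", "exists", "missing"}

def data_collection_param(field_id, op, value=None):
    """Format one data-collection filter as id:op:value; 'in' pipe-delimits values."""
    if op not in VALID_OPS:
        raise ValueError(f"unknown op: {op}")
    if op in ("exists", "missing"):
        # Assumption: presence checks omit the value segment entirely.
        return f"{field_id}:{op}"
    if op == "in":
        value = "|".join(value)
    return f"{field_id}:{op}:{value}"
```

Both are repeat params, so each formatted string goes in as its own query-string entry.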
Cursor is returned in the response.}\n@returns(200) {users: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/conversations/{conversation_id}\n@desc Get Conversation Details\n@required {conversation_id: str # The id of the conversation you're taking the action on.}\n@returns(200) {agent_id: str, agent_name: any, status: str, user_id: any, branch_id: any, version_id: any, metadata: map{start_time_unix_secs: int, accepted_time_unix_secs: any, call_duration_secs: int, cost: any, deletion_settings: map{deletion_time_unix_secs: any, deleted_logs_at_time_unix_secs: any, deleted_audio_at_time_unix_secs: any, deleted_transcript_at_time_unix_secs: any, delete_transcript_and_pii: bool, delete_audio: bool}, feedback: map{type: any, overall_score: any, likes: int, dislikes: int, rating: any, comment: any}, authorization_method: str, charging: map{dev_discount: bool, is_burst: bool, tier: any, llm_usage: map{irreversible_generation: map, initiated_generation: map}, llm_price: any, llm_charge: any, call_charge: any, free_minutes_consumed: num, free_llm_dollars_consumed: num}, phone_call: any, batch_call: any, termination_reason: str, error: any, warnings: [str], main_language: any, rag_usage: any, text_only: bool, features_usage: map{language_detection: map{enabled: bool, used: bool}, transfer_to_agent: map{enabled: bool, used: bool}, transfer_to_number: map{enabled: bool, used: bool}, multivoice: map{enabled: bool, used: bool}, dtmf_tones: map{enabled: bool, used: bool}, external_mcp_servers: map{enabled: bool, used: bool}, pii_zrm_workspace: bool, pii_zrm_agent: bool, tool_dynamic_variable_updates: map{enabled: bool, used: bool}, is_livekit: bool, voicemail_detection: map{enabled: bool, used: bool}, workflow: map{enabled: bool, tool_node: map, standalone_agent_node: map, phone_number_node: map, end_node: map}, agent_testing: map{enabled: bool, tests_ran_after_last_modification: bool, 
tests_ran_in_last_7_days: bool}, versioning: map{enabled: bool, used: bool}, file_input: map{enabled: bool, used: bool}}, eleven_assistant: map{is_eleven_assistant: bool}, initiator_id: any, conversation_initiation_source: str, conversation_initiation_source_version: any, timezone: any, async_metadata: any, whatsapp: any, agent_created_from: str, agent_last_updated_from: str}, analysis: any, conversation_initiation_client_data: map{conversation_config_override: map{turn: any, tts: any, conversation: any, agent: any}, custom_llm_extra_body: map, user_id: any, source_info: map{source: any, version: any}, branch_id: any, environment: any, dynamic_variables: map}, environment: str, conversation_id: str, has_audio: bool, has_user_audio: bool, has_response_audio: bool, transcript: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/conversations/{conversation_id}\n@desc Delete Conversation\n@required {conversation_id: str # The id of the conversation you're taking the action on.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/conversations/{conversation_id}/audio\n@desc Get Conversation Audio\n@required {conversation_id: str # The id of the conversation you're taking the action on.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/conversations/{conversation_id}/feedback\n@desc Send Conversation Feedback\n@required {conversation_id: str # The id of the conversation you're taking the action on.}\n@optional {feedback: any # Either 'like' or 'dislike' to indicate the feedback for the conversation.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/conversations/messages/text-search\n@desc Text Search Conversation Messages\n@required {text_query: str # The search query text for full-text and fuzzy matching}\n@optional {agent_id: any # The id of the agent you're taking the action on., 
call_successful: any # The result of the success evaluation, call_start_before_unix: any # Unix timestamp (in seconds) to filter conversations up to this start date., call_start_after_unix: any # Unix timestamp (in seconds) to filter conversations after this start date., call_duration_min_secs: any # Minimum call duration in seconds., call_duration_max_secs: any # Maximum call duration in seconds., rating_max: any # Maximum overall rating (1-5)., rating_min: any # Minimum overall rating (1-5)., has_feedback_comment: any # Filter conversations with user feedback comments., user_id: any # Filter conversations by the user ID who initiated them., evaluation_params: any # Evaluation filters. Repeat param. Format: criteria_id:result. Example: eval=value_framing:success, data_collection_params: any # Data collection filters. Repeat param. Format: id:op:value where op is one of eq|neq|gt|gte|lt|lte|in|exists|missing. For in, pipe-delimit values., tool_names: any # Filter conversations by tool names used during the call., tool_names_successful: any # Filter conversations by tool names that had successful calls., tool_names_errored: any # Filter conversations by tool names that had errored calls., main_languages: any # Filter conversations by detected main language (language code)., page_size: int=20 # Number of results per page. Max 50., summary_mode: str(exclude/include)=exclude # Whether to include transcript summaries in the response., conversation_initiation_source: any, branch_id: any # Filter conversations by branch ID., sort_by: str=search_score # Sort order for search results. 'search_score' sorts by search score, 'created_at' sorts by conversation start time., cursor: any # Used for fetching next page. 
Cursor is returned in the response.}\n@returns(200) {meta: map{total: any, page: any, page_size: any}, results: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/conversations/messages/smart-search\n@desc Smart Search Conversation Messages\n@required {text_query: str # The search query text for semantic similarity matching}\n@optional {agent_id: any # The id of the agent you're taking the action on., page_size: int=20 # Number of results per page. Max 50., cursor: any # Used for fetching next page. Cursor is returned in the response.}\n@returns(200) {meta: map{total: any, page: any, page_size: any}, results: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/phone-numbers\n@desc Import Phone Number\n@returns(200) {phone_number_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/phone-numbers\n@desc List Phone Numbers\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/phone-numbers/{phone_number_id}\n@desc Get Phone Number\n@required {phone_number_id: str # The id of a phone number. This is returned on phone number import.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/phone-numbers/{phone_number_id}\n@desc Delete Phone Number\n@required {phone_number_id: str # The id of a phone number. This is returned on phone number import.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/convai/phone-numbers/{phone_number_id}\n@desc Update Phone Number\n@required {phone_number_id: str # The id of a phone number. 
This is returned on phone number import.}\n@optional {agent_id: any, label: any, inbound_trunk_config: any, outbound_trunk_config: any, livekit_stack: any}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/llm-usage/calculate\n@desc Calculate Expected Llm Usage\n@required {prompt_length: int # Length of the prompt in characters., number_of_pages: int # Pages of content in PDF documents or URLs in the agent's knowledge base., rag_enabled: bool # Whether RAG is enabled.}\n@returns(200) {llm_prices: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/llm/list\n@desc List Available Llms\n@returns(200) {llms: [map], default_deprecation_config: map{warning_start_days: int, fallback_start_days: int, fallback_complete_days: int, fallback_start_percentage: int, fallback_complete_percentage: int}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/conversations/{conversation_id}/files\n@desc Upload File\n@required {conversation_id: str}\n@returns(200) {file_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/conversations/{conversation_id}/files/{file_id}\n@desc Delete File Upload\n@required {file_id: str, conversation_id: str}\n@returns(200) {file_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/analytics/live-count\n@desc Get Live Count\n@optional {agent_id: any # The id of an agent to restrict the analytics to.}\n@returns(200) {count: int} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/knowledge-base/summaries\n@desc Get Knowledge Base Summaries By Ids\n@required {document_ids: [str] # The ids of knowledge base documents.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/knowledge-base\n@desc Add To Knowledge Base\n@optional {agent_id: str=}\n@returns(200) {id: str, name: str, folder_path: 
[map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/knowledge-base\n@desc Get Knowledge Base List\n@optional {page_size: int=30 # How many documents to return at maximum. Can not exceed 100, defaults to 30., search: any # If specified, the endpoint returns only such knowledge base documents whose names start with this string., show_only_owned_documents: bool=false # If set to true, the endpoint will return only documents owned by you (and not shared from somebody else). Deprecated: use created_by_user_id instead., created_by_user_id: any # Filter documents by creator user ID. When set, only documents created by this user are returned. Takes precedence over show_only_owned_documents. Use '@me' to refer to the authenticated user., types: any # If present, the endpoint will return only documents of the given types., parent_folder_id: any # If set, the endpoint will return only documents that are direct children of the given folder., ancestor_folder_id: any # If set, the endpoint will return only documents that are descendants of the given folder., folders_first: bool=false # Whether folders should be returned first in the list of documents., sort_direction: str=desc # The direction to sort the results, sort_by: any # The field to sort the results by, cursor: any # Used for fetching next page. 
Cursor is returned in the response.}\n@returns(200) {documents: [any], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/knowledge-base/url\n@desc Create Url Document\n@required {url: str # URL to a page of documentation that the agent will have access to in order to interact with users.}\n@optional {name: any # A custom, human-readable name for the document., parent_folder_id: any # If set, the created document or folder will be placed inside the given folder., enable_auto_sync: bool=false # Whether to enable auto-sync for this URL document., auto_remove: bool=false # Whether to automatically remove the document if the URL becomes unavailable. Only applicable when auto-sync is enabled.}\n@returns(200) {id: str, name: str, folder_path: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/knowledge-base/file\n@desc Create File Document\n@returns(200) {id: str, name: str, folder_path: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/knowledge-base/text\n@desc Create Text Document\n@required {text: str # Text content to be added to the knowledge base.}\n@optional {name: any # A custom, human-readable name for the document., parent_folder_id: any # If set, the created document or folder will be placed inside the given folder.}\n@returns(200) {id: str, name: str, folder_path: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/knowledge-base/folder\n@desc Create Folder\n@required {name: str # A custom, human-readable name for the document.}\n@optional {parent_folder_id: any # If set, the created document or folder will be placed inside the given folder., enable_auto_sync: bool=false # Whether to enable auto-sync for this URL document., auto_remove: bool=false # Whether to automatically remove the document if the URL becomes unavailable. 
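Create Url Document takes a required `url` plus optional naming and auto-sync flags, and per the spec `auto_remove` only applies when auto-sync is enabled. A sketch of building that request; the `https://api.elevenlabs.io` base URL is an assumption, since this spec leaves the base URL unspecified:

```python
import json
import urllib.request

# Assumption: the public ElevenLabs base URL; this spec does not pin one.
BASE_URL = "https://api.elevenlabs.io"

def create_url_document(api_key, url, name=None, parent_folder_id=None,
                        enable_auto_sync=False, auto_remove=False):
    """Build a POST /v1/convai/knowledge-base/url request (returns {id, name, folder_path})."""
    if auto_remove and not enable_auto_sync:
        # Per the spec, auto_remove is only applicable when auto-sync is enabled.
        raise ValueError("auto_remove requires enable_auto_sync")
    body = {"url": url, "enable_auto_sync": enable_auto_sync, "auto_remove": auto_remove}
    if name is not None:
        body["name"] = name
    if parent_folder_id is not None:
        body["parent_folder_id"] = parent_folder_id
    return urllib.request.Request(
        f"{BASE_URL}/v1/convai/knowledge-base/url",
        data=json.dumps(body).encode(),
        headers={"xi-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )  # caller performs urllib.request.urlopen(req)
```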
Only applicable when auto-sync is enabled.}\n@returns(200) {id: str, name: str, folder_path: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/convai/knowledge-base/{documentation_id}\n@desc Update Document\n@required {documentation_id: str # The id of a document from the knowledge base. This is returned on document addition., name: str # A custom, human-readable name for the document.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/knowledge-base/{documentation_id}\n@desc Get Documentation From Knowledge Base\n@required {documentation_id: str # The id of a document from the knowledge base. This is returned on document addition.}\n@optional {agent_id: str=}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/knowledge-base/{documentation_id}\n@desc Delete Knowledge Base Document Or Folder\n@required {documentation_id: str # The id of a document from the knowledge base. This is returned on document addition.}\n@optional {force: bool=false # If set to true, the document or folder will be deleted regardless of whether it is used by any agents and it will be removed from the dependent agents. For non-empty folders, this will also delete all child documents and folders.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/knowledge-base/rag-index\n@desc Compute Rag Indexes In Batch\n@required {items: [map{document_id!: str, create_if_missing!: bool, model!: str}] # List of requested RAG indexes. 
Minimum 1, maximum 100 items.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/knowledge-base/rag-index\n@desc Get Rag Index Overview.\n@returns(200) {total_used_bytes: int, total_max_bytes: int, models: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/knowledge-base/{documentation_id}/refresh\n@desc Refresh Url Document Content\n@required {documentation_id: str # The id of a document from the knowledge base. This is returned on document addition.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/knowledge-base/{documentation_id}/rag-index\n@desc Compute Rag Index.\n@required {documentation_id: str # The id of a document from the knowledge base. This is returned on document addition., model: str(e5_mistral_7b_instruct/multilingual_e5_large_instruct)=e5_mistral_7b_instruct}\n@returns(200) {id: str, model: str, status: str, progress_percentage: num, document_model_index_usage: map{used_bytes: int}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/knowledge-base/{documentation_id}/rag-index\n@desc Get Rag Indexes Of The Specified Knowledge Base Document.\n@required {documentation_id: str # The id of a document from the knowledge base. 
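Compute Rag Indexes In Batch takes 1 to 100 `{document_id, create_if_missing, model}` items. A sketch of building that body; the model names are taken from the single-document rag-index endpoint's enum, since the batch endpoint itself does not list them here:

```python
import json

# From the single-document rag-index endpoint's enum in this spec.
RAG_MODELS = ("e5_mistral_7b_instruct", "multilingual_e5_large_instruct")

def build_rag_index_batch(document_ids, model=RAG_MODELS[0], create_if_missing=True):
    """Assemble the body for POST /v1/convai/knowledge-base/rag-index (1-100 items)."""
    if model not in RAG_MODELS:
        raise ValueError(f"unknown model: {model}")
    if not 1 <= len(document_ids) <= 100:
        raise ValueError("between 1 and 100 document ids required")
    items = [{"document_id": d, "create_if_missing": create_if_missing, "model": model}
             for d in document_ids]
    return json.dumps({"items": items})
```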
This is returned on document addition., rag_index_id: str # The id of a RAG index of a document from the knowledge base.}\n@returns(200) {id: str, model: str, status: str, progress_percentage: num, document_model_index_usage: map{used_bytes: int}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/knowledge-base/{documentation_id}/dependent-agents\n@desc Get Dependent Agents List\n@required {documentation_id: str # The id of a document from the knowledge base. This is returned on document addition.}\n@optional {dependent_type: str=all # Type of dependent agents to return., page_size: int=30 # How many documents to return at maximum. Can not exceed 100, defaults to 30., cursor: any # Used for fetching next page. Cursor is returned in the response.}\n@returns(200) {agents: [any], branches: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/knowledge-base/{documentation_id}/content\n@desc Get Document Content\n@required {documentation_id: str # The id of a document from the knowledge base. This is returned on document addition.}\n@returns(200) Streaming document content\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/knowledge-base/{documentation_id}/source-file-url\n@desc Get Document Source File Url\n@required {documentation_id: str # The id of a document from the knowledge base. This is returned on document addition.}\n@returns(200) {signed_url: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/knowledge-base/{documentation_id}/chunk/{chunk_id}\n@desc Get Documentation Chunk From Knowledge Base\n@required {documentation_id: str # The id of a document from the knowledge base. 
This is returned on document addition., chunk_id: str # The id of a document RAG chunk from the knowledge base.}\n@optional {embedding_model: any # The embedding model used to retrieve the chunk.}\n@returns(200) {id: str, name: str, content: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/knowledge-base/{document_id}/move\n@desc Move Entity To Folder\n@required {document_id: str # The id of a document from the knowledge base. This is returned on document addition.}\n@optional {move_to: any # The folder to move the entities to. If not set, the entities will be moved to the root folder.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/knowledge-base/bulk-move\n@desc Bulk Move Entities To Folder\n@required {document_ids: [str] # The ids of documents or folders from the knowledge base.}\n@optional {move_to: any # The folder to move the entities to. If not set, the entities will be moved to the root folder.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/tools\n@desc Add Tool\n@required {tool_config: any # Configuration for the tool}\n@optional {response_mocks: any # Mock responses with optional parameter conditions. Evaluated top-to-bottom; first match wins.}\n@returns(200) {id: str, tool_config: any, access_info: map{is_creator: bool, creator_name: str, creator_email: str, role: str}, usage_stats: map{total_calls: int, avg_latency_secs: num}, response_mocks: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/tools\n@desc Get Tools\n@optional {search: any # If specified, the endpoint returns only tools whose names start with this string., page_size: any # How many documents to return at maximum. Can not exceed 100, defaults to 30., show_only_owned_documents: bool=false # If set to true, the endpoint will return only tools owned by you (and not shared from somebody else). 
Deprecated: use created_by_user_id instead., created_by_user_id: any # Filter tools by creator user ID. When set, only tools created by this user are returned. Takes precedence over show_only_owned_documents. Use '@me' to refer to the authenticated user., types: any # If present, the endpoint will return only tools of the given types., sort_direction: str=desc # The direction to sort the results, sort_by: any # The field to sort the results by, cursor: any # Used for fetching next page. Cursor is returned in the response.}\n@returns(200) {tools: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/tools/{tool_id}\n@desc Get Tool\n@required {tool_id: str # ID of the requested tool.}\n@returns(200) {id: str, tool_config: any, access_info: map{is_creator: bool, creator_name: str, creator_email: str, role: str}, usage_stats: map{total_calls: int, avg_latency_secs: num}, response_mocks: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/convai/tools/{tool_id}\n@desc Update Tool\n@required {tool_id: str # ID of the requested tool., tool_config: any # Configuration for the tool}\n@optional {response_mocks: any # Mock responses with optional parameter conditions. 
Evaluated top-to-bottom; first match wins.}\n@returns(200) {id: str, tool_config: any, access_info: map{is_creator: bool, creator_name: str, creator_email: str, role: str}, usage_stats: map{total_calls: int, avg_latency_secs: num}, response_mocks: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/tools/{tool_id}\n@desc Delete Tool\n@required {tool_id: str # ID of the requested tool.}\n@optional {force: bool=false # If set to true, the tool will be deleted regardless of whether it is used by any agents and it will be removed from the dependent agents and branches.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/tools/{tool_id}/dependent-agents\n@desc Get Dependent Agents List\n@required {tool_id: str # ID of the requested tool.}\n@optional {cursor: any # Used for fetching next page. Cursor is returned in the response., page_size: int=30 # How many documents to return at maximum. Can not exceed 100, defaults to 30.}\n@returns(200) {agents: [any], branches: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/settings\n@desc Get Convai Settings\n@returns(200) {conversation_initiation_client_data_webhook: any, webhooks: map{post_call_webhook_id: any, events: [str], send_audio: any}, can_use_mcp_servers: bool, rag_retention_period_days: int, conversation_embedding_retention_days: any, default_livekit_stack: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/convai/settings\n@desc Update Convai Settings\n@optional {conversation_initiation_client_data_webhook: any, webhooks: map{post_call_webhook_id: any, events: [str], send_audio: any}, can_use_mcp_servers: bool=false # Whether the workspace can use MCP servers, rag_retention_period_days: int=10, conversation_embedding_retention_days: any # Days to retain conversation embeddings. 
None means use the system default (30 days)., default_livekit_stack: str(standard/static)=standard}\n@returns(200) {conversation_initiation_client_data_webhook: any, webhooks: map{post_call_webhook_id: any, events: [str], send_audio: any}, can_use_mcp_servers: bool, rag_retention_period_days: int, conversation_embedding_retention_days: any, default_livekit_stack: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/settings/dashboard\n@desc Get Convai Dashboard Settings\n@returns(200) {charts: [any]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/convai/settings/dashboard\n@desc Update Convai Dashboard Settings\n@optional {charts: [any]}\n@returns(200) {charts: [any]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/secrets\n@desc Create Convai Workspace Secret\n@required {type: str, name: str, value: str}\n@returns(200) {type: str, secret_id: str, name: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/secrets\n@desc Get Convai Workspace Secrets\n@optional {page_size: any # How many documents to return at maximum. Can not exceed 100. If not provided, returns all secrets., cursor: any # Used for fetching next page. 
Cursor is returned in the response.}\n@returns(200) {secrets: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/secrets/{secret_id}\n@desc Delete Convai Workspace Secret\n@required {secret_id: str}\n@returns(204) Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/convai/secrets/{secret_id}\n@desc Update Convai Workspace Secret\n@required {secret_id: str, type: str, name: str, value: str}\n@returns(200) {type: str, secret_id: str, name: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/batch-calling/submit\n@desc Submit A Batch Call Request.\n@required {call_name: str, agent_id: str, recipients: [map{id: any, phone_number: any, whatsapp_user_id: any, conversation_initiation_client_data: any}]}\n@optional {scheduled_time_unix: any, agent_phone_number_id: any, whatsapp_params: any, timezone: any, branch_id: any, environment: any, telephony_call_config: map{ringing_timeout_secs: int}, target_concurrency_limit: any # Maximum number of simultaneous calls for this batch. 
When set, dispatch is governed by this limit rather than workspace/agent capacity percentages.}\n@returns(200) {id: str, phone_number_id: any, phone_provider: any, whatsapp_params: any, name: str, agent_id: str, branch_id: any, environment: any, created_at_unix: int, scheduled_time_unix: int, timezone: any, total_calls_dispatched: int, total_calls_scheduled: int, total_calls_finished: int, last_updated_at_unix: int, status: str, retry_count: int, telephony_call_config: map{ringing_timeout_secs: int}, target_concurrency_limit: any, agent_name: str, branch_name: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/batch-calling/workspace\n@desc Get All Batch Calls For A Workspace.\n@optional {limit: int=100, last_doc: any}\n@returns(200) {batch_calls: [map], next_doc: any, has_more: bool} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/batch-calling/{batch_id}\n@desc Get A Batch Call By Id.\n@required {batch_id: str}\n@returns(200) {id: str, phone_number_id: any, phone_provider: any, whatsapp_params: any, name: str, agent_id: str, branch_id: any, environment: any, created_at_unix: int, scheduled_time_unix: int, timezone: any, total_calls_dispatched: int, total_calls_scheduled: int, total_calls_finished: int, last_updated_at_unix: int, status: str, retry_count: int, telephony_call_config: map{ringing_timeout_secs: int}, target_concurrency_limit: any, agent_name: str, branch_name: any, recipients: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/batch-calling/{batch_id}\n@desc Delete A Batch Call.\n@required {batch_id: str}\n@returns(204) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/batch-calling/{batch_id}/cancel\n@desc Cancel A Batch Call.\n@required {batch_id: str}\n@returns(200) {id: str, phone_number_id: any, phone_provider: any, whatsapp_params: any, name: str, agent_id: str, branch_id: any, environment: any, 
created_at_unix: int, scheduled_time_unix: int, timezone: any, total_calls_dispatched: int, total_calls_scheduled: int, total_calls_finished: int, last_updated_at_unix: int, status: str, retry_count: int, telephony_call_config: map{ringing_timeout_secs: int}, target_concurrency_limit: any, agent_name: str, branch_name: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/batch-calling/{batch_id}/retry\n@desc Retry A Batch Call.\n@required {batch_id: str}\n@returns(200) {id: str, phone_number_id: any, phone_provider: any, whatsapp_params: any, name: str, agent_id: str, branch_id: any, environment: any, created_at_unix: int, scheduled_time_unix: int, timezone: any, total_calls_dispatched: int, total_calls_scheduled: int, total_calls_finished: int, last_updated_at_unix: int, status: str, retry_count: int, telephony_call_config: map{ringing_timeout_secs: int}, target_concurrency_limit: any, agent_name: str, branch_name: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/sip-trunk/outbound-call\n@desc Handle An Outbound Call Via Sip Trunk\n@required {agent_id: str, agent_phone_number_id: str, to_number: str}\n@optional {conversation_initiation_client_data: any, telephony_call_config: map{ringing_timeout_secs: int}}\n@returns(200) {success: bool, message: str, conversation_id: any, sip_call_id: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/mcp-servers\n@desc Create Mcp Server\n@required {config: map{approval_policy: str, tool_approval_hashes: [map], transport: str, url!: any, secret_token: any, request_headers: map, auth_connection: any, name!: str, description: str, force_pre_tool_speech: bool, disable_interruptions: bool, tool_call_sound: any, tool_call_sound_behavior: str, execution_mode: str, tool_config_overrides: [map], disable_compression: bool}}\n@returns(200) {id: str, config: map{approval_policy: str, tool_approval_hashes: [map], transport: str, 
url: any, secret_token: any, request_headers: map, auth_connection: any, name: str, description: str, force_pre_tool_speech: bool, disable_interruptions: bool, tool_call_sound: any, tool_call_sound_behavior: str, execution_mode: str, tool_config_overrides: [map], disable_compression: bool}, access_info: any, dependent_agents: [any], metadata: map{created_at: int, owner_user_id: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/mcp-servers\n@desc List Mcp Servers\n@returns(200) {mcp_servers: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/mcp-servers/{mcp_server_id}\n@desc Get Mcp Server\n@required {mcp_server_id: str # ID of the MCP Server.}\n@returns(200) {id: str, config: map{approval_policy: str, tool_approval_hashes: [map], transport: str, url: any, secret_token: any, request_headers: map, auth_connection: any, name: str, description: str, force_pre_tool_speech: bool, disable_interruptions: bool, tool_call_sound: any, tool_call_sound_behavior: str, execution_mode: str, tool_config_overrides: [map], disable_compression: bool}, access_info: any, dependent_agents: [any], metadata: map{created_at: int, owner_user_id: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/mcp-servers/{mcp_server_id}\n@desc Delete Mcp Server\n@required {mcp_server_id: str # ID of the MCP Server.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/convai/mcp-servers/{mcp_server_id}\n@desc Update Mcp Server Configuration\n@required {mcp_server_id: str # ID of the MCP Server.}\n@optional {approval_policy: any # The approval mode to set for the MCP server, force_pre_tool_speech: any # If set, overrides the server's force_pre_tool_speech setting for this tool, disable_interruptions: any # If set, overrides the server's disable_interruptions setting for this tool, tool_call_sound: any # Predefined tool call sound type to play during 
tool execution for all tools from this MCP server, tool_call_sound_behavior: any # Determines when the tool call sound should play for all tools from this MCP server, execution_mode: any # If set, overrides the server's execution_mode setting for this tool, request_headers: any # The headers to include in requests to the MCP server, disable_compression: any # Whether to disable HTTP compression for this MCP server, secret_token: any # Optional secret token for authentication with this MCP server, auth_connection: any # Optional auth connection to use for authentication with this MCP server}\n@returns(200) {id: str, config: map{approval_policy: str, tool_approval_hashes: [map], transport: str, url: any, secret_token: any, request_headers: map, auth_connection: any, name: str, description: str, force_pre_tool_speech: bool, disable_interruptions: bool, tool_call_sound: any, tool_call_sound_behavior: str, execution_mode: str, tool_config_overrides: [map], disable_compression: bool}, access_info: any, dependent_agents: [any], metadata: map{created_at: int, owner_user_id: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/mcp-servers/{mcp_server_id}/tools\n@desc List Mcp Server Tools\n@required {mcp_server_id: str # ID of the MCP Server.}\n@returns(200) {success: bool, tools: [map], error_message: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/convai/mcp-servers/{mcp_server_id}/approval-policy\n@desc Update Mcp Server Approval Policy\n@required {mcp_server_id: str # ID of the MCP Server., approval_policy: str(auto_approve_all/require_approval_all/require_approval_per_tool)=require_approval_all # Defines the MCP server-level approval policy for tool execution.}\n@returns(200) {id: str, config: map{approval_policy: str, tool_approval_hashes: [map], transport: str, url: any, secret_token: any, request_headers: map, auth_connection: any, name: str, description: str, force_pre_tool_speech: bool, 
disable_interruptions: bool, tool_call_sound: any, tool_call_sound_behavior: str, execution_mode: str, tool_config_overrides: [map], disable_compression: bool}, access_info: any, dependent_agents: [any], metadata: map{created_at: int, owner_user_id: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/mcp-servers/{mcp_server_id}/tool-approvals\n@desc Create Mcp Server Tool Approval\n@required {mcp_server_id: str # ID of the MCP Server., tool_name: str # The name of the MCP tool, tool_description: str # The description of the MCP tool}\n@optional {input_schema: map # The input schema of the MCP tool (the schema defined on the MCP server before ElevenLabs does any extra processing), approval_policy: str(auto_approved/requires_approval)=requires_approval # Defines the tool-level approval policy.}\n@returns(200) {id: str, config: map{approval_policy: str, tool_approval_hashes: [map], transport: str, url: any, secret_token: any, request_headers: map, auth_connection: any, name: str, description: str, force_pre_tool_speech: bool, disable_interruptions: bool, tool_call_sound: any, tool_call_sound_behavior: str, execution_mode: str, tool_config_overrides: [map], disable_compression: bool}, access_info: any, dependent_agents: [any], metadata: map{created_at: int, owner_user_id: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/mcp-servers/{mcp_server_id}/tool-approvals/{tool_name}\n@desc Delete Mcp Server Tool Approval\n@required {mcp_server_id: str # ID of the MCP Server., tool_name: str # Name of the MCP tool to remove approval for.}\n@returns(200) {id: str, config: map{approval_policy: str, tool_approval_hashes: [map], transport: str, url: any, secret_token: any, request_headers: map, auth_connection: any, name: str, description: str, force_pre_tool_speech: bool, disable_interruptions: bool, tool_call_sound: any, tool_call_sound_behavior: str, execution_mode: str, tool_config_overrides: [map], 
disable_compression: bool}, access_info: any, dependent_agents: [any], metadata: map{created_at: int, owner_user_id: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/mcp-servers/{mcp_server_id}/tool-configs\n@desc Create Mcp Tool Configuration Override\n@required {mcp_server_id: str # ID of the MCP Server., tool_name: str # The name of the MCP tool}\n@optional {force_pre_tool_speech: any # If set, overrides the server's force_pre_tool_speech setting for this tool, disable_interruptions: any # If set, overrides the server's disable_interruptions setting for this tool, tool_call_sound: any # If set, overrides the server's tool_call_sound setting for this tool, tool_call_sound_behavior: any # If set, overrides the server's tool_call_sound_behavior setting for this tool, execution_mode: any # If set, overrides the server's execution_mode setting for this tool, assignments: any # Dynamic variable assignments for this MCP tool, input_overrides: any # Mapping of json path to input override configuration}\n@returns(200) {id: str, config: map{approval_policy: str, tool_approval_hashes: [map], transport: str, url: any, secret_token: any, request_headers: map, auth_connection: any, name: str, description: str, force_pre_tool_speech: bool, disable_interruptions: bool, tool_call_sound: any, tool_call_sound_behavior: str, execution_mode: str, tool_config_overrides: [map], disable_compression: bool}, access_info: any, dependent_agents: [any], metadata: map{created_at: int, owner_user_id: any}} # Successful Response\n@errors {409: Tool config override already exists, 422: Validation Error}\n\n@endpoint GET /v1/convai/mcp-servers/{mcp_server_id}/tool-configs/{tool_name}\n@desc Get Mcp Tool Configuration Override\n@required {mcp_server_id: str # ID of the MCP Server., tool_name: str # Name of the MCP tool to retrieve config overrides for.}\n@returns(200) {tool_name: str, force_pre_tool_speech: any, disable_interruptions: any, tool_call_sound: 
any, tool_call_sound_behavior: any, execution_mode: any, assignments: [map], input_overrides: any} # Successful Response\n@errors {404: Tool config override not found, 422: Validation Error}\n\n@endpoint PATCH /v1/convai/mcp-servers/{mcp_server_id}/tool-configs/{tool_name}\n@desc Update Mcp Tool Configuration Override\n@required {mcp_server_id: str # ID of the MCP Server., tool_name: str # Name of the MCP tool to update config overrides for.}\n@optional {force_pre_tool_speech: any # If set, overrides the server's force_pre_tool_speech setting for this tool, disable_interruptions: any # If set, overrides the server's disable_interruptions setting for this tool, tool_call_sound: any # If set, overrides the server's tool_call_sound setting for this tool, tool_call_sound_behavior: any # If set, overrides the server's tool_call_sound_behavior setting for this tool, execution_mode: any # If set, overrides the server's execution_mode setting for this tool, assignments: any # Dynamic variable assignments for this MCP tool, input_overrides: any # Mapping of json path to input override configuration}\n@returns(200) {id: str, config: map{approval_policy: str, tool_approval_hashes: [map], transport: str, url: any, secret_token: any, request_headers: map, auth_connection: any, name: str, description: str, force_pre_tool_speech: bool, disable_interruptions: bool, tool_call_sound: any, tool_call_sound_behavior: str, execution_mode: str, tool_config_overrides: [map], disable_compression: bool}, access_info: any, dependent_agents: [any], metadata: map{created_at: int, owner_user_id: any}} # Successful Response\n@errors {404: Tool config override not found, 422: Validation Error}\n\n@endpoint DELETE /v1/convai/mcp-servers/{mcp_server_id}/tool-configs/{tool_name}\n@desc Delete Mcp Tool Configuration Override\n@required {mcp_server_id: str # ID of the MCP Server., tool_name: str # Name of the MCP tool to remove config overrides for.}\n@returns(200) {id: str, config: 
map{approval_policy: str, tool_approval_hashes: [map], transport: str, url: any, secret_token: any, request_headers: map, auth_connection: any, name: str, description: str, force_pre_tool_speech: bool, disable_interruptions: bool, tool_call_sound: any, tool_call_sound_behavior: str, execution_mode: str, tool_config_overrides: [map], disable_compression: bool}, access_info: any, dependent_agents: [any], metadata: map{created_at: int, owner_user_id: any}} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/whatsapp-accounts/{phone_number_id}\n@desc Get Whatsapp Account\n@required {phone_number_id: str}\n@returns(200) {business_account_id: str, phone_number_id: str, business_account_name: str, phone_number_name: str, phone_number: str, assigned_agent_id: any, enable_messaging: bool, enable_audio_message_response: bool, assigned_agent_name: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/convai/whatsapp-accounts/{phone_number_id}\n@desc Update Whatsapp Account\n@required {phone_number_id: str}\n@optional {assigned_agent_id: any, enable_messaging: any, enable_audio_message_response: any}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/whatsapp-accounts/{phone_number_id}\n@desc Delete Whatsapp Account\n@required {phone_number_id: str}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/whatsapp-accounts\n@desc List Whatsapp Accounts\n@returns(200) {items: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agents/{agent_id}/branches\n@desc Create A New Branch\n@required {agent_id: str # The id of an agent. This is returned on agent creation., parent_version_id: str # ID of the version to branch from, name: str # Name of the branch. 
It is unique within the agent., description: str # Description for the branch}\n@optional {conversation_config: any # Changes to apply to conversation config, platform_settings: any # Changes to apply to platform settings, workflow: any # Updated workflow definition}\n@returns(200) {created_branch_id: str, created_version_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/agents/{agent_id}/branches\n@desc List Agent Branches\n@required {agent_id: str # The id of an agent. This is returned on agent creation.}\n@optional {include_archived: bool=false # Whether archived branches should be included, limit: int=100 # How many results at most should be returned}\n@returns(200) {meta: map{total: any, page: any, page_size: any}, results: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/convai/agents/{agent_id}/branches/{branch_id}\n@desc Get Agent Branch\n@required {agent_id: str # The id of an agent. This is returned on agent creation., branch_id: str # Unique identifier for the branch.}\n@returns(200) {id: str, name: str, agent_id: str, description: str, created_at: int, last_committed_at: int, is_archived: bool, protection_status: str, access_info: any, current_live_percentage: num, parent_branch: any, most_recent_versions: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint PATCH /v1/convai/agents/{agent_id}/branches/{branch_id}\n@desc Update Agent Branch\n@required {agent_id: str # The id of an agent. This is returned on agent creation., branch_id: str # Unique identifier for the branch.}\n@optional {name: any # New name for the branch. 
Must be unique within the agent., is_archived: any # Whether the branch should be archived, protection_status: any # The protection level for the branch}\n@returns(200) {id: str, name: str, agent_id: str, description: str, created_at: int, last_committed_at: int, is_archived: bool, protection_status: str, access_info: any, current_live_percentage: num, parent_branch: any, most_recent_versions: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agents/{agent_id}/branches/{source_branch_id}/merge\n@desc Merge A Branch Into A Target Branch\n@required {agent_id: str # The id of an agent. This is returned on agent creation., source_branch_id: str # Unique identifier for the source branch to merge from., target_branch_id: str # The ID of the target branch to merge into (must be the main branch).}\n@optional {archive_source_branch: bool=true # Whether to archive the source branch after merging}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agents/{agent_id}/deployments\n@desc Create Or Update Deployments\n@required {agent_id: str # The id of an agent. This is returned on agent creation., deployment_request: map{requests!: [map]}}\n@returns(200) {traffic_percentage_branch_id_map: map} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/agents/{agent_id}/drafts\n@desc Create Agent Draft\n@required {agent_id: str # The id of an agent. 
This is returned on agent creation., branch_id: str # The ID of the agent branch to use, conversation_config: map # Conversation config for the draft, platform_settings: map # Platform settings for the draft, workflow: map{edges: map, nodes: map, prevent_subagent_loops: bool}, name: str # Name for the draft}\n@optional {tags: any # Tags to help classify and filter the agent}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/convai/agents/{agent_id}/drafts\n@desc Delete Agent Draft\n@required {agent_id: str # The id of an agent. This is returned on agent creation., branch_id: str # The ID of the agent branch to use}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/convai/environment-variables\n@desc Create Environment Variable\n@returns(200) {label: str, created_at_unix_secs: int, updated_at_unix_secs: int, created_by_user_id: any, type: str, id: str, workspace_id: str, values: any} # Successful Response\n@errors {400: Invalid parameters, 409: Environment variable with this label already exists, 422: Validation Error}\n\n@endpoint GET /v1/convai/environment-variables\n@desc List Environment Variables\n@optional {cursor: any # Pagination cursor from previous response, page_size: int=100 # Number of items to return (1-100), label: any # Filter by exact label match, environment: any # Filter to only return variables that have this environment. 
When specified, the values dict in the response will only contain this environment., type: any # Filter by variable type}\n@returns(200) {environment_variables: [map], next_cursor: any, has_more: bool} # Successful Response\n@errors {400: Invalid environment filter, 422: Validation Error}\n\n@endpoint GET /v1/convai/environment-variables/{env_var_id}\n@desc Get Environment Variable\n@required {env_var_id: str}\n@returns(200) {label: str, created_at_unix_secs: int, updated_at_unix_secs: int, created_by_user_id: any, type: str, id: str, workspace_id: str, values: any} # Successful Response\n@errors {404: Environment variable not found, 422: Validation Error}\n\n@endpoint PATCH /v1/convai/environment-variables/{env_var_id}\n@desc Update Environment Variable\n@required {env_var_id: str, values: map # Values to replace. Set to null to remove an environment (except 'production').}\n@returns(200) {label: str, created_at_unix_secs: int, updated_at_unix_secs: int, created_by_user_id: any, type: str, id: str, workspace_id: str, values: any} # Successful Response\n@errors {400: Invalid parameters or type mismatch, 404: Environment variable not found, 422: Validation Error}\n\n@endgroup\n\n@group music\n@endpoint POST /v1/music/plan\n@desc Generate Composition Plan\n@required {prompt: str # A simple text prompt to compose a plan from.}\n@optional {music_length_ms: any # The length of the composition plan to generate in milliseconds. Must be between 3000ms and 600000ms. 
Optional - if not provided, the model will choose a length based on the prompt., source_composition_plan: any # An optional composition plan to use as a source for the new composition plan., model_id: str=music_v1 # The model to use for the generation.}\n@returns(200) {positive_global_styles: [str], negative_global_styles: [str], sections: [map]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/music\n@desc Compose Music\n@optional {output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., prompt: any # A simple text prompt to generate a song from. Cannot be used in conjunction with `composition_plan`., music_prompt: any # A music prompt. Deprecated. Use `composition_plan` instead., composition_plan: any # A detailed composition plan to guide music generation. Cannot be used in conjunction with `prompt`., music_length_ms: any # The length of the song to generate in milliseconds. Used only in conjunction with `prompt`. Must be between 3000ms and 600000ms. Optional - if not provided, the model will choose a length based on the prompt., model_id: str=music_v1 # The model to use for the generation., seed: any # Random seed to initialize the music generation process. 
Providing the same seed with the same parameters can help achieve more consistent results, but exact reproducibility is not guaranteed and outputs may change across system updates. Cannot be used in conjunction with prompt., force_instrumental: bool=false # If true, guarantees that the generated song will be instrumental. If false, the song may or may not be instrumental depending on the `prompt`. Can only be used with `prompt`., finetune_id: any # The ID of the finetune to use for the generation, use_phonetic_names: bool=false # If true, proper names in the prompt will be phonetically spelled in the lyrics for better pronunciation by the music model. The original names will be restored in word timestamps., respect_sections_durations: bool=true # Controls how strictly section durations in the `composition_plan` are enforced. Only used with `composition_plan`. When set to true, the model will precisely respect each section's `duration_ms` from the plan. When set to false, the model may adjust individual section durations which will generally lead to better generation quality and improved latency, while always preserving the total song duration from the plan., store_for_inpainting: bool=false # Whether to store the generated song for inpainting. Only available to enterprise clients with access to the inpainting feature., sign_with_c2pa: bool=false # Whether to sign the generated song with C2PA. Applicable only for mp3 files.}\n@returns(200) The generated audio file in the format specified\n@errors {422: Validation Error}\n\n@endpoint POST /v1/music/detailed\n@desc Compose Music With A Detailed Response\n@optional {output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. 
Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., prompt: any # A simple text prompt to generate a song from. Cannot be used in conjunction with `composition_plan`., music_prompt: any # A music prompt. Deprecated. Use `composition_plan` instead., composition_plan: any # A detailed composition plan to guide music generation. Cannot be used in conjunction with `prompt`., music_length_ms: any # The length of the song to generate in milliseconds. Used only in conjunction with `prompt`. Must be between 3000ms and 600000ms. Optional - if not provided, the model will choose a length based on the prompt., model_id: str=music_v1 # The model to use for the generation., seed: any # Random seed to initialize the music generation process. Providing the same seed with the same parameters can help achieve more consistent results, but exact reproducibility is not guaranteed and outputs may change across system updates. Cannot be used in conjunction with `prompt`., force_instrumental: bool=false # If true, guarantees that the generated song will be instrumental. If false, the song may or may not be instrumental depending on the `prompt`. Can only be used with `prompt`., finetune_id: any # The ID of the finetune to use for the generation., use_phonetic_names: bool=false # If true, proper names in the prompt will be phonetically spelled in the lyrics for better pronunciation by the music model. The original names will be restored in word timestamps., respect_sections_durations: bool=true # Controls how strictly section durations in the `composition_plan` are enforced. Only used with `composition_plan`. 
When set to true, the model will precisely respect each section's `duration_ms` from the plan. When set to false, the model may adjust individual section durations, which will generally lead to better generation quality and improved latency, while always preserving the total song duration from the plan., store_for_inpainting: bool=false # Whether to store the generated song for inpainting. Only available to enterprise clients with access to the inpainting feature., with_timestamps: bool=false # Whether to return the timestamps of the words in the generated song., sign_with_c2pa: bool=false # Whether to sign the generated song with C2PA. Applicable only for mp3 files.}\n@returns(200) Multipart/mixed response with JSON metadata and binary audio file\n@errors {422: Validation Error}\n\n@endpoint POST /v1/music/stream\n@desc Stream Composed Music\n@optional {output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs., prompt: any # A simple text prompt to generate a song from. Cannot be used in conjunction with `composition_plan`., music_prompt: any # A music prompt. Deprecated. Use `composition_plan` instead., composition_plan: any # A detailed composition plan to guide music generation. Cannot be used in conjunction with `prompt`., music_length_ms: any # The length of the song to generate in milliseconds. 
Used only in conjunction with `prompt`. Must be between 3000ms and 600000ms. Optional - if not provided, the model will choose a length based on the prompt., model_id: str=music_v1 # The model to use for the generation., seed: any # Random seed to initialize the music generation process. Providing the same seed with the same parameters can help achieve more consistent results, but exact reproducibility is not guaranteed and outputs may change across system updates. Cannot be used in conjunction with `prompt`., force_instrumental: bool=false # If true, guarantees that the generated song will be instrumental. If false, the song may or may not be instrumental depending on the `prompt`. Can only be used with `prompt`., finetune_id: any # The ID of the finetune to use for the generation., use_phonetic_names: bool=false # If true, proper names in the prompt will be phonetically spelled in the lyrics for better pronunciation by the music model. The original names will be restored in word timestamps., store_for_inpainting: bool=false # Whether to store the generated song for inpainting. Only available to enterprise clients with access to the inpainting feature.}\n@returns(200) Streaming audio data in the format specified\n@errors {422: Validation Error}\n\n@endpoint POST /v1/music/upload\n@desc Upload Music\n@returns(200) {song_id: str, composition_plan: any} # Successfully uploaded music file with optional composition plan\n@errors {422: Validation Error}\n\n@endpoint POST /v1/music/stem-separation\n@desc Stem Separation\n@optional {output_format: str(mp3_22050_32/mp3_24000_48/mp3_44100_32/mp3_44100_64/mp3_44100_96/mp3_44100_128/mp3_44100_192/pcm_8000/pcm_16000/pcm_22050/pcm_24000/pcm_32000/pcm_44100/pcm_48000/ulaw_8000/alaw_8000/opus_48000_32/opus_48000_64/opus_48000_96/opus_48000_128/opus_48000_192)=mp3_44100_128 # Output format of the generated audio. Formatted as codec_sample_rate_bitrate. So an mp3 with 22.05kHz sample rate at 32kbps is represented as mp3_22050_32. 
MP3 with 192kbps bitrate requires you to be subscribed to Creator tier or above. PCM with 44.1kHz sample rate requires you to be subscribed to Pro tier or above. Note that the μ-law format (sometimes written mu-law, often approximated as u-law) is commonly used for Twilio audio inputs.}\n@returns(200) ZIP archive containing separated audio stems. Each stem is provided as a separate audio file in the requested output format.\n@errors {422: Validation Error}\n\n@endgroup\n\n@group voices\n@endpoint POST /v1/voices/pvc\n@desc Create Pvc Voice\n@required {name: str # The name that identifies this voice. This will be displayed in the dropdown of the website., language: str # Language used in the samples.}\n@optional {description: any # Description to use for the created voice., labels: any # Labels for the voice. Keys can be language, accent, gender, or age.}\n@returns(200) {voice_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/voices/pvc/{voice_id}\n@desc Edit Pvc Voice\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@optional {name: str # The name that identifies this voice. This will be displayed in the dropdown of the website., language: str # Language used in the samples., description: any # Description to use for the created voice., labels: any # Labels for the voice. 
Keys can be language, accent, gender, or age.}\n@returns(200) {voice_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/voices/pvc/{voice_id}/samples\n@desc Add Samples To Pvc Voice\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/voices/pvc/{voice_id}/samples/{sample_id}\n@desc Update Pvc Voice Sample\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices., sample_id: str # Sample ID to be used}\n@optional {remove_background_noise: bool=false # If set, will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse., selected_speaker_ids: any # Speaker IDs to be used for PVC training. Make sure you send all the speaker IDs you want to use for PVC training in one request, because the last request will override the previous ones., trim_start_time: any # The start time of the audio to be used for PVC training. Time should be in milliseconds., trim_end_time: any # The end time of the audio to be used for PVC training. 
Time should be in milliseconds., file_name: any # The name of the audio file to be used for PVC training.}\n@returns(200) {voice_id: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint DELETE /v1/voices/pvc/{voice_id}/samples/{sample_id}\n@desc Delete Pvc Voice Sample\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices., sample_id: str # Sample ID to be used}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/voices/pvc/{voice_id}/samples/{sample_id}/audio\n@desc Retrieve Voice Sample Audio\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices., sample_id: str # Sample ID to be used}\n@optional {remove_background_noise: bool=false # If set, will remove background noise from voice samples using our audio isolation model. If the samples do not include background noise, it can make the quality worse.}\n@returns(200) {audio_base_64: str, voice_id: str, sample_id: str, media_type: str, duration_secs: any} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/voices/pvc/{voice_id}/samples/{sample_id}/waveform\n@desc Retrieve Voice Sample Visual Waveform\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices., sample_id: str # Sample ID to be used}\n@returns(200) {sample_id: str, visual_waveform: [num]} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/voices/pvc/{voice_id}/samples/{sample_id}/speakers\n@desc Retrieve Speaker Separation Status\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices., sample_id: str # Sample ID to be used}\n@returns(200) {voice_id: str, sample_id: str, status: str, speakers: any, selected_speaker_ids: any} # Successful 
Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/voices/pvc/{voice_id}/samples/{sample_id}/separate-speakers\n@desc Start Speaker Separation\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices., sample_id: str # Sample ID to be used}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/voices/pvc/{voice_id}/samples/{sample_id}/speakers/{speaker_id}/audio\n@desc Retrieve Separated Speaker Audio\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices., sample_id: str # Sample ID to be used, speaker_id: str # Speaker ID to be used; you can use GET https://api.elevenlabs.io/v1/voices/{voice_id}/samples/{sample_id}/speakers to list all the available speakers for a sample.}\n@returns(200) {audio_base_64: str, media_type: str, duration_secs: num} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint GET /v1/voices/pvc/{voice_id}/captcha\n@desc Get Pvc Voice Captcha\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@returns(200) Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/voices/pvc/{voice_id}/captcha\n@desc Verify Pvc Voice Captcha\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST /v1/voices/pvc/{voice_id}/train\n@desc Run Pvc Training\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@optional {model_id: any # The model ID to use for the conversion.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endpoint POST 
/v1/voices/pvc/{voice_id}/verification\n@desc Request Manual Verification\n@required {voice_id: str # Voice ID to be used; you can use https://api.elevenlabs.io/v1/voices to list all the available voices.}\n@returns(200) {status: str} # Successful Response\n@errors {422: Validation Error}\n\n@endgroup\n\n@group docs\n@endpoint GET /docs\n@desc Redirect To Mintlify\n@returns(200) Successful Response\n\n@endgroup\n\n@end\n"}}
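
The music endpoints in the spec above share a set of mutually exclusive parameters (`prompt` vs `composition_plan`, with `music_length_ms` and `force_instrumental` valid only alongside `prompt`). As a minimal sketch — not an official SDK, and using a hypothetical helper name — a request body for POST /v1/music can be validated client-side against those documented constraints before sending:

```python
# Illustrative request-body builder for POST /v1/music, based solely on
# the parameter descriptions in the spec above. `build_music_request`
# is a hypothetical helper, not part of any ElevenLabs library.

def build_music_request(
    prompt=None,
    composition_plan=None,
    music_length_ms=None,
    force_instrumental=False,
    model_id="music_v1",
):
    """Return a JSON-serializable body for POST /v1/music.

    Enforces the documented rules:
    - `prompt` and `composition_plan` are mutually exclusive.
    - `music_length_ms` is used only with `prompt` (3000-600000 ms).
    - `force_instrumental` can only be used with `prompt`.
    """
    if (prompt is None) == (composition_plan is None):
        raise ValueError("provide exactly one of prompt or composition_plan")
    body = {"model_id": model_id}
    if prompt is not None:
        body["prompt"] = prompt
        if music_length_ms is not None:
            if not 3000 <= music_length_ms <= 600000:
                raise ValueError("music_length_ms must be 3000-600000 ms")
            body["music_length_ms"] = music_length_ms
        if force_instrumental:
            body["force_instrumental"] = True
    else:
        if music_length_ms is not None or force_instrumental:
            raise ValueError(
                "music_length_ms and force_instrumental require prompt"
            )
        body["composition_plan"] = composition_plan
    return body
```

The resulting body would be sent as JSON with the `xi-api-key` header (the base URL is not stated in this skill; the parameter descriptions elsewhere in the spec reference https://api.elevenlabs.io). The spec also does not say whether `output_format` travels as a query parameter or a body field, so check the upstream API reference before relying on either.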