{"files":{"SKILL.md":"---\nname: langfuse\ndescription: \"langfuse API skill. Use when working with the langfuse API. Covers 87 endpoints.\"\nversion: 1.0.0\ngenerator: lapsh\n---\n\n# langfuse\n\n## Auth\nBearer basic\n\n## Base URL\nNot specified.\n\n## Setup\n1. Set the Authorization header with your API credentials\n2. GET /api/public/annotation-queues -- get all annotation queues\n3. POST /api/public/annotation-queues -- create your first annotation queue\n\n## Endpoints\n87 endpoints across 1 group. See references/api-spec.lap for full details.\n\n### Api\n| Method | Path | Description |\n|--------|------|-------------|\n| GET | /api/public/annotation-queues | Get all annotation queues |\n| POST | /api/public/annotation-queues | Create an annotation queue |\n| GET | /api/public/annotation-queues/{queueId} | Get an annotation queue by ID |\n| GET | /api/public/annotation-queues/{queueId}/items | Get items for a specific annotation queue |\n| POST | /api/public/annotation-queues/{queueId}/items | Add an item to an annotation queue |\n| GET | /api/public/annotation-queues/{queueId}/items/{itemId} | Get a specific item from an annotation queue |\n| PATCH | /api/public/annotation-queues/{queueId}/items/{itemId} | Update an annotation queue item |\n| DELETE | /api/public/annotation-queues/{queueId}/items/{itemId} | Remove an item from an annotation queue |\n| POST | /api/public/annotation-queues/{queueId}/assignments | Create an assignment for a user to an annotation queue |\n| DELETE | /api/public/annotation-queues/{queueId}/assignments | Delete an assignment for a user to an annotation queue |\n| GET | /api/public/integrations/blob-storage | Get all blob storage integrations for the organization (requires organization-scoped API key) |\n| PUT | /api/public/integrations/blob-storage | Create or update a blob storage integration for a specific project (requires organization-scoped API key). The configuration is validated by performing a test upload to the bucket. 
|\n| GET | /api/public/integrations/blob-storage/{id} | Get the sync status of a blob storage integration by integration ID (requires organization-scoped API key) |\n| DELETE | /api/public/integrations/blob-storage/{id} | Delete a blob storage integration by ID (requires organization-scoped API key) |\n| POST | /api/public/comments | Create a comment. Comments may be attached to different object types (trace, observation, session, prompt). |\n| GET | /api/public/comments | Get all comments |\n| GET | /api/public/comments/{commentId} | Get a comment by id |\n| POST | /api/public/dataset-items | Create a dataset item |\n| GET | /api/public/dataset-items | Get dataset items. Optionally specify a version to get the items as they existed at that point in time. |\n| GET | /api/public/dataset-items/{id} | Get a dataset item |\n| DELETE | /api/public/dataset-items/{id} | Delete a dataset item and all its run items. This action is irreversible. |\n| POST | /api/public/dataset-run-items | Create a dataset run item |\n| GET | /api/public/dataset-run-items | List dataset run items |\n| GET | /api/public/v2/datasets | Get all datasets |\n| POST | /api/public/v2/datasets | Create a dataset |\n| GET | /api/public/v2/datasets/{datasetName} | Get a dataset |\n| GET | /api/public/datasets/{datasetName}/runs/{runName} | Get a dataset run and its items |\n| DELETE | /api/public/datasets/{datasetName}/runs/{runName} | Delete a dataset run and all its run items. This action is irreversible. |\n| GET | /api/public/datasets/{datasetName}/runs | Get dataset runs |\n| GET | /api/public/health | Check health of API and database |\n| POST | /api/public/ingestion | **Legacy endpoint for batch ingestion for Langfuse Observability.** |\n| GET | /api/public/metrics | Get metrics from the Langfuse project using a query object. |\n| GET | /api/public/observations/{observationId} | Get an observation |\n| GET | /api/public/observations | Get a list of observations. 
|\n| POST | /api/public/scores | Create a score (supports both trace and session scores) |\n| DELETE | /api/public/scores/{scoreId} | Delete a score (supports both trace and session scores) |\n| GET | /api/public/llm-connections | Get all LLM connections in a project |\n| PUT | /api/public/llm-connections | Create or update an LLM connection. The connection is upserted on provider. |\n| GET | /api/public/media/{mediaId} | Get a media record |\n| PATCH | /api/public/media/{mediaId} | Patch a media record |\n| POST | /api/public/media | Get a presigned upload URL for a media record |\n| GET | /api/public/v2/metrics | Get metrics from the Langfuse project using a query object. V2 endpoint with optimized performance. |\n| POST | /api/public/models | Create a model |\n| GET | /api/public/models | Get all models |\n| GET | /api/public/models/{id} | Get a model |\n| DELETE | /api/public/models/{id} | Delete a model. Cannot delete models managed by Langfuse. You can create your own definition with the same modelName to override the definition though. |\n| GET | /api/public/v2/observations | Get a list of observations with cursor-based pagination and flexible field selection. 
|\n| POST | /api/public/otel/v1/traces | **OpenTelemetry Traces Ingestion Endpoint** |\n| GET | /api/public/organizations/memberships | Get all memberships for the organization associated with the API key (requires organization-scoped API key) |\n| PUT | /api/public/organizations/memberships | Create or update a membership for the organization associated with the API key (requires organization-scoped API key) |\n| DELETE | /api/public/organizations/memberships | Delete a membership from the organization associated with the API key (requires organization-scoped API key) |\n| GET | /api/public/projects/{projectId}/memberships | Get all memberships for a specific project (requires organization-scoped API key) |\n| PUT | /api/public/projects/{projectId}/memberships | Create or update a membership for a specific project (requires organization-scoped API key). The user must already be a member of the organization. |\n| DELETE | /api/public/projects/{projectId}/memberships | Delete a membership from a specific project (requires organization-scoped API key). The user must be a member of the organization. |\n| GET | /api/public/organizations/projects | Get all projects for the organization associated with the API key (requires organization-scoped API key) |\n| GET | /api/public/organizations/apiKeys | Get all API keys for the organization associated with the API key (requires organization-scoped API key) |\n| GET | /api/public/projects | Get Project associated with API key (requires project-scoped API key). You can use GET /api/public/organizations/projects to get all projects with an organization-scoped key. |\n| POST | /api/public/projects | Create a new project (requires organization-scoped API key) |\n| PUT | /api/public/projects/{projectId} | Update a project by ID (requires organization-scoped API key). |\n| DELETE | /api/public/projects/{projectId} | Delete a project by ID (requires organization-scoped API key). Project deletion is processed asynchronously. 
|\n| GET | /api/public/projects/{projectId}/apiKeys | Get all API keys for a project (requires organization-scoped API key) |\n| POST | /api/public/projects/{projectId}/apiKeys | Create a new API key for a project (requires organization-scoped API key) |\n| DELETE | /api/public/projects/{projectId}/apiKeys/{apiKeyId} | Delete an API key for a project (requires organization-scoped API key) |\n| PATCH | /api/public/v2/prompts/{name}/versions/{version} | Update labels for a specific prompt version |\n| GET | /api/public/v2/prompts/{promptName} | Get a prompt |\n| DELETE | /api/public/v2/prompts/{promptName} | Delete prompt versions. If neither version nor label is specified, all versions of the prompt are deleted. |\n| GET | /api/public/v2/prompts | Get a list of prompt names with versions and labels |\n| POST | /api/public/v2/prompts | Create a new version for the prompt with the given `name` |\n| GET | /api/public/scim/ServiceProviderConfig | Get SCIM Service Provider Configuration (requires organization-scoped API key) |\n| GET | /api/public/scim/ResourceTypes | Get SCIM Resource Types (requires organization-scoped API key) |\n| GET | /api/public/scim/Schemas | Get SCIM Schemas (requires organization-scoped API key) |\n| GET | /api/public/scim/Users | List users in the organization (requires organization-scoped API key) |\n| POST | /api/public/scim/Users | Create a new user in the organization (requires organization-scoped API key) |\n| GET | /api/public/scim/Users/{userId} | Get a specific user by ID (requires organization-scoped API key) |\n| DELETE | /api/public/scim/Users/{userId} | Remove a user from the organization (requires organization-scoped API key). Note that this only removes the user from the organization but does not delete the user entity itself. |\n| POST | /api/public/score-configs | Create a score configuration (config). 
Score configs are used to define the structure of scores |\n| GET | /api/public/score-configs | Get all score configs |\n| GET | /api/public/score-configs/{configId} | Get a score config |\n| PATCH | /api/public/score-configs/{configId} | Update a score config |\n| GET | /api/public/v2/scores | Get a list of scores (supports both trace and session scores) |\n| GET | /api/public/v2/scores/{scoreId} | Get a score (supports both trace and session scores) |\n| GET | /api/public/sessions | Get sessions |\n| GET | /api/public/sessions/{sessionId} | Get a session. Please note that `traces` on this endpoint are not paginated, if you plan to fetch large sessions, consider `GET /api/public/traces?sessionId=` |\n| GET | /api/public/traces/{traceId} | Get a specific trace |\n| DELETE | /api/public/traces/{traceId} | Delete a specific trace |\n| GET | /api/public/traces | Get list of traces |\n| DELETE | /api/public/traces | Delete multiple traces |\n\n## Common Questions\nMatch user requests to endpoints in references/api-spec.lap. 
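A minimal authenticated call (step 2 of Setup) can be sketched in Python. This is a sketch only: the host is an assumption (the Base URL above is listed as not specified), the keys are hypothetical placeholders, and the 'Bearer basic' auth field is read here as HTTP Basic credentials carried in the Authorization header:

```python
import base64
import urllib.request

# Hypothetical placeholder credentials -- substitute your own key pair.
public_key = 'pk-lf-...'
secret_key = 'sk-lf-...'
# Assumed host; the Base URL above is listed as not specified.
base_url = 'https://cloud.langfuse.com'

# Encode the key pair as HTTP Basic credentials: base64('public:secret').
token = base64.b64encode((public_key + ':' + secret_key).encode()).decode()
req = urllib.request.Request(
    base_url + '/api/public/annotation-queues?page=1&limit=10',
    headers={'Authorization': 'Basic ' + token},
)
# urllib.request.urlopen(req) would send the GET; it is left out so this
# sketch has no side effects.
```

The same header works for every endpoint in the table above; only the method and path change.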
Key patterns:\n- \"List all annotation-queues?\" -> GET /api/public/annotation-queues\n- \"Create an annotation-queue?\" -> POST /api/public/annotation-queues\n- \"Get annotation-queue details?\" -> GET /api/public/annotation-queues/{queueId}\n- \"List all items?\" -> GET /api/public/annotation-queues/{queueId}/items\n- \"Create an item?\" -> POST /api/public/annotation-queues/{queueId}/items\n- \"Get item details?\" -> GET /api/public/annotation-queues/{queueId}/items/{itemId}\n- \"Partially update an item?\" -> PATCH /api/public/annotation-queues/{queueId}/items/{itemId}\n- \"Delete an item?\" -> DELETE /api/public/annotation-queues/{queueId}/items/{itemId}\n- \"Create an assignment?\" -> POST /api/public/annotation-queues/{queueId}/assignments\n- \"List all blob-storage?\" -> GET /api/public/integrations/blob-storage\n- \"Get blob-storage details?\" -> GET /api/public/integrations/blob-storage/{id}\n- \"Delete a blob-storage?\" -> DELETE /api/public/integrations/blob-storage/{id}\n- \"Create a comment?\" -> POST /api/public/comments\n- \"List all comments?\" -> GET /api/public/comments\n- \"Get comment details?\" -> GET /api/public/comments/{commentId}\n- \"Create a dataset-item?\" -> POST /api/public/dataset-items\n- \"List all dataset-items?\" -> GET /api/public/dataset-items\n- \"Get dataset-item details?\" -> GET /api/public/dataset-items/{id}\n- \"Delete a dataset-item?\" -> DELETE /api/public/dataset-items/{id}\n- \"Create a dataset-run-item?\" -> POST /api/public/dataset-run-items\n- \"List all dataset-run-items?\" -> GET /api/public/dataset-run-items\n- \"List all datasets?\" -> GET /api/public/v2/datasets\n- \"Create a dataset?\" -> POST /api/public/v2/datasets\n- \"Get dataset details?\" -> GET /api/public/v2/datasets/{datasetName}\n- \"Get run details?\" -> GET /api/public/datasets/{datasetName}/runs/{runName}\n- \"Delete a run?\" -> DELETE /api/public/datasets/{datasetName}/runs/{runName}\n- \"List all runs?\" -> GET 
/api/public/datasets/{datasetName}/runs\n- \"Check health?\" -> GET /api/public/health\n- \"Create an ingestion?\" -> POST /api/public/ingestion\n- \"Search metrics?\" -> GET /api/public/metrics\n- \"Get observation details?\" -> GET /api/public/observations/{observationId}\n- \"List all observations?\" -> GET /api/public/observations\n- \"Create a score?\" -> POST /api/public/scores\n- \"Delete a score?\" -> DELETE /api/public/scores/{scoreId}\n- \"List all llm-connections?\" -> GET /api/public/llm-connections\n- \"Get media details?\" -> GET /api/public/media/{mediaId}\n- \"Partially update a media record?\" -> PATCH /api/public/media/{mediaId}\n- \"Create a media record?\" -> POST /api/public/media\n- \"Create a model?\" -> POST /api/public/models\n- \"List all models?\" -> GET /api/public/models\n- \"Get model details?\" -> GET /api/public/models/{id}\n- \"Delete a model?\" -> DELETE /api/public/models/{id}\n- \"Create a trace?\" -> POST /api/public/otel/v1/traces\n- \"List all memberships?\" -> GET /api/public/organizations/memberships\n- \"List all projects?\" -> GET /api/public/organizations/projects\n- \"List all apiKeys?\" -> GET /api/public/organizations/apiKeys\n- \"Create a project?\" -> POST /api/public/projects\n- \"Update a project?\" -> PUT /api/public/projects/{projectId}\n- \"Delete a project?\" -> DELETE /api/public/projects/{projectId}\n- \"Create an apiKey?\" -> POST /api/public/projects/{projectId}/apiKeys\n- \"Delete an apiKey?\" -> DELETE /api/public/projects/{projectId}/apiKeys/{apiKeyId}\n- \"Partially update a version?\" -> PATCH /api/public/v2/prompts/{name}/versions/{version}\n- \"Get prompt details?\" -> GET /api/public/v2/prompts/{promptName}\n- \"Delete a prompt?\" -> DELETE /api/public/v2/prompts/{promptName}\n- \"List all prompts?\" -> GET /api/public/v2/prompts\n- \"Create a prompt?\" -> POST /api/public/v2/prompts\n- \"List all ServiceProviderConfig?\" -> GET /api/public/scim/ServiceProviderConfig\n- \"List all ResourceTypes?\" -> GET 
/api/public/scim/ResourceTypes\n- \"List all Schemas?\" -> GET /api/public/scim/Schemas\n- \"List all Users?\" -> GET /api/public/scim/Users\n- \"Create a User?\" -> POST /api/public/scim/Users\n- \"Get User details?\" -> GET /api/public/scim/Users/{userId}\n- \"Delete a User?\" -> DELETE /api/public/scim/Users/{userId}\n- \"Create a score-config?\" -> POST /api/public/score-configs\n- \"List all score-configs?\" -> GET /api/public/score-configs\n- \"Get score-config details?\" -> GET /api/public/score-configs/{configId}\n- \"Partially update a score-config?\" -> PATCH /api/public/score-configs/{configId}\n- \"List all scores?\" -> GET /api/public/v2/scores\n- \"Get score details?\" -> GET /api/public/v2/scores/{scoreId}\n- \"List all sessions?\" -> GET /api/public/sessions\n- \"Get session details?\" -> GET /api/public/sessions/{sessionId}\n- \"Get trace details?\" -> GET /api/public/traces/{traceId}\n- \"Delete a trace?\" -> DELETE /api/public/traces/{traceId}\n- \"List all traces?\" -> GET /api/public/traces\n- \"How to authenticate?\" -> See Auth section above\n\n## Response Tips\n- Check response schemas in references/api-spec.lap for field details\n- Paginated endpoints accept limit/offset or cursor parameters\n- Create/update endpoints return the modified resource on success\n- Error responses include status codes and descriptions in the spec\n\n## References\n- Full spec: See references/api-spec.lap for complete endpoint details, parameter tables, and response schemas\n\n> Generated from the official API spec by [LAP](https://lap.sh)\n","references/api-spec.lap":"@lap v0.3\n# Machine-readable API spec. 
Each @endpoint block is one API call.\n@api langfuse\n@auth Bearer basic\n@endpoints 87\n@hint download_for_search\n@toc api(87)\n\n@endpoint GET /api/public/annotation-queues\n@desc Get all annotation queues\n@optional {page: int # page number, starts at 1, limit: int # limit of items per page}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/annotation-queues\n@desc Create an annotation queue\n@required {name: str, scoreConfigIds: [str]}\n@optional {description: str}\n@returns(200) {id: str, name: str, description: str?, scoreConfigIds: [str], createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/annotation-queues/{queueId}\n@desc Get an annotation queue by ID\n@required {queueId: str # The unique identifier of the annotation queue}\n@returns(200) {id: str, name: str, description: str?, scoreConfigIds: [str], createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/annotation-queues/{queueId}/items\n@desc Get items for a specific annotation queue\n@required {queueId: str # The unique identifier of the annotation queue}\n@optional {status: str # Filter by status, page: int # page number, starts at 1, limit: int # limit of items per page}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/annotation-queues/{queueId}/items\n@desc Add an item to an annotation queue\n@required {queueId: str # The unique identifier of the annotation queue, objectId: str, objectType: str(TRACE/OBSERVATION/SESSION)}\n@optional {status: str(PENDING/COMPLETED)}\n@returns(200) {id: str, queueId: str, objectId: str, objectType: str, status: str, completedAt: str(date-time)?, createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 
403, 404, 405}\n\n@endpoint GET /api/public/annotation-queues/{queueId}/items/{itemId}\n@desc Get a specific item from an annotation queue\n@required {queueId: str # The unique identifier of the annotation queue, itemId: str # The unique identifier of the annotation queue item}\n@returns(200) {id: str, queueId: str, objectId: str, objectType: str, status: str, completedAt: str(date-time)?, createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint PATCH /api/public/annotation-queues/{queueId}/items/{itemId}\n@desc Update an annotation queue item\n@required {queueId: str # The unique identifier of the annotation queue, itemId: str # The unique identifier of the annotation queue item}\n@optional {status: str(PENDING/COMPLETED)}\n@returns(200) {id: str, queueId: str, objectId: str, objectType: str, status: str, completedAt: str(date-time)?, createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/annotation-queues/{queueId}/items/{itemId}\n@desc Remove an item from an annotation queue\n@required {queueId: str # The unique identifier of the annotation queue, itemId: str # The unique identifier of the annotation queue item}\n@returns(200) {success: bool, message: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/annotation-queues/{queueId}/assignments\n@desc Create an assignment for a user to an annotation queue\n@required {queueId: str # The unique identifier of the annotation queue, userId: str}\n@returns(200) {userId: str, queueId: str, projectId: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/annotation-queues/{queueId}/assignments\n@desc Delete an assignment for a user to an annotation queue\n@required {queueId: str # The unique identifier of the annotation queue, userId: str}\n@returns(200) {success: bool}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/integrations/blob-storage\n@desc Get all blob 
storage integrations for the organization (requires organization-scoped API key)\n@returns(200) {data: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint PUT /api/public/integrations/blob-storage\n@desc Create or update a blob storage integration for a specific project (requires organization-scoped API key). The configuration is validated by performing a test upload to the bucket.\n@required {projectId: str # ID of the project in which to configure the blob storage integration, type: str(S3/S3_COMPATIBLE/AZURE_BLOB_STORAGE), bucketName: str # Name of the storage bucket, region: str # Storage region, exportFrequency: str(hourly/daily/weekly), enabled: bool # Whether the integration is active, forcePathStyle: bool # Use path-style URLs for S3 requests, fileType: str(JSON/CSV/JSONL), exportMode: str(FULL_HISTORY/FROM_TODAY/FROM_CUSTOM_DATE)}\n@optional {endpoint: str # Custom endpoint URL (required for S3_COMPATIBLE type), accessKeyId: str # Access key ID for authentication, secretAccessKey: str # Secret access key for authentication (will be encrypted when stored), prefix: str # Path prefix for exported files (must end with forward slash if provided), exportStartDate: str(date-time) # Custom start date for exports (required when exportMode is FROM_CUSTOM_DATE)}\n@returns(200) {id: str, projectId: str, type: str, bucketName: str, endpoint: str?, region: str, accessKeyId: str?, prefix: str, exportFrequency: str, enabled: bool, forcePathStyle: bool, fileType: str, exportMode: str, exportStartDate: str(date-time)?, nextSyncAt: str(date-time)?, lastSyncAt: str(date-time)?, lastError: str?, lastErrorAt: str(date-time)?, createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/integrations/blob-storage/{id}\n@desc Get the sync status of a blob storage integration by integration ID (requires organization-scoped API key)\n@required {id: str}\n@returns(200) {id: str, projectId: str, syncStatus: str, enabled: 
bool, lastSyncAt: str(date-time)?, nextSyncAt: str(date-time)?, lastError: str?, lastErrorAt: str(date-time)?}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/integrations/blob-storage/{id}\n@desc Delete a blob storage integration by ID (requires organization-scoped API key)\n@required {id: str}\n@returns(200) {message: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/comments\n@desc Create a comment. Comments may be attached to different object types (trace, observation, session, prompt).\n@required {projectId: str # The id of the project to attach the comment to., objectType: str # The type of the object to attach the comment to (trace, observation, session, prompt)., objectId: str # The id of the object to attach the comment to. If this does not reference a valid existing object, an error will be thrown., content: str # The content of the comment. May include markdown. Currently limited to 5000 characters.}\n@optional {authorUserId: str # The id of the user who created the comment.}\n@returns(200) {id: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/comments\n@desc Get all comments\n@optional {page: int # Page number, starts at 1., limit: int # Limit of items per page. If you encounter api issues due to too large page sizes, try to reduce the limit, objectType: str # Filter comments by object type (trace, observation, session, prompt)., objectId: str # Filter comments by object id. 
If objectType is not provided, an error will be thrown., authorUserId: str # Filter comments by author user id.}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/comments/{commentId}\n@desc Get a comment by id\n@required {commentId: str # The unique langfuse identifier of a comment}\n@returns(200) {id: str, projectId: str, createdAt: str(date-time), updatedAt: str(date-time), objectType: str, objectId: str, content: str, authorUserId: str?}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/dataset-items\n@desc Create a dataset item\n@required {datasetName: str}\n@optional {input: any, expectedOutput: any, metadata: any, sourceTraceId: str, sourceObservationId: str, id: str # Dataset items are upserted on their id. Id needs to be unique (project-level) and cannot be reused across datasets., status: str(ACTIVE/ARCHIVED)}\n@returns(200) {id: str, status: str, input: any, expectedOutput: any, metadata: any, sourceTraceId: str?, sourceObservationId: str?, datasetId: str, datasetName: str, createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/dataset-items\n@desc Get dataset items. Optionally specify a version to get the items as they existed at that point in time.\n@optional {datasetName: str, sourceTraceId: str, sourceObservationId: str, version: str(date-time) # ISO 8601 timestamp (RFC 3339, Section 5.6) in UTC (e.g., \"2026-01-21T14:35:42Z\"). If provided, returns state of dataset at this timestamp. If not provided, returns the latest version. 
Requires datasetName to be specified., page: int # page number, starts at 1, limit: int # limit of items per page}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/dataset-items/{id}\n@desc Get a dataset item\n@required {id: str}\n@returns(200) {id: str, status: str, input: any, expectedOutput: any, metadata: any, sourceTraceId: str?, sourceObservationId: str?, datasetId: str, datasetName: str, createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/dataset-items/{id}\n@desc Delete a dataset item and all its run items. This action is irreversible.\n@required {id: str}\n@returns(200) {message: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/dataset-run-items\n@desc Create a dataset run item\n@required {runName: str, datasetItemId: str}\n@optional {runDescription: str # Description of the run. If run exists, description will be updated., metadata: any # Metadata of the dataset run, updates run if run already exists, observationId: str, traceId: str # traceId should always be provided. For compatibility with older SDK versions it can also be inferred from the provided observationId., datasetVersion: str(date-time) # ISO 8601 timestamp (RFC 3339, Section 5.6) in UTC (e.g., \"2026-01-21T14:35:42Z\"). Specifies the dataset version to use for this experiment run.  If provided, the experiment will use dataset items as they existed at or before this timestamp. If not provided, uses the latest version of dataset items., createdAt: str(date-time) # Optional timestamp to set the createdAt field of the dataset run item. 
If not provided or null, defaults to current timestamp.}\n@returns(200) {id: str, datasetRunId: str, datasetRunName: str, datasetItemId: str, traceId: str, observationId: str?, createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/dataset-run-items\n@desc List dataset run items\n@required {datasetId: str, runName: str}\n@optional {page: int # page number, starts at 1, limit: int # limit of items per page}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/v2/datasets\n@desc Get all datasets\n@optional {page: int # page number, starts at 1, limit: int # limit of items per page}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/v2/datasets\n@desc Create a dataset\n@required {name: str}\n@optional {description: str, metadata: any, inputSchema: any # JSON Schema for validating dataset item inputs. When set, all new and existing dataset items will be validated against this schema., expectedOutputSchema: any # JSON Schema for validating dataset item expected outputs. 
When set, all new and existing dataset items will be validated against this schema.}\n@returns(200) {id: str, name: str, description: str?, metadata: any, inputSchema: any?, expectedOutputSchema: any?, projectId: str, createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/v2/datasets/{datasetName}\n@desc Get a dataset\n@required {datasetName: str}\n@returns(200) {id: str, name: str, description: str?, metadata: any, inputSchema: any?, expectedOutputSchema: any?, projectId: str, createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/datasets/{datasetName}/runs/{runName}\n@desc Get a dataset run and its items\n@required {datasetName: str, runName: str}\n@returns(200) {datasetRunItems: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/datasets/{datasetName}/runs/{runName}\n@desc Delete a dataset run and all its run items. This action is irreversible.\n@required {datasetName: str, runName: str}\n@returns(200) {message: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/datasets/{datasetName}/runs\n@desc Get dataset runs\n@required {datasetName: str}\n@optional {page: int # page number, starts at 1, limit: int # limit of items per page}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/health\n@desc Check health of API and database\n@returns(200) {version: str, status: str}\n@errors {400, 401, 403, 404, 405, 503}\n\n@endpoint POST /api/public/ingestion\n@desc **Legacy endpoint for batch ingestion for Langfuse Observability.**\n@required {batch: [any] # Batch of tracing events to be ingested. Discriminated by attribute `type`.}\n@optional {metadata: any # Optional. 
Metadata field used by the Langfuse SDKs for debugging.}\n@returns(207) {successes: [map], errors: [map]}\n@errors {400, 401, 403, 404, 405}\n@example_request {\"batch\":[{\"id\":\"abcdef-1234-5678-90ab\",\"timestamp\":\"2022-01-01T00:00:00.000Z\",\"type\":\"trace-create\",\"body\":{\"id\":\"abcdef-1234-5678-90ab\",\"timestamp\":\"2022-01-01T00:00:00.000Z\",\"environment\":\"production\",\"name\":\"My Trace\",\"userId\":\"1234-5678-90ab-cdef\",\"input\":\"My input\",\"output\":\"My output\",\"sessionId\":\"1234-5678-90ab-cdef\",\"release\":\"1.0.0\",\"version\":\"1.0.0\",\"metadata\":\"My metadata\",\"tags\":[\"tag1\",\"tag2\"],\"public\":true}}]}\n\n@endpoint GET /api/public/metrics\n@desc Get metrics from the Langfuse project using a query object.\n@required {query: str # JSON string containing the query parameters with the following structure: ```json {   \"view\": string,           // Required. One of \"traces\", \"observations\", \"scores-numeric\", \"scores-categorical\"   \"dimensions\": [           // Optional. Default: []     {       \"field\": string       // Field to group by, e.g. \"name\", \"userId\", \"sessionId\"     }   ],   \"metrics\": [              // Required. At least one metric must be provided     {       \"measure\": string,    // What to measure, e.g. \"count\", \"latency\", \"value\"       \"aggregation\": string // How to aggregate, e.g. \"count\", \"sum\", \"avg\", \"p95\", \"histogram\"     }   ],   \"filters\": [              // Optional. Default: []     {       \"column\": string,     // Column to filter on       \"operator\": string,   // Operator, e.g. \"=\", \">\", \"<\", \"contains\"       \"value\": any,         // Value to compare against       \"type\": string,       // Data type, e.g. \"string\", \"number\", \"stringObject\"       \"key\": string         // Required only when filtering on metadata     }   ],   \"timeDimension\": {        // Optional. Default: null. 
If provided, results will be grouped by time     \"granularity\": string   // One of \"minute\", \"hour\", \"day\", \"week\", \"month\", \"auto\"   },   \"fromTimestamp\": string,  // Required. ISO datetime string for start of time range   \"toTimestamp\": string,    // Required. ISO datetime string for end of time range   \"orderBy\": [              // Optional. Default: null     {       \"field\": string,      // Field to order by       \"direction\": string   // \"asc\" or \"desc\"     }   ],   \"config\": {               // Optional. Query-specific configuration     \"bins\": number,         // Optional. Number of bins for histogram (1-100), default: 10     \"row_limit\": number     // Optional. Row limit for results (1-1000)   } } ```}\n@returns(200) {data: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/observations/{observationId}\n@desc Get an observation\n@required {observationId: str # The unique langfuse identifier of an observation, can be an event, span or generation}\n@returns(200) {promptName: str?, promptVersion: int?, modelId: str?, inputPrice: num(double)?, outputPrice: num(double)?, totalPrice: num(double)?, calculatedInputCost: num(double)?, calculatedOutputCost: num(double)?, calculatedTotalCost: num(double)?, latency: num(double)?, timeToFirstToken: num(double)?}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/observations\n@desc Get a list of observations.\n@optional {page: int # Page number, starts at 1., limit: int # Limit of items per page. If you encounter API issues due to too large page sizes, try to reduce the limit., name: str, userId: str, type: str, traceId: str, level: str # Optional filter for observations with a specific level (e.g. 
\"DEBUG\", \"DEFAULT\", \"WARNING\", \"ERROR\")., parentObservationId: str, environment: [str] # Optional filter for observations where the environment is one of the provided values., fromStartTime: str(date-time) # Retrieve only observations with a start_time on or after this datetime (ISO 8601)., toStartTime: str(date-time) # Retrieve only observations with a start_time before this datetime (ISO 8601)., version: str # Optional filter to only include observations with a certain version., filter: str # JSON string containing an array of filter conditions. When provided, this takes precedence over query parameter filters (userId, name, type, level, environment, fromStartTime, ...).  ## Filter Structure Each filter condition has the following structure: ```json [   {     \"type\": string,           // Required. One of: \"datetime\", \"string\", \"number\", \"stringOptions\", \"categoryOptions\", \"arrayOptions\", \"stringObject\", \"numberObject\", \"boolean\", \"null\"     \"column\": string,         // Required. Column to filter on (see available columns below)     \"operator\": string,       // Required. Operator based on type:                               // - datetime: \">\", \"<\", \">=\", \"<=\"                               // - string: \"=\", \"contains\"                               // - number: \"=\", \">\", \"<\", \">=\", \"<=\"                               // - null: \"is null\", \"is not null\"     \"value\": any,             // Required (except for null type). Value to compare against. 
Type depends on filter type     \"key\": string             // Required only for stringObject, numberObject, and categoryOptions types when filtering on nested fields like metadata   } ] ```  ## Available Columns  ### Core Observation Fields - `id` (string) - Observation ID - `type` (string) - Observation type (SPAN, GENERATION, EVENT) - `name` (string) - Observation name - `traceId` (string) - Associated trace ID - `startTime` (datetime) - Observation start time - `endTime` (datetime) - Observation end time - `environment` (string) - Environment tag - `level` (string) - Log level (DEBUG, DEFAULT, WARNING, ERROR) - `statusMessage` (string) - Status message - `version` (string) - Version tag  ### Performance Metrics - `latency` (number) - Latency in seconds (calculated: end_time - start_time) - `timeToFirstToken` (number) - Time to first token in seconds - `tokensPerSecond` (number) - Output tokens per second  ### Token Usage - `inputTokens` (number) - Number of input tokens - `outputTokens` (number) - Number of output tokens - `totalTokens` (number) - Total tokens (alias: `tokens`)  ### Cost Metrics - `inputCost` (number) - Input cost in USD - `outputCost` (number) - Output cost in USD - `totalCost` (number) - Total cost in USD  ### Model Information - `model` (string) - Provided model name - `promptName` (string) - Associated prompt name - `promptVersion` (number) - Associated prompt version  ### Structured Data - `metadata` (stringObject/numberObject/categoryOptions) - Metadata key-value pairs. Use `key` parameter to filter on specific metadata keys.  
### Associated Trace Fields (requires join with traces table) - `userId` (string) - User ID from associated trace - `traceName` (string) - Name from associated trace - `traceEnvironment` (string) - Environment from associated trace - `traceTags` (arrayOptions) - Tags from associated trace  ## Filter Examples ```json [   {     \"type\": \"string\",     \"column\": \"type\",     \"operator\": \"=\",     \"value\": \"GENERATION\"   },   {     \"type\": \"number\",     \"column\": \"latency\",     \"operator\": \">=\",     \"value\": 2.5   },   {     \"type\": \"stringObject\",     \"column\": \"metadata\",     \"key\": \"environment\",     \"operator\": \"=\",     \"value\": \"production\"   } ] ```}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/scores\n@desc Create a score (supports both trace and session scores)\n@required {name: str, value: any # The value of the score. Must be passed as string for categorical scores, and numeric for boolean and numeric scores}\n@optional {id: str, traceId: str, sessionId: str, observationId: str, datasetRunId: str, comment: str, metadata: map, environment: str # The environment of the score. Can be any lowercase alphanumeric string with hyphens and underscores that does not start with 'langfuse'., queueId: str # The annotation queue referenced by the score. Indicates if score was initially created while processing annotation queue., dataType: str(NUMERIC/BOOLEAN/CATEGORICAL/CORRECTION), configId: str # Reference a score config on a score. The unique langfuse identifier of a score config. 
When passing this field, the dataType and stringValue fields are automatically populated.}\n@returns(200) {id: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/scores/{scoreId}\n@desc Delete a score (supports both trace and session scores)\n@required {scoreId: str # The unique langfuse identifier of a score}\n@returns(204)\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/llm-connections\n@desc Get all LLM connections in a project\n@optional {page: int # page number, starts at 1, limit: int # limit of items per page}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint PUT /api/public/llm-connections\n@desc Create or update an LLM connection. The connection is upserted on provider.\n@required {provider: str # Provider name (e.g., 'openai', 'my-gateway'). Must be unique in project, used for upserting., adapter: str(anthropic/openai/azure/bedrock/google-vertex-ai/google-ai-studio), secretKey: str # Secret key for the LLM API.}\n@optional {baseURL: str # Custom base URL for the LLM API, customModels: [str] # List of custom model names, withDefaultModels: bool # Whether to include default models. Default is true., extraHeaders: map # Extra headers to send with requests, config: map # Adapter-specific configuration. Validation rules: - **Bedrock**: Required. Must be `{\"region\": \"<region>\"}` (e.g., `{\"region\":\"us-east-1\"}`) - **VertexAI**: Optional. If provided, must be `{\"location\": \"<location>\"}` (e.g., `{\"location\":\"us-central1\"}`) - **Other adapters**: Not supported. 
Omit this field or set to null.}\n@returns(200) {id: str, provider: str, adapter: str, displaySecretKey: str, baseURL: str?, customModels: [str], withDefaultModels: bool, extraHeaderKeys: [str], config: map?, createdAt: str(date-time), updatedAt: str(date-time)}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/media/{mediaId}\n@desc Get a media record\n@required {mediaId: str # The unique langfuse identifier of a media record}\n@returns(200) {mediaId: str, contentType: str, contentLength: int, uploadedAt: str(date-time), url: str, urlExpiry: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint PATCH /api/public/media/{mediaId}\n@desc Patch a media record\n@required {mediaId: str # The unique langfuse identifier of a media record, uploadedAt: str(date-time) # The date and time when the media record was uploaded, uploadHttpStatus: int # The HTTP status code of the upload}\n@optional {uploadHttpError: str # The HTTP error message of the upload, uploadTimeMs: int # The time in milliseconds it took to upload the media record}\n@returns(204)\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/media\n@desc Get a presigned upload URL for a media record\n@required {traceId: str # The trace ID associated with the media record, contentType: 
str(image/png/image/jpeg/image/jpg/image/webp/image/gif/image/svg+xml/image/tiff/image/bmp/image/avif/image/heic/audio/mpeg/audio/mp3/audio/wav/audio/ogg/audio/oga/audio/aac/audio/mp4/audio/flac/audio/opus/audio/webm/video/mp4/video/webm/video/ogg/video/mpeg/video/quicktime/video/x-msvideo/video/x-matroska/text/plain/text/html/text/css/text/csv/text/markdown/text/x-python/application/javascript/text/x-typescript/application/x-yaml/application/pdf/application/msword/application/vnd.ms-excel/application/vnd.openxmlformats-officedocument.spreadsheetml.sheet/application/zip/application/json/application/xml/application/octet-stream/application/vnd.openxmlformats-officedocument.wordprocessingml.document/application/vnd.openxmlformats-officedocument.presentationml.presentation/application/rtf/application/x-ndjson/application/vnd.apache.parquet/application/gzip/application/x-tar/application/x-7z-compressed) # The MIME type of the media record, contentLength: int # The size of the media record in bytes, sha256Hash: str # The SHA-256 hash of the media record, field: str # The trace / observation field the media record is associated with. This can be one of `input`, `output`, `metadata`}\n@optional {observationId: str # The observation ID associated with the media record. If the media record is associated directly with a trace, this will be null.}\n@returns(200) {uploadUrl: str?, mediaId: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/v2/metrics\n@desc Get metrics from the Langfuse project using a query object. V2 endpoint with optimized performance.\n@required {query: str # JSON string containing the query parameters with the following structure: ```json {   \"view\": string,           // Required. One of \"observations\", \"scores-numeric\", \"scores-categorical\"   \"dimensions\": [           // Optional. 
Default: []     {       \"field\": string       // Field to group by (see available dimensions above)     }   ],   \"metrics\": [              // Required. At least one metric must be provided     {       \"measure\": string,    // What to measure (see available measures above)       \"aggregation\": string // How to aggregate: \"sum\", \"avg\", \"count\", \"max\", \"min\", \"p50\", \"p75\", \"p90\", \"p95\", \"p99\", \"histogram\"     }   ],   \"filters\": [              // Optional. Default: []     {       \"column\": string,     // Column to filter on (any dimension field)       \"operator\": string,   // Operator based on type:                             // - datetime: \">\", \"<\", \">=\", \"<=\"                             // - number: \"=\", \">\", \"<\", \">=\", \"<=\"                             // - null: \"is null\", \"is not null\"       \"value\": any,         // Value to compare against       \"type\": string,       // Data type: \"datetime\", \"string\", \"number\", \"stringOptions\", \"categoryOptions\", \"arrayOptions\", \"stringObject\", \"numberObject\", \"boolean\", \"null\"       \"key\": string         // Required only for stringObject/numberObject types (e.g., metadata filtering)     }   ],   \"timeDimension\": {        // Optional. Default: null. If provided, results will be grouped by time     \"granularity\": string   // One of \"auto\", \"minute\", \"hour\", \"day\", \"week\", \"month\"   },   \"fromTimestamp\": string,  // Required. ISO datetime string for start of time range   \"toTimestamp\": string,    // Required. ISO datetime string for end of time range (must be after fromTimestamp)   \"orderBy\": [              // Optional. Default: null     {       \"field\": string,      // Field to order by (dimension or metric alias)       \"direction\": string   // \"asc\" or \"desc\"     }   ],   \"config\": {               // Optional. Query-specific configuration     \"bins\": number,         // Optional. Number of bins for histogram aggregation (1-100), default: 10     \"row_limit\": number     // Optional. 
Maximum number of rows to return (1-1000), default: 100   } } ```}\n@returns(200) {data: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/models\n@desc Create a model\n@required {modelName: str # Name of the model definition. If multiple with the same name exist, they are applied in the following order: (1) custom over built-in, (2) newest according to startTime where model.startTime<observation.startTime, matchPattern: str # Regex pattern which matches this model definition to generation.model. Useful in case of fine-tuned models. If you want an exact match, use `(?i)^modelname$`}\n@optional {startDate: str(date-time) # Apply only to generations which are newer than this ISO date., unit: str(CHARACTERS/TOKENS/MILLISECONDS/SECONDS/IMAGES/REQUESTS) # Unit of usage in Langfuse, inputPrice: num(double) # Deprecated. Use 'pricingTiers' instead. Price (USD) per input unit. Creates a default tier if pricingTiers not provided., outputPrice: num(double) # Deprecated. Use 'pricingTiers' instead. Price (USD) per output unit. Creates a default tier if pricingTiers not provided., totalPrice: num(double) # Deprecated. Use 'pricingTiers' instead. Price (USD) per total units. Cannot be set if input or output price is set. Creates a default tier if pricingTiers not provided., pricingTiers: [map{name!: str, isDefault!: bool, priority!: int, conditions!: [map], prices!: map}] # Optional. Array of pricing tiers for this model.  
Use pricing tiers for all models - both those with threshold-based pricing variations and those with simple flat pricing:  - For models with standard flat pricing: Create a single default tier with your prices   (e.g., one tier with isDefault=true, priority=0, conditions=[], and your standard prices)  - For models with threshold-based pricing: Create a default tier plus additional conditional tiers   (e.g., default tier for standard usage + high-volume tier for usage above certain thresholds)  Requirements: - Cannot be provided with flat prices (inputPrice/outputPrice/totalPrice) - use one approach or the other - Must include exactly one default tier with isDefault=true, priority=0, and conditions=[] - All tier names and priorities must be unique within the model - Each tier must define at least one price  If omitted, you must provide flat prices instead (inputPrice/outputPrice/totalPrice), which will automatically create a single default tier named \"Standard\"., tokenizerId: str # Optional. Tokenizer to be applied to observations which match to this model. See docs for more details., tokenizerConfig: any # Optional. Configuration for the selected tokenizer. Needs to be JSON. 
See docs for more details.}\n@returns(200) {id: str, modelName: str, matchPattern: str, startDate: str(date-time)?, unit: str, inputPrice: num(double)?, outputPrice: num(double)?, totalPrice: num(double)?, tokenizerId: str?, tokenizerConfig: any, isLangfuseManaged: bool, createdAt: str(date-time), prices: map, pricingTiers: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/models\n@desc Get all models\n@optional {page: int # page number, starts at 1, limit: int # limit of items per page}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/models/{id}\n@desc Get a model\n@required {id: str}\n@returns(200) {id: str, modelName: str, matchPattern: str, startDate: str(date-time)?, unit: str, inputPrice: num(double)?, outputPrice: num(double)?, totalPrice: num(double)?, tokenizerId: str?, tokenizerConfig: any, isLangfuseManaged: bool, createdAt: str(date-time), prices: map, pricingTiers: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/models/{id}\n@desc Delete a model. Cannot delete models managed by Langfuse. You can create your own definition with the same modelName to override the definition though.\n@required {id: str}\n@returns(204)\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/v2/observations\n@desc Get a list of observations with cursor-based pagination and flexible field selection.\n@optional {fields: str # Comma-separated list of field groups to include in the response. Available groups: core, basic, time, io, metadata, model, usage, prompt, metrics. If not specified, `core` and `basic` field groups are returned. Example: \"basic,usage,model\", expandMetadata: str # Comma-separated list of metadata keys to return non-truncated. By default, metadata values over 200 characters are truncated. Use this parameter to retrieve full values for specific keys. 
Example: \"key1,key2\", limit: int # Number of items to return per page. Maximum 1000, default 50., cursor: str # Base64-encoded cursor for pagination. Use the cursor from the previous response to get the next page., parseIoAsJson: bool # **Deprecated.** Setting this to `true` will return a 400 error. Input/output fields are always returned as raw strings. Remove this parameter or set it to `false`., name: str, userId: str, type: str # Filter by observation type (e.g., \"GENERATION\", \"SPAN\", \"EVENT\", \"AGENT\", \"TOOL\", \"CHAIN\", \"RETRIEVER\", \"EVALUATOR\", \"EMBEDDING\", \"GUARDRAIL\"), traceId: str, level: str # Optional filter for observations with a specific level (e.g. \"DEBUG\", \"DEFAULT\", \"WARNING\", \"ERROR\")., parentObservationId: str, environment: [str] # Optional filter for observations where the environment is one of the provided values., fromStartTime: str(date-time) # Retrieve only observations with a start_time on or after this datetime (ISO 8601)., toStartTime: str(date-time) # Retrieve only observations with a start_time before this datetime (ISO 8601)., version: str # Optional filter to only include observations with a certain version., filter: str # JSON string containing an array of filter conditions. When provided, this takes precedence over query parameter filters (userId, name, type, level, environment, fromStartTime, ...).  ## Filter Structure Each filter condition has the following structure: ```json [   {     \"type\": string,           // Required. One of: \"datetime\", \"string\", \"number\", \"stringOptions\", \"categoryOptions\", \"arrayOptions\", \"stringObject\", \"numberObject\", \"boolean\", \"null\"     \"column\": string,         // Required. Column to filter on (see available columns below)     \"operator\": string,       // Required. 
Operator based on type:                               // - datetime: \">\", \"<\", \">=\", \"<=\"                               // - string: \"=\", \"contains\"                               // - number: \"=\", \">\", \"<\", \">=\", \"<=\"                               // - null: \"is null\", \"is not null\"     \"value\": any,             // Required (except for null type). Value to compare against. Type depends on filter type     \"key\": string             // Required only for stringObject, numberObject, and categoryOptions types when filtering on nested fields like metadata   } ] ```  ## Available Columns  ### Core Observation Fields - `id` (string) - Observation ID - `type` (string) - Observation type (SPAN, GENERATION, EVENT) - `name` (string) - Observation name - `traceId` (string) - Associated trace ID - `startTime` (datetime) - Observation start time - `endTime` (datetime) - Observation end time - `environment` (string) - Environment tag - `level` (string) - Log level (DEBUG, DEFAULT, WARNING, ERROR) - `statusMessage` (string) - Status message - `version` (string) - Version tag - `userId` (string) - User ID - `sessionId` (string) - Session ID  ### Trace-Related Fields - `traceName` (string) - Name of the parent trace - `traceTags` (arrayOptions) - Tags from the parent trace - `tags` (arrayOptions) - Alias for traceTags  ### Performance Metrics - `latency` (number) - Latency in seconds (calculated: end_time - start_time) - `timeToFirstToken` (number) - Time to first token in seconds - `tokensPerSecond` (number) - Output tokens per second  ### Token Usage - `inputTokens` (number) - Number of input tokens - `outputTokens` (number) - Number of output tokens - `totalTokens` (number) - Total tokens (alias: `tokens`)  ### Cost Metrics - `inputCost` (number) - Input cost in USD - `outputCost` (number) - Output cost in USD - `totalCost` (number) - Total cost in USD  ### Model Information - `model` (string) - Provided model name (alias: `providedModelName`) - `promptName` (string) - Associated prompt name - `promptVersion` (number) - Associated prompt version  ### Structured Data - 
`metadata` (stringObject/numberObject/categoryOptions) - Metadata key-value pairs. Use `key` parameter to filter on specific metadata keys.  ## Filter Examples ```json [   {     \"type\": \"string\",     \"column\": \"type\",     \"operator\": \"=\",     \"value\": \"GENERATION\"   },   {     \"type\": \"number\",     \"column\": \"latency\",     \"operator\": \">=\",     \"value\": 2.5   },   {     \"type\": \"stringObject\",     \"column\": \"metadata\",     \"key\": \"environment\",     \"operator\": \"=\",     \"value\": \"production\"   } ] ```}\n@returns(200) {data: [map], meta: map{cursor: str?}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/otel/v1/traces\n@desc **OpenTelemetry Traces Ingestion Endpoint**\n@required {resourceSpans: [map{resource: map, scopeSpans: [map]}] # Array of resource spans containing trace data as defined in the OTLP specification}\n@returns(200)\n@errors {400, 401, 403, 404, 405}\n@example_request {\"resourceSpans\":[{\"resource\":{\"attributes\":[{\"key\":\"service.name\",\"value\":{\"stringValue\":\"my-service\"}},{\"key\":\"service.version\",\"value\":{\"stringValue\":\"1.0.0\"}}]},\"scopeSpans\":[{\"scope\":{\"name\":\"langfuse-sdk\",\"version\":\"2.60.3\"},\"spans\":[{\"traceId\":\"0123456789abcdef0123456789abcdef\",\"spanId\":\"0123456789abcdef\",\"name\":\"my-operation\",\"kind\":1,\"startTimeUnixNano\":\"1747872000000000000\",\"endTimeUnixNano\":\"1747872001000000000\",\"attributes\":[{\"key\":\"langfuse.observation.type\",\"value\":{\"stringValue\":\"generation\"}}],\"status\":{}}]}]}]}\n\n@endpoint GET /api/public/organizations/memberships\n@desc Get all memberships for the organization associated with the API key (requires organization-scoped API key)\n@returns(200) {memberships: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint PUT /api/public/organizations/memberships\n@desc Create or update a membership for the organization associated with the API key (requires organization-scoped API 
key)\n@required {userId: str, role: str(OWNER/ADMIN/MEMBER/VIEWER)}\n@returns(200) {userId: str, role: str, email: str, name: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/organizations/memberships\n@desc Delete a membership from the organization associated with the API key (requires organization-scoped API key)\n@required {userId: str}\n@returns(200) {message: str, userId: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/projects/{projectId}/memberships\n@desc Get all memberships for a specific project (requires organization-scoped API key)\n@required {projectId: str}\n@returns(200) {memberships: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint PUT /api/public/projects/{projectId}/memberships\n@desc Create or update a membership for a specific project (requires organization-scoped API key). The user must already be a member of the organization.\n@required {projectId: str, userId: str, role: str(OWNER/ADMIN/MEMBER/VIEWER)}\n@returns(200) {userId: str, role: str, email: str, name: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/projects/{projectId}/memberships\n@desc Delete a membership from a specific project (requires organization-scoped API key). The user must be a member of the organization.\n@required {projectId: str, userId: str}\n@returns(200) {message: str, userId: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/organizations/projects\n@desc Get all projects for the organization associated with the API key (requires organization-scoped API key)\n@returns(200) {projects: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/organizations/apiKeys\n@desc Get all API keys for the organization associated with the API key (requires organization-scoped API key)\n@returns(200) {apiKeys: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/projects\n@desc Get Project associated with API key (requires project-scoped API key). 
You can use GET /api/public/organizations/projects to get all projects with an organization-scoped key.\n@returns(200) {data: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/projects\n@desc Create a new project (requires organization-scoped API key)\n@required {name: str, retention: int # Number of days to retain data. Must be 0 or at least 3 days. Requires data-retention entitlement for non-zero values. Optional.}\n@optional {metadata: map # Optional metadata for the project}\n@returns(200) {id: str, name: str, organization: map{id: str, name: str}, metadata: map, retentionDays: int?}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint PUT /api/public/projects/{projectId}\n@desc Update a project by ID (requires organization-scoped API key).\n@required {projectId: str, name: str}\n@optional {metadata: map # Optional metadata for the project, retention: int # Number of days to retain data. Must be 0 or at least 3 days. Requires data-retention entitlement for non-zero values. Optional. Will retain existing retention setting if omitted.}\n@returns(200) {id: str, name: str, organization: map{id: str, name: str}, metadata: map, retentionDays: int?}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/projects/{projectId}\n@desc Delete a project by ID (requires organization-scoped API key). Project deletion is processed asynchronously.\n@required {projectId: str}\n@returns(202) {success: bool, message: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/projects/{projectId}/apiKeys\n@desc Get all API keys for a project (requires organization-scoped API key)\n@required {projectId: str}\n@returns(200) {apiKeys: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/projects/{projectId}/apiKeys\n@desc Create a new API key for a project (requires organization-scoped API key)\n@required {projectId: str}\n@optional {note: str # Optional note for the API key, publicKey: str # Optional predefined public key. 
Must start with 'pk-lf-'. If provided, secretKey must also be provided., secretKey: str # Optional predefined secret key. Must start with 'sk-lf-'. If provided, publicKey must also be provided.}\n@returns(200) {id: str, createdAt: str(date-time), publicKey: str, secretKey: str, displaySecretKey: str, note: str?}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/projects/{projectId}/apiKeys/{apiKeyId}\n@desc Delete an API key for a project (requires organization-scoped API key)\n@required {projectId: str, apiKeyId: str}\n@returns(200) {success: bool}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint PATCH /api/public/v2/prompts/{name}/versions/{version}\n@desc Update labels for a specific prompt version\n@required {name: str # The name of the prompt. If the prompt is in a folder (e.g., \"folder/subfolder/prompt-name\"),  the folder path must be URL encoded., version: int # Version of the prompt to update, newLabels: [str] # New labels for the prompt version. Labels are unique across versions. The \"latest\" label is reserved and managed by Langfuse.}\n@returns(200)\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/v2/prompts/{promptName}\n@desc Get a prompt\n@required {promptName: str # The name of the prompt. If the prompt is in a folder (e.g., \"folder/subfolder/prompt-name\"),  the folder path must be URL encoded.}\n@optional {version: int # Version of the prompt to be retrieved., label: str # Label of the prompt to be retrieved. Defaults to \"production\" if no label or version is set., resolve: bool # Resolve prompt dependencies before returning the prompt. Defaults to `true`. Set to `false` to return the raw stored prompt with dependency tags intact. This bypasses prompt caching and is intended for debugging or one-off jobs, not production runtime fetches.}\n@returns(200)\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/v2/prompts/{promptName}\n@desc Delete prompt versions. 
If neither version nor label is specified, all versions of the prompt are deleted.\n@required {promptName: str # The name of the prompt}\n@optional {label: str # Optional label to filter deletion. If specified, deletes all prompt versions that have this label., version: int # Optional version to filter deletion. If specified, deletes only this specific version of the prompt.}\n@returns(204)\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/v2/prompts\n@desc Get a list of prompt names with versions and labels\n@optional {name: str, label: str, tag: str, page: int # page number, starts at 1, limit: int # limit of items per page, fromUpdatedAt: str(date-time) # Optional filter to only include prompt versions created/updated on or after a certain datetime (ISO 8601), toUpdatedAt: str(date-time) # Optional filter to only include prompt versions created/updated before a certain datetime (ISO 8601)}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/v2/prompts\n@desc Create a new version for the prompt with the given `name`\n@returns(200)\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/scim/ServiceProviderConfig\n@desc Get SCIM Service Provider Configuration (requires organization-scoped API key)\n@returns(200) {schemas: [str], documentationUri: str, patch: map{supported: bool}, bulk: map{supported: bool, maxOperations: int, maxPayloadSize: int}, filter: map{supported: bool, maxResults: int}, changePassword: map{supported: bool}, sort: map{supported: bool}, etag: map{supported: bool}, authenticationSchemes: [map], meta: map{resourceType: str, location: str}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/scim/ResourceTypes\n@desc Get SCIM Resource Types (requires organization-scoped API key)\n@returns(200) {schemas: [str], totalResults: int, Resources: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET 
/api/public/scim/Schemas\n@desc Get SCIM Schemas (requires organization-scoped API key)\n@returns(200) {schemas: [str], totalResults: int, Resources: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/scim/Users\n@desc List users in the organization (requires organization-scoped API key)\n@optional {filter: str # Filter expression (e.g. userName eq \"value\"), startIndex: int # 1-based index of the first result to return (default 1), count: int # Maximum number of results to return (default 100)}\n@returns(200) {schemas: [str], totalResults: int, startIndex: int, itemsPerPage: int, Resources: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/scim/Users\n@desc Create a new user in the organization (requires organization-scoped API key)\n@required {userName: str # User's email address (required), name: map{formatted: str}}\n@optional {emails: [map{primary!: bool, value!: str, type!: str}] # User's email addresses, active: bool # Whether the user is active, password: str # Initial password for the user}\n@returns(200) {schemas: [str], id: str, userName: str, name: map{formatted: str?}, emails: [map], meta: map{resourceType: str, created: str?, lastModified: str?}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/scim/Users/{userId}\n@desc Get a specific user by ID (requires organization-scoped API key)\n@required {userId: str}\n@returns(200) {schemas: [str], id: str, userName: str, name: map{formatted: str?}, emails: [map], meta: map{resourceType: str, created: str?, lastModified: str?}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/scim/Users/{userId}\n@desc Remove a user from the organization (requires organization-scoped API key). 
Note that this only removes the user from the organization but does not delete the user entity itself.\n@required {userId: str}\n@returns(200)\n@errors {400, 401, 403, 404, 405}\n\n@endpoint POST /api/public/score-configs\n@desc Create a score configuration (config). Score configs are used to define the structure of scores\n@required {name: str, dataType: str(NUMERIC/BOOLEAN/CATEGORICAL)}\n@optional {categories: [map{value!: num(double), label!: str}] # Configure custom categories for categorical scores. Pass a list of objects with `label` and `value` properties. Categories are autogenerated for boolean configs and cannot be passed, minValue: num(double) # Configure a minimum value for numerical scores. If not set, the minimum value defaults to -∞, maxValue: num(double) # Configure a maximum value for numerical scores. If not set, the maximum value defaults to +∞, description: str # Description is shown across the Langfuse UI and can be used to e.g. explain the config categories in detail, why a numeric range was set, or provide additional context on config name or usage}\n@returns(200) {id: str, name: str, createdAt: str(date-time), updatedAt: str(date-time), projectId: str, dataType: str, isArchived: bool, minValue: num(double)?, maxValue: num(double)?, categories: [map]?, description: str?}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/score-configs\n@desc Get all score configs\n@optional {page: int # Page number, starts at 1., limit: int # Limit of items per page. 
If you encounter API issues due to too large page sizes, try to reduce the limit}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/score-configs/{configId}\n@desc Get a score config\n@required {configId: str # The unique langfuse identifier of a score config}\n@returns(200) {id: str, name: str, createdAt: str(date-time), updatedAt: str(date-time), projectId: str, dataType: str, isArchived: bool, minValue: num(double)?, maxValue: num(double)?, categories: [map]?, description: str?}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint PATCH /api/public/score-configs/{configId}\n@desc Update a score config\n@required {configId: str # The unique langfuse identifier of a score config}\n@optional {isArchived: bool # Whether the score config is archived, name: str # The name of the score config, categories: [map{value!: num(double), label!: str}] # Configure custom categories for categorical scores. Pass a list of objects with `label` and `value` properties. Categories are autogenerated for boolean configs and cannot be passed, minValue: num(double) # Configure a minimum value for numerical scores. If not set, the minimum value defaults to -∞, maxValue: num(double) # Configure a maximum value for numerical scores. If not set, the maximum value defaults to +∞, description: str # Description is shown across the Langfuse UI and can be used to e.g. 
explain the config categories in detail, why a numeric range was set, or provide additional context on config name or usage}\n@returns(200) {id: str, name: str, createdAt: str(date-time), updatedAt: str(date-time), projectId: str, dataType: str, isArchived: bool, minValue: num(double)?, maxValue: num(double)?, categories: [map]?, description: str?}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/v2/scores\n@desc Get a list of scores (supports both trace and session scores)\n@optional {page: int # Page number, starts at 1., limit: int # Limit of items per page. If you encounter API issues due to too large page sizes, try to reduce the limit., userId: str # Retrieve only scores with this userId associated to the trace., name: str # Retrieve only scores with this name., fromTimestamp: str(date-time) # Optional filter to only include scores created on or after a certain datetime (ISO 8601), toTimestamp: str(date-time) # Optional filter to only include scores created before a certain datetime (ISO 8601), environment: [str] # Optional filter for scores where the environment is one of the provided values., source: str # Retrieve only scores from a specific source., operator: str # Comparison operator applied together with the value parameter when filtering by numeric score value., value: num(double) # Retrieve only scores matching this value, compared using the operator parameter., scoreIds: str # Comma-separated list of score IDs to limit the results to., configId: str # Retrieve only scores with a specific configId., sessionId: str # Retrieve only scores with a specific sessionId., datasetRunId: str # Retrieve only scores with a specific datasetRunId., traceId: str # Retrieve only scores with a specific traceId., observationId: str # Comma-separated list of observation IDs to filter scores by., queueId: str # Retrieve only scores with a specific annotation queueId., dataType: str # Retrieve only scores with a specific dataType., traceTags: [str] # Only scores linked to traces that include all of these tags will be returned., fields: str # Comma-separated list of 
field groups to include in the response. Available field groups: 'score' (core score fields), 'trace' (trace properties: userId, tags, environment, sessionId). If not specified, both 'score' and 'trace' are returned by default. Example: 'score' to exclude trace data, 'score,trace' to include both. Note: When filtering by trace properties (using userId or traceTags parameters), the 'trace' field group must be included, otherwise a 400 error will be returned., filter: str # A JSON stringified array of filter objects. Each object requires type, column, operator, and value. Supports filtering by score metadata using the stringObject type. Example: [{\"type\":\"stringObject\",\"column\":\"metadata\",\"key\":\"user_id\",\"operator\":\"=\",\"value\":\"abc123\"}]. Supported types: stringObject (metadata key-value filtering), string, number, datetime, stringOptions, arrayOptions. Supported operators for stringObject: =, contains, does not contain, starts with, ends with.}\n@returns(200) {data: [any], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/v2/scores/{scoreId}\n@desc Get a score (supports both trace and session scores)\n@required {scoreId: str # The unique langfuse identifier of a score}\n@returns(200)\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/sessions\n@desc Get sessions\n@optional {page: int # Page number, starts at 1, limit: int # Limit of items per page. 
If you encounter API issues due to too large page sizes, try to reduce the limit., fromTimestamp: str(date-time) # Optional filter to only include sessions created on or after a certain datetime (ISO 8601), toTimestamp: str(date-time) # Optional filter to only include sessions created before a certain datetime (ISO 8601), environment: [str] # Optional filter for sessions where the environment is one of the provided values.}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/sessions/{sessionId}\n@desc Get a session. Please note that `traces` on this endpoint are not paginated; if you plan to fetch large sessions, consider `GET /api/public/traces?sessionId=`\n@required {sessionId: str # The unique id of a session}\n@returns(200) {traces: [map]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/traces/{traceId}\n@desc Get a specific trace\n@required {traceId: str # The unique langfuse identifier of a trace}\n@returns(200) {htmlPath: str, latency: num(double)?, totalCost: num(double)?, observations: [map], scores: [any]}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/traces/{traceId}\n@desc Delete a specific trace\n@required {traceId: str # The unique langfuse identifier of the trace to delete}\n@returns(200) {message: str}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint GET /api/public/traces\n@desc Get a list of traces\n@optional {page: int # Page number, starts at 1, limit: int # Limit of items per page. 
If you encounter API issues due to too large page sizes, try to reduce the limit., userId: str, name: str, sessionId: str, fromTimestamp: str(date-time) # Optional filter to only include traces with a trace.timestamp on or after a certain datetime (ISO 8601), toTimestamp: str(date-time) # Optional filter to only include traces with a trace.timestamp before a certain datetime (ISO 8601), orderBy: str # Format of the string [field].[asc/desc]. Fields: id, timestamp, name, userId, release, version, public, bookmarked, sessionId. Example: timestamp.asc, tags: [str] # Only traces that include all of these tags will be returned., version: str # Optional filter to only include traces with a certain version., release: str # Optional filter to only include traces with a certain release., environment: [str] # Optional filter for traces where the environment is one of the provided values., fields: str # Comma-separated list of fields to include in the response. Available field groups: 'core' (always included), 'io' (input, output, metadata), 'scores', 'observations', 'metrics'. If not specified, all fields are returned. Example: 'core,scores,metrics'. Note: Excluded 'observations' or 'scores' fields return empty arrays; excluded 'metrics' returns -1 for 'totalCost' and 'latency'., filter: str # JSON string containing an array of filter conditions. When provided, this takes precedence over query parameter filters (userId, name, sessionId, tags, version, release, environment, fromTimestamp, toTimestamp).  ## Filter Structure Each filter condition has the following structure: ```json [   {     \"type\": string,           // Required. One of: \"datetime\", \"string\", \"number\", \"stringOptions\", \"categoryOptions\", \"arrayOptions\", \"stringObject\", \"numberObject\", \"boolean\", \"null\"     \"column\": string,         // Required. Column to filter on (see available columns below)     \"operator\": string,       // Required. 
Operator based on type:                               // - datetime: \">\", \"<\", \">=\", \"<=\"                               // - string / stringObject: \"=\", \"contains\", \"does not contain\", \"starts with\", \"ends with\"                               // - number / numberObject: \"=\", \">\", \"<\", \">=\", \"<=\"                               // - stringOptions / categoryOptions: \"any of\", \"none of\"                               // - arrayOptions: \"any of\", \"none of\", \"all of\"                               // - boolean: \"=\", \"<>\"                               // - null: \"is null\", \"is not null\"     \"value\": any,             // Required (except for null type). Value to compare against. Type depends on filter type     \"key\": string             // Required only for stringObject, numberObject, and categoryOptions types when filtering on nested fields like metadata   } ] ```  ## Available Columns  ### Core Trace Fields - `id` (string) - Trace ID - `name` (string) - Trace name - `timestamp` (datetime) - Trace timestamp - `userId` (string) - User ID - `sessionId` (string) - Session ID - `environment` (string) - Environment tag - `version` (string) - Version tag - `release` (string) - Release tag - `tags` (arrayOptions) - Array of tags - `bookmarked` (boolean) - Bookmark status  ### Structured Data - `metadata` (stringObject/numberObject/categoryOptions) - Metadata key-value pairs. Use `key` parameter to filter on specific metadata keys.  
### Aggregated Metrics (from observations) These metrics are aggregated from all observations within the trace: - `latency` (number) - Latency in seconds (time from first observation start to last observation end) - `inputTokens` (number) - Total input tokens across all observations - `outputTokens` (number) - Total output tokens across all observations - `totalTokens` (number) - Total tokens (alias: `tokens`) - `inputCost` (number) - Total input cost in USD - `outputCost` (number) - Total output cost in USD - `totalCost` (number) - Total cost in USD  ### Observation Level Aggregations These fields aggregate observation levels within the trace: - `level` (string) - Highest severity level (ERROR > WARNING > DEFAULT > DEBUG) - `warningCount` (number) - Count of WARNING level observations - `errorCount` (number) - Count of ERROR level observations - `defaultCount` (number) - Count of DEFAULT level observations - `debugCount` (number) - Count of DEBUG level observations  ### Scores (requires join with scores table) - `scores_avg` (number) - Average of numeric scores (alias: `scores`) - `score_categories` (categoryOptions) - Categorical score values  ## Filter Examples ```json [   {     \"type\": \"datetime\",     \"column\": \"timestamp\",     \"operator\": \">=\",     \"value\": \"2024-01-01T00:00:00Z\"   },   {     \"type\": \"string\",     \"column\": \"userId\",     \"operator\": \"=\",     \"value\": \"user-123\"   },   {     \"type\": \"number\",     \"column\": \"totalCost\",     \"operator\": \">=\",     \"value\": 0.01   },   {     \"type\": \"arrayOptions\",     \"column\": \"tags\",     \"operator\": \"all of\",     \"value\": [\"production\", \"critical\"]   },   {     \"type\": \"stringObject\",     \"column\": \"metadata\",     \"key\": \"customer_tier\",     \"operator\": \"=\",     \"value\": \"enterprise\"   } ] ```  ## Performance Notes - Filtering on `userId`, `sessionId`, or `metadata` may enable skip indexes for better query performance - Score 
filters require a join with the scores table and may impact query performance}\n@returns(200) {data: [map], meta: map{page: int, limit: int, totalItems: int, totalPages: int}}\n@errors {400, 401, 403, 404, 405}\n\n@endpoint DELETE /api/public/traces\n@desc Delete multiple traces\n@required {traceIds: [str] # List of trace IDs to delete}\n@returns(200) {message: str}\n@errors {400, 401, 403, 404, 405}\n\n@end\n"}}