POST https://api.zatomic.ai/v1/prompts/scoring
Calculate Prompt Score
NOTE: This is the endpoint for scoring a prompt stored outside of Zatomic. To score a prompt stored within Zatomic, see the Calculate Version Score endpoint.
Calculates the score for a prompt.
The request requires the prompt content and accepts an optional use_case. This endpoint uses the default system criteria to perform the scoring analysis.
You can also add a settings object to the request to specify which AI model and provider to use for the scoring. If settings is given in the request, model_source and model_id are required.
The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic. When model_source is provider, a provider_id is also required.
If provider_id is given and the provider is for Amazon Bedrock, then aws_region is also required and must be the region where the model is located.
You can find model IDs in the model catalog and provider IDs in your Zatomic account.
Endpoint Request
Request Properties

| Property | Type | Description |
|---|---|---|
| content | string | The prompt content. |
| use_case | string, optional | The use case for the prompt. Recommended to improve analysis. |
| settings | object, optional | Scoring settings. Properties: model_source (required), model_id (required), provider_id (required when model_source is provider), aws_region (required for Amazon Bedrock providers). |
```json
{
  "content": "The prompt content.",
  "use_case": "Use case for the prompt.",
  "settings": {
    "model_source": "zatomic|provider",
    "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv",
    "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
    "aws_region": "us-east-1"
  }
}
```
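For reference, here is a minimal sketch of calling this endpoint from Python with the requests library. The Bearer-token Authorization header and the placeholder API key are assumptions; see the Authentication section for the exact scheme your account uses.

```python
import requests

API_KEY = "your_api_key"  # assumption: replace with your real key (see Authentication)

payload = {
    "content": "You are a helpful assistant that summarizes support tickets.",
    "use_case": "Customer support ticket summarization",
    # settings is optional; omit it to score with Zatomic's default model.
    "settings": {
        "model_source": "zatomic",
        "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv",
    },
}

response = requests.post(
    "https://api.zatomic.ai/v1/prompts/scoring",
    headers={"Authorization": f"Bearer {API_KEY}"},  # assumed header format
    json=payload,
)
response.raise_for_status()
scoring = response.json()  # the scoring object described under Endpoint Response
```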
Endpoint Response
A successful call returns a response that contains the scoring object.
HTTP Status Codes

| Code | Status | Description |
|---|---|---|
| 200 | OK | The prompt was scored. |
| 400 | Bad Request | The content was not provided in the request. |
| 400 | Bad Request | The settings object was given but does not contain a model source. |
| 400 | Bad Request | The settings object was given but does not contain a model ID. |
| 400 | Bad Request | The model given in the settings is invalid. |
| 400 | Bad Request | The model source in the settings is provider but no provider ID was given. |
| 400 | Bad Request | The provider given in the settings is invalid. |
| 400 | Bad Request | An Amazon Bedrock provider was given in the settings but no AWS region was provided. |
| 403 | Forbidden | The prompt action limit has been reached for the account. |
| 500 | Internal Server Error | Something went wrong on Zatomic's end. |
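Continuing the Python sketch above, one hypothetical way to branch on these status codes (in place of the raise_for_status() call used earlier); the exact shape of the error body is not documented here, so only the status code and raw text are inspected.

```python
if response.status_code == 200:
    scoring = response.json()
elif response.status_code == 400:
    # Missing content or an invalid/incomplete settings object.
    print("Bad request:", response.text)
elif response.status_code == 403:
    # The account's prompt action limit has been reached.
    print("Forbidden:", response.text)
else:
    # 500 or any other unexpected status.
    print("Error", response.status_code, response.text)
```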