POST https://api.zatomic.ai/v1/prompts/{promptId}/versions/{versionId}/scoring
// Example
POST https://api.zatomic.ai/v1/prompts/prm_2qRzu8geIvfudcJTwP0pur4TbMJ/versions/ver_2qRzu8qzlNOMhTrini2EKCDh5r6/scoring
Calculating a version score
Calculates the score for a specific version of a prompt. A successful call returns a response that contains the scoring object.
The request requires the ID of the criteria that you want to use for scoring. To get the list of criteria with their IDs and criterion slugs, use the scoring criteria list endpoint.
You can also add a settings object to the request that specifies which AI model and provider you want to use for the scoring. If settings is included in the request, then model_source and model_id are required.
The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.
If provider_id is given and the provider is for Amazon Bedrock, then aws_region is also required and must be the region where the model is located.
You can find model IDs in the model catalog and provider IDs in your Zatomic account.
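As a sketch of how a client might call this endpoint, the example below sends a scoring request for a version using a model from your own provider. This is an illustration only, not an official SDK: the scoreVersion helper, the request/settings interfaces, and the bearer-style Authorization header are assumptions; see the Authentication section for the actual scheme.

```typescript
// Hypothetical helper for POST /v1/prompts/{promptId}/versions/{versionId}/scoring.
interface ScoringSettings {
  model_source: "zatomic" | "provider";
  model_id: string;
  provider_id?: string;
  aws_region?: string; // required when the provider is Amazon Bedrock
}

interface ScoringRequest {
  criteria_id: string;
  criterion_slugs?: string[];
  settings?: ScoringSettings;
}

async function scoreVersion(
  promptId: string,
  versionId: string,
  body: ScoringRequest,
  apiKey: string
): Promise<unknown> {
  const url = `https://api.zatomic.ai/v1/prompts/${promptId}/versions/${versionId}/scoring`;
  const response = await fetch(url, {
    method: "POST",
    headers: {
      // Assumed bearer-style auth; see the Authentication section for the real scheme.
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  if (!response.ok) {
    throw new Error(`Scoring request failed with status ${response.status}`);
  }
  return response.json(); // the scoring object
}
```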
Request Properties

| Property | Type | Description |
|---|---|---|
| criteria_id | string | The ID of the criteria to use for scoring. |
| criterion_slugs | list of strings, optional | The list of criterion slugs from the criteria. If none are given, then every criterion in the criteria will be used. |
| settings | object, optional | Specifies the AI model and provider to use for scoring. Properties for the object: model_source, model_id, provider_id, and aws_region (see above for when each is required). |
{
"criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
"criterion_slugs": ["slug_1", "slug_2", "slug_3"],
"settings": {
"model_source": "zatomic|provider",
"model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv"
"provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
"aws_region": "us-east-1"
}
}
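For a usage sketch with the hypothetical scoreVersion helper above: the minimal request needs only criteria_id, since criterion_slugs and settings are optional. The prompt, version, and criteria IDs below are reused from the examples on this page; the API key placeholder is an assumption.

```typescript
// Minimal request: only criteria_id is sent, so every criterion in the criteria
// is used and no custom model settings are applied.
const scoring = await scoreVersion(
  "prm_2qRzu8geIvfudcJTwP0pur4TbMJ",
  "ver_2qRzu8qzlNOMhTrini2EKCDh5r6",
  { criteria_id: "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC" },
  "YOUR_API_KEY" // placeholder; supply your Zatomic API key
);
console.log(scoring);
```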