POST https://api.zatomic.ai/v1/prompts/scoring/criteria/generate
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteriaId}/generate
// Example
POST https://api.zatomic.ai/v1/prompts/scoring/criteria/sca_2rjp9HFpIsiYQrAiSbZlz85r3GC/generate
Generating scoring criteria
These endpoints generate scoring criteria based on a use case, which can then be used for prompt scoring. The first endpoint requires a use_case in the request, whereas the second endpoint uses the use case already associated with the scoring criteria.
For the second endpoint, the generated criteria will be different from any criteria that already exist in the scoring criteria.
The responses for both endpoints are the same. The list of criteria returned from the first endpoint can be used as input to create scoring criteria, while the list of criteria returned from the second endpoint can be used as input to add criteria to the existing scoring criteria.
You can also add a settings object to the request that specifies which AI model and provider you want to use to generate the scoring criteria. If settings is included in the request, then model_source and model_id are required.
The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.
If provider_id is given and the provider is for Amazon Bedrock, then aws_region is required and must be the region where the model is located.
You can find model IDs in the model catalog and provider IDs in your Zatomic account.
Request Properties

| Property | Type | Description |
|---|---|---|
| use_case | string | Use case to generate scoring criteria. Only applies to the first endpoint. |
| settings | object, optional | Specifies the AI model and provider used to generate the scoring criteria. Properties for the object: model_source, model_id, provider_id, and aws_region (described above). |
{
  "use_case": "Use case for the criteria.",
  "settings": {
    "model_source": "zatomic|provider",
    "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv",
    "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
    "aws_region": "us-east-1"
  }
}
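As a concrete illustration, here is a minimal Python sketch that sends a request body like the one above to the first endpoint. It assumes the requests library and a bearer-token Authorization header (check the Authentication section for the exact scheme your account uses); the API key and use_case values are placeholders, and the model ID is the example ID from the request body above.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; assumes bearer-token auth (see the Authentication docs)

# Request body mirroring the example above; the settings block is optional.
body = {
    "use_case": "Summarize customer support tickets for a SaaS helpdesk.",  # hypothetical use case
    "settings": {
        "model_source": "zatomic",
        "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv",  # example model ID from above
    },
}

response = requests.post(
    "https://api.zatomic.ai/v1/prompts/scoring/criteria/generate",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=body,
    timeout=60,
)
response.raise_for_status()

# The response contains a criterion_set list (see Response Properties below).
for criterion in response.json()["criterion_set"]:
    print(criterion["slug"], criterion["weight"])
```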
Response Properties

| Property | Type | Description |
|---|---|---|
| criterion_set | list of criterion objects | The generated criteria. Properties for the criterion object: slug, label, description, questions, and weight (see the example response below). |
{
  "criterion_set": [
    {
      "slug": "criterion_slug",
      "label": "The label of the criterion.",
      "description": "The criterion description.",
      "questions": "The question or questions the criterion is trying to answer.",
      "weight": 0
    }
  ]
}
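For the second endpoint, a similar hedged sketch: it generates additional criteria for an existing scoring criteria set, using the placeholder criteria ID from the example URL at the top of this page and the same assumed bearer-token auth. The returned criterion_set will not duplicate criteria already in the set and can be used as input when adding criteria to it.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; assumes bearer-token auth (see the Authentication docs)
criteria_id = "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC"  # placeholder ID from the example URL above

# No body is needed: this endpoint uses the use case already associated with the criteria.
response = requests.post(
    f"https://api.zatomic.ai/v1/prompts/scoring/criteria/{criteria_id}/generate",
    headers={"Authorization": f"Bearer {API_KEY}"},
    timeout=60,
)
response.raise_for_status()

# Each generated criterion is distinct from those already in the set; its fields
# (slug, label, description, questions, weight) can be used as input when adding
# criteria to the existing scoring criteria.
for criterion in response.json()["criterion_set"]:
    print(f"{criterion['label']}: {criterion['description']}")
```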