Zatomic API

Calculating a prompt score

NOTE: This is the endpoint for scoring a prompt stored outside of Zatomic. For the endpoint that scores a prompt stored within Zatomic, see this endpoint.

Calculates the score for a prompt. A successful call returns a response that contains the scoring object.

The request requires the prompt's content; the use_case is optional. This endpoint uses the default system criteria to perform the scoring analysis.

You can also add a settings object to the request to specify which AI model and provider to use for the scoring. If settings is included in the request, then model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.
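The settings rules above can be sketched as a small client-side validation helper. This is a hypothetical illustration, not part of any Zatomic SDK; the provider_is_bedrock flag is an assumption standing in for information you would look up in your Zatomic account:

```python
def validate_settings(settings, provider_is_bedrock=False):
    """Client-side check of the settings rules described above.

    provider_is_bedrock is a hypothetical flag for illustration; whether a
    provider is Amazon Bedrock is known from your Zatomic account.
    Raises ValueError when a required field is missing.
    """
    if settings is None:
        return  # settings is optional; Zatomic's defaults are used
    # model_source and model_id are required whenever settings is present
    if settings.get("model_source") not in ("zatomic", "provider"):
        raise ValueError("model_source must be 'zatomic' or 'provider'")
    if not settings.get("model_id"):
        raise ValueError("model_id is required when settings is given")
    # When the given provider is Amazon Bedrock, aws_region is also required
    if settings.get("provider_id") and provider_is_bedrock and not settings.get("aws_region"):
        raise ValueError("aws_region is required for Amazon Bedrock providers")
```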

Endpoint
POST https://api.zatomic.ai/v1/prompts/scoring

Request Properties
content
string
The prompt content.
use_case
string, optional
The use case for the prompt. Recommended to improve analysis.
settings
object, optional

Properties for the object:

model_source
string
The source of the model: either zatomic or provider.
model_id
string
The ID of the AI model to use for the scoring.
provider_id
string, optional
The ID of the AI provider that contains the model to use for scoring.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
Request Body
{
   "content": "The prompt content.",
   "use_case": "Use case for the prompt.",
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv",
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1"
   }
}
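A minimal sketch of calling this endpoint from Python, using only the standard library. The bearer-token Authorization header and the API key placeholder are assumptions for illustration; check your Zatomic account for the actual authentication scheme:

```python
import json
import urllib.request

API_KEY = "your-api-key"  # hypothetical placeholder; use your real key

payload = {
    "content": "Summarize the user's support ticket in two sentences.",
    "use_case": "Customer support summarization",
    "settings": {
        "model_source": "provider",
        "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv",
        "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
        "aws_region": "us-east-1",  # required here: the provider is Amazon Bedrock
    },
}

req = urllib.request.Request(
    "https://api.zatomic.ai/v1/prompts/scoring",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
    },
    method="POST",
)

# Send with urllib.request.urlopen(req); a successful call returns a JSON
# response containing the scoring object.
```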