Zatomic API

Generate Prompt

You can generate a prompt by sending in a use case description to this endpoint. You can then use the generated prompt as content to create a new prompt or a specific prompt version.

A successful call returns a response with an auto-generated content property in Markdown format.

This endpoint also supports an optional criteria_id, which will ensure the prompt generation utilizes that set of criteria. To get the list of criteria and their IDs, use the scoring criteria list endpoint.

You can also add a settings object to the request that specifies which AI model and provider you want to use to generate the prompt. If settings is given in the request, then model_source and model_id are required.

The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.

If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.

For models GPT-5, GPT-5 Mini, and GPT-5 Nano, the temperature setting is invalid. If a temperature value is given when using one of those models, it will be ignored.

You can find model IDs in the model catalog and provider IDs in your Zatomic account.

Endpoint
POST https://api.zatomic.ai/v1/prompts/generate

Endpoint Request

Request Properties
use_case
string
Use case description for the prompt.
criteria_id
string, optional
The ID of the criteria to use in conjunction with the use case.
settings
object, optional

Properties for the object:

model_source
string
The source of the model.
model_id
string
The ID of the AI model to use for generating the prompt.
provider_id
string, optional
The ID of the AI provider that contains the model to use for generating the prompt.
aws_region
string, optional
The AWS region where the model resides. Required if the given provider is Amazon Bedrock.
temperature
number, optional
The temperature for the model; must be between 0 and 1. Ignored for the GPT-5, GPT-5 Mini, and GPT-5 Nano models.
Request Body
{
   "use_case": "Use case description.",
   "criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
   "settings": {
      "model_source": "zatomic|provider",
      "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv",
      "provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
      "aws_region": "us-east-1",
      "temperature": 0.75
   }
}
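The request body above can be built and sent with Python's standard library. This is a minimal sketch: the Authorization header is an assumption for illustration, since the authentication scheme is account-specific, and the IDs are the placeholder values from the example body.

```python
import json
import urllib.request

# Build the request payload. settings is optional; within it,
# provider_id, aws_region, and temperature are also optional.
payload = {
    "use_case": "Use case description.",
    "settings": {
        "model_source": "zatomic",
        "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv",
    },
}

# The Authorization header below is an assumed placeholder;
# use whatever authentication your Zatomic account requires.
req = urllib.request.Request(
    url="https://api.zatomic.ai/v1/prompts/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer YOUR_API_KEY",
    },
    method="POST",
)

# Sending the request (requires a valid API key):
# with urllib.request.urlopen(req) as resp:
#     content = json.loads(resp.read())["content"]
```

On success, the response body's content property holds the generated prompt in Markdown format.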

Endpoint Response

A successful call returns a response that contains the content for the generated prompt.

HTTP Status Codes
200 OK The prompt was generated.
400 Bad Request The use case was not provided in the request.
400 Bad Request The settings object was given but does not contain a model source.
400 Bad Request The settings object was given but does not contain a model ID.
400 Bad Request The model given in the settings is invalid.
400 Bad Request The model source in the settings is "provider" but no provider ID was given.
400 Bad Request The provider given in the settings is invalid.
400 Bad Request An Amazon Bedrock provider was given in the settings but no AWS region was provided.
400 Bad Request The temperature given in the settings is not in the range of 0 to 1.
403 Forbidden The prompt action limit has been reached for the account.
500 Internal Server Error Something went wrong on Zatomic's end.
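The 400-level checks on the settings object can be mirrored client-side before sending a request. This is a hypothetical helper, not part of the Zatomic API; the provider_is_bedrock flag stands in for a provider lookup that the server performs itself.

```python
def validate_settings(settings, provider_is_bedrock=False):
    """Return a list of validation errors (empty if settings pass).

    Mirrors the documented 400 Bad Request conditions for the
    optional settings object on the generate-prompt endpoint.
    """
    errors = []
    if "model_source" not in settings:
        errors.append("model_source is required")
    elif settings["model_source"] not in ("zatomic", "provider"):
        errors.append("model_source must be 'zatomic' or 'provider'")
    if "model_id" not in settings:
        errors.append("model_id is required")
    if settings.get("model_source") == "provider" and "provider_id" not in settings:
        errors.append("provider_id is required when model_source is 'provider'")
    if provider_is_bedrock and "aws_region" not in settings:
        errors.append("aws_region is required for Amazon Bedrock providers")
    temperature = settings.get("temperature")
    if temperature is not None and not 0 <= temperature <= 1:
        errors.append("temperature must be between 0 and 1")
    return errors
```

Note that the temperature check only enforces the 0 to 1 range; the server ignores temperature entirely for the GPT-5, GPT-5 Mini, and GPT-5 Nano models rather than rejecting it.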
Response Properties
content
string
The content of the generated prompt, in Markdown format.
Response Body
{
   "content": "You are a knowledgeable and friendly assistant..."
}