Generating a prompt
POST https://api.zatomic.ai/v1/prompts/generate
You can generate a prompt by sending in a use case description to this endpoint. You can then use the generated prompt as content to create a new prompt or a specific prompt version.
A successful call returns a response with an auto-generated content property in Markdown format.
This endpoint also supports an optional criteria_id, which will ensure the prompt generation utilizes that set of criteria. To get the list of criteria and their IDs, use the scoring criteria list endpoint.
You can also add a settings object to the request that specifies which AI model and provider you want to use to generate the prompt. If settings is given in the request, then model_source and model_id are required.
The model_source field specifies where the model comes from. When using models from your own AI providers, use the value provider; otherwise, use zatomic.
If provider_id is given and the provider is for Amazon Bedrock, then the aws_region is required and must be the region where the model is located.
You can find model IDs in the model catalog and provider IDs in your Zatomic account.
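As a minimal sketch of the request rules above, the helper below builds the JSON body for this endpoint and enforces the documented constraints: `model_source` and `model_id` are required whenever `settings` is present, and `model_source` must be `zatomic` or `provider`. The function name is hypothetical and not part of any official Zatomic SDK.

```python
def build_generate_payload(use_case, criteria_id=None, settings=None):
    """Build the JSON body for POST /v1/prompts/generate.

    Hypothetical helper (not an official SDK function) that applies the
    rules from the docs: when `settings` is supplied, `model_source`
    and `model_id` are required.
    """
    payload = {"use_case": use_case}
    if criteria_id is not None:
        payload["criteria_id"] = criteria_id
    if settings is not None:
        if "model_source" not in settings or "model_id" not in settings:
            raise ValueError("settings requires model_source and model_id")
        if settings["model_source"] not in ("zatomic", "provider"):
            raise ValueError("model_source must be 'zatomic' or 'provider'")
        payload["settings"] = settings
    return payload
```

For example, calling it with only a use case yields `{"use_case": "..."}`, while passing a `settings` dict without `model_id` raises a `ValueError` before the request is ever sent.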
Request Properties

| Property | Type | Description |
|---|---|---|
| use_case | string | Use case description for the prompt. |
| criteria_id | string, optional | The ID of the criteria to use in conjunction with the use case. |
| settings | object, optional | Model settings for generation. Properties: `model_source` (string, required; either `zatomic` or `provider`), `model_id` (string, required), `provider_id` (string; the ID of your own AI provider), `aws_region` (string; required for Amazon Bedrock providers), `temperature` (number). |
{
"use_case": "Use case description.",
"criteria_id": "sca_2rjp9HFpIsiYQrAiSbZlz85r3GC",
"settings": {
"model_source": "zatomic|provider",
    "model_id": "aim_2y2eRWI32fN0CB7a5wE7RuvhVMv",
"provider_id": "aap_2zFxUYe3RINnOr37VQwHDFF3gK3",
"aws_region": "us-east-1",
"temperature": 0.75
}
}
Response Properties

| Property | Type | Description |
|---|---|---|
| content | string | The content of the generated prompt, in Markdown format. |
{
"content": "You are a knowledgeable and friendly assistant..."
}
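Putting it together, the sketch below constructs the POST request with Python's standard library and shows where the generated Markdown would be read from the response. The API key placeholder and the Bearer authorization scheme are assumptions for illustration; consult the Authentication section for the actual scheme.

```python
import json
import urllib.request

API_KEY = "your_api_key"  # hypothetical placeholder; see the Authentication docs

body = json.dumps({"use_case": "Use case description."}).encode("utf-8")
req = urllib.request.Request(
    "https://api.zatomic.ai/v1/prompts/generate",
    data=body,
    method="POST",
    headers={
        "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
        "Content-Type": "application/json",
    },
)

# Sending the request (commented out so the sketch runs offline):
# with urllib.request.urlopen(req) as resp:
#     content = json.load(resp)["content"]  # generated prompt, Markdown format
```

The `content` field of the response can then be passed as the prompt body when creating a prompt or a prompt version.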