Quickstart – Test Structured LLM Prompt Output with Asserto AI
Create and test a prompt in 5 minutes
In this quickstart we will create a prompt for classifying online reviews of movies as either positive or negative.
1. Create Your First Prompt
- Navigate to the Prompt list page (https://app.asserto.ai/app/prompt)
- Set the name to `Movie Review Sentiment`
You will see the Prompt Development Dashboard.
2. Define the Prompt
In the left panel of the dashboard, you can define the messages that make up the prompt.
Step 1: `system` message
Define the LLM’s role and instructions. Specify the expected output format to make the response machine-readable.
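For example, a system message along these lines fits the output format used later in this guide (the exact wording is illustrative, not prescribed):

```text
You are a sentiment classifier for movie reviews.
Classify the review provided by the user as either positive or negative.
Respond with JSON only, in the form {"sentiment": "positive"} or {"sentiment": "negative"}.
```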
Step 2: `user` message
Write a template for the user input using a placeholder for the review text.
Templates use Mustache syntax: wrap parameter names in double curly braces (`{{ }}`).
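A minimal user message template, assuming the parameter is named `review` as configured in the next section, could look like this:

```text
Classify the sentiment of the following movie review:

{{review}}
```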
3. Configure the Prompt
In the right-side panel, go to the Config tab. Here you’ll configure how the LLM runs and define the inputs and outputs.
Step 1: Model Configuration
- Select a model (e.g., `gpt-4`, `claude`)
- Set the temperature to `0` for consistent, deterministic outputs
- Ensure the response format is set to `JSON`
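Conceptually, these three settings amount to something like the following (a summary for orientation only, not Asserto’s actual configuration format):

```json
{
  "model": "gpt-4",
  "temperature": 0,
  "response_format": "json"
}
```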
Step 2: Define Prompt Parameters
- Add a parameter named `review`
- Set its type to `text` to allow multi-line input
- This corresponds to the `{{review}}` placeholder in your prompt
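To see how the parameter and template connect: with the sketch template from step 2 above and `review` set to "Great movie!", the rendered user message sent to the LLM would be:

```text
Classify the sentiment of the following movie review:

Great movie!
```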
Step 3: Define the Output Schema
- Add a result field named `sentiment`
- This matches the key in the expected JSON response from the LLM (see the example below)
- It is used for validation and test automation
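With this schema, a well-formed model response looks like the following (or the same with "negative"):

```json
{"sentiment": "positive"}
```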
Step 4: Try a Negative Input
You can try a negative input in the playground later; any clearly negative review works.
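For example (the text is illustrative, not a prescribed sample):

```text
The plot was predictable, the pacing dragged, and the acting felt wooden. I nearly walked out halfway through.
```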
4. Run Prompt in Playground
On the right-side panel, switch to the Playground tab.
Here you can run the prompt manually, inspect the LLM response, and make prompt edits.
Step 1: Add Sample Input
Provide a sample value for the `review` parameter. For this walkthrough, use a positive review.
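For example (again, any clearly positive review will do):

```text
Absolutely loved this film. The performances were superb and the ending caught me completely off guard.
```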
Step 2: Run and Inspect
- Click Submit
- Review the JSON output
- Iterate on the prompt instructions if necessary to improve accuracy
Step 3: Repeat with Another Input
Try different inputs to validate how well the prompt handles variations.
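Ambiguous or mixed reviews are a good stress test; for example (illustrative):

```text
The cinematography was gorgeous, but the story never went anywhere and I left feeling disappointed.
```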
5. Add a Test Case
On the right-side panel, switch to the Test Cases tab.
Test Cases allow you to define fixed inputs and expected outputs that can be validated automatically. This is useful for regression testing and prompt versioning.
Step 1: Add Test Case
- Click Add Case
- Set the name to `positive 1`
- For the parameter values, set `review` to a positive review (you can reuse the sample from the playground)
- Click Save
Step 2: Add Assertion
- After saving the test case, click Add Assertion
- Select the property `sentiment` as the output field to check
- Set the operation to `equal`
- Set the expected text output to `positive`
- Save the assertion
You now have a fully defined test case that will automatically validate that the prompt produces the expected output for the given input.
Repeat the steps to add a negative test case as well.
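As a sketch, the negative case mirrors the positive one. The structure below is only a summary of the fields you fill in, not the tool’s storage format, and the review text is illustrative:

```json
{
  "name": "negative 1",
  "parameters": {
    "review": "The plot was predictable, the pacing dragged, and the acting felt wooden."
  },
  "assertions": [
    { "property": "sentiment", "operation": "equal", "expected": "negative" }
  ]
}
```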
6. Run Test Cases
On the right-side panel, switch to the Test Runner tab to validate your prompt automatically using the test cases you’ve defined.
Step 1: Run All Tests
Click the Run All button to execute all your test cases.
Each test case will run the LLM with the given input and compare the output against the expected assertion(s).
Step 2: Set API Key (if required)
If this is your first time running tests or if the API key is missing/expired:
- You’ll be prompted to set the API key for your LLM provider (e.g., OpenAI, Anthropic)
- You can find this in the Settings panel if you need to update it later
Step 3: Review Results
- The test run session will show the pass/fail status of each test
- For failures, you can click to inspect the actual vs expected outputs
- Use this feedback to iterate on your prompt or test assertions
7. Next Steps
You now have a working prompt with automated test coverage! Here’s what you can do next:
🔁 Iterate and Expand
- Try your prompt with different models or settings (e.g., temperature, top-p)
- Add more test cases to cover edge cases and ensure consistency
- Modify the prompt instructions to improve classification accuracy
🧪 Strengthen Test Coverage
- Add negative cases, ambiguous inputs, or malformed reviews
- Explore different assertion types beyond `equal`, e.g., `contains`, `regex`, `in list` (examples below)
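For instance (operation names from the list above, expected values illustrative), a single assertion can accept either label:

```text
property: sentiment   operation: regex     expected: ^(positive|negative)$
property: sentiment   operation: in list   expected: positive, negative
```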
🚀 Deploy and Integrate
- Learn how to version your prompts to track improvements over time
- Explore deployment options to integrate your prompts via API or SDK
- Use environments to manage separate dev/staging/prod configurations
8. Dive Deeper
- Learn about prompt versioning
- Learn about deployment options to integrate the prompts in your system
- Learn more about test cases and assertion types