In this quickstart, we’ll create a prompt that classifies online movie reviews as either positive or negative.
Watch our quickstart demo to see how to create and test your first prompt in Asserto AI.
1. Create Your First Prompt
- Navigate to the Prompt list page (https://app.asserto.ai/app/prompt)
- Set the name to `Movie Review Sentiment`
2. Define the Prompt
In the left panel of the dashboard, you can define the messages that make up the prompt.
`system` message
Define the LLM’s role and instructions. Specify the expected output format to make the response machine-readable.
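A system message along these lines is a reasonable starting point (the wording here is just an example; `{{review}}` is the parameter placeholder you’ll define in the next step):

```
You are a sentiment classifier for movie reviews.
Read the review below and classify it as positive or negative.
Respond with JSON only, in the form {"sentiment": "positive"} or {"sentiment": "negative"}.

Review:
{{review}}
```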
3. Configure the Prompt
In the right-side panel, go to the Config tab. Here you’ll configure how the LLM runs and define the inputs and outputs.
Model Configuration
- Select a model (e.g., `gpt-4`, `claude`, etc.)
- Set temperature to `0` for consistent, deterministic outputs
- Ensure the response format is set to `JSON`
Define Prompt Parameters
- Add a parameter named `review`
- Set its type to `text` to allow multi-line input
- This corresponds to the `{{review}}` placeholder in your prompt
Define the Output Schema
- Add a result field named `sentiment`
- This matches the key in the expected LLM JSON response
- Used for validation and test automation
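With this schema in place, a conforming LLM response looks like:

```json
{"sentiment": "positive"}
```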
4. Run Prompt in Playground
On the right-side panel, switch to the Playground tab. Here you can run the prompt manually, inspect the LLM response, and make prompt edits.
Run and Inspect
- Click Submit
- Review the JSON response
- Iterate on the prompt instructions as needed to improve accuracy
5. Add a Test Case
On the right-side panel, switch to the Test Cases tab. Test Cases allow you to define fixed inputs and expected outputs that can be validated automatically. This is useful for regression testing and prompt versioning.
Add Test Case
- Click Add Case
- Fill in the name: `positive 1`
- For parameter values, set `review` to a clearly positive review (see the example below)
- Click Save
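Any unambiguously positive review will do; the text below is just an illustration:

```
One of the best films I have seen this year. The cast is superb and the
story kept me hooked from start to finish.
```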
Add Assertion
- After saving the test case, click Add Assertion
- Select property `sentiment` as the output field to check
- Set the operation to `equal`
- Set the expected text output to `positive`
- Save the assertion
6. Run Test Cases
On the right-side panel, switch to the Test Runner tab to validate your prompt automatically using the test cases you’ve defined.
Run All Tests
Click the Run All button to execute all your test cases. Each test case will run the LLM with the given input and compare the output against the expected assertion(s).
Set API Key (if required)
If this is your first time running tests or if the API key is missing/expired:
- You’ll be prompted to set the API key for your LLM provider (e.g., OpenAI, Anthropic)
- You can find this in the Settings panel if you need to update it later
7. Next Steps
You now have a working prompt with automated test coverage! Here’s what you can do next:
- Try your prompt with different models or settings (e.g., temperature, top-p)
- Add more test cases to cover edge cases and ensure consistency
- Add negative cases, ambiguous inputs, or malformed reviews
- Explore different assertion types beyond `equal`, e.g., `contains`, `regex`, or `in list` (see the examples below)
- Modify the prompt instructions to improve classification accuracy
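For instance (the values here are illustrative; the exact syntax depends on the assertion editor):

```
contains: positive
regex:    ^(positive|negative)$
in list:  positive, negative
```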
8. Dive Deeper
- Learn about prompt versioning
- Learn about deployment options to integrate prompts into your system (a sketch of what that can look like follows)
- Learn more about test cases and assertion types
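Once a prompt is deployed, integration typically boils down to an HTTP call that passes the prompt’s parameters and returns JSON matching the output schema. The Python sketch below is purely illustrative: the endpoint URL, payload shape, and auth header are hypothetical placeholders, not Asserto’s real API; see the deployment docs for the actual interface.

```python
import json
import urllib.request

# Hypothetical endpoint for a deployed "Movie Review Sentiment" prompt.
# The real URL, payload shape, and auth scheme come from the deployment docs.
ENDPOINT = "https://app.asserto.ai/api/prompts/movie-review-sentiment/run"  # placeholder
API_KEY = "YOUR_API_KEY"  # placeholder

def classify_review(review: str) -> str:
    """Send a review to the deployed prompt and return its `sentiment` field."""
    payload = json.dumps({"parameters": {"review": review}}).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
        },
    )
    with urllib.request.urlopen(request) as response:
        result = json.loads(response.read())
    # The output schema defines a single `sentiment` field.
    return result["sentiment"]

if __name__ == "__main__":
    print(classify_review("A wonderful film with great performances."))
```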