When your prompt is ready for production, you can integrate it into your system. Asserto functions as a CMS for your prompts, separating prompt content from your application code. Once integrated, you can push prompt updates without redeploying your system, keeping your production workflow lean and flexible.
Once you have the prompt messages and configuration, you can render them and send the request through the OpenAI API. Here's an example in Python using chevron, a Mustache templating library:
```python
from openai import OpenAI

import chevron  # Mustache templating: pip install chevron

# Set OPENAI_API_KEY. See https://platform.openai.com/docs/quickstart
client = OpenAI()

# TODO: Replace with values
data = {
    "review": None,
}

messages = [
    {
        "role": "system",
        "content": {
            "type": "text",
            "text": (
                "You are a sentiment classifier.\n"
                "Classify the following movie review as 'positive' or 'negative'.\n"
                "Respond in JSON format:\n"
                "{\"sentiment\": \"<result>\"}\n"
            ),
        },
    },
    {
        "role": "user",
        "content": {"type": "text", "text": "Review: {{review}}"},
    },
]

# Render each message's Mustache template with the runtime values.
rendered_messages = []
for msg in messages:
    msg["content"] = chevron.render(msg["content"]["text"], data)
    rendered_messages.append(msg)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=rendered_messages,
    temperature=0.1,
)
```
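The system prompt above instructs the model to reply as `{"sentiment": "<result>"}`. A minimal sketch of parsing that reply follows; `parse_sentiment` is a hypothetical helper, not part of any SDK, and it also strips the Markdown code fences that models sometimes wrap around JSON output:

```python
import json


def parse_sentiment(raw: str) -> str:
    # Hypothetical helper: the prompt asks for {"sentiment": "<result>"}.
    # Strip optional ```json fences before parsing (a common model quirk).
    cleaned = raw.strip().removeprefix("```json").removesuffix("```").strip()
    return json.loads(cleaned)["sentiment"]
```

You would call it on the completion text, e.g. `parse_sentiment(response.choices[0].message.content)`.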
By combining this with the prompt API, your system can stay in sync with content changes without code changes or redeployments.
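One way to realize this in practice is to re-fetch the prompt periodically rather than baking it into the deployment. The sketch below is illustrative only: the fetch function and TTL are assumptions, not part of any specific prompt API.

```python
import time
from typing import Callable, Optional


class PromptCache:
    """Illustrative sketch: cache a fetched prompt for a short TTL so that
    content changes pushed to the CMS propagate within seconds, without
    redeploying the application. The fetch callable is an assumption
    supplied by the caller (e.g. a wrapper around the prompt API)."""

    def __init__(self, fetch: Callable[[], dict], ttl_seconds: float = 30.0):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._cached: Optional[dict] = None
        self._fetched_at = 0.0

    def get(self) -> dict:
        # Refetch only when the cached copy is missing or older than the TTL.
        now = time.monotonic()
        if self._cached is None or now - self._fetched_at > self._ttl:
            self._cached = self._fetch()
            self._fetched_at = now
        return self._cached
```

At request time, `cache.get()` returns the latest prompt messages, and tuning the TTL trades freshness against load on the prompt API.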