When you make API calls to Perplexity, you may need to set the behavior of the AI model before it processes user messages. A system prompt defines the role, tone, and constraints for the assistant. Without a correctly placed system prompt, the model may respond in a default manner that does not match your application’s needs. This article explains how to structure your API request to include system prompts and shows the correct JSON format for the Perplexity API.
Key Takeaways: Passing System Prompts to the Perplexity API
- messages array with role “system”: Place your system prompt as the first object in the messages array with role set to “system”.
- role order matters: The system message must come before any user or assistant messages in the array.
- API endpoint POST /chat/completions: Send the JSON payload to this endpoint with the model parameter set, for example “sonar-pro”.
What Is a System Prompt and Why Pass It in API Calls
A system prompt is a message that sets the context and rules for the AI assistant. In Perplexity’s API, you provide this prompt as a message object with the role system. The model reads this instruction before it sees any user input. For example, you can instruct the assistant to answer only in French, to act as a technical support agent, or to cite sources from a specific date range.
Passing a system prompt is necessary when you want consistent behavior across many API calls. Without it, the model uses a default persona that may not match your use case. The Perplexity API follows the same chat completion format as OpenAI, so the messages array structure is familiar to most developers.
Steps to Structure the API Request With a System Prompt
Follow these steps to include a system prompt in a Perplexity API call. The example uses the sonar-pro model, but the same structure works for other models available through the API.
- Set the request URL and method: Use the endpoint `https://api.perplexity.ai/chat/completions` with the HTTP POST method. Include your API key in the `Authorization` header as `Bearer YOUR_API_KEY`.
- Build the JSON body with the model parameter: Add the `model` key and set it to the model name, for example `"sonar-pro"`. This tells Perplexity which model to use for the request.
- Create the messages array: Add a `messages` array. The first object in this array must have `"role": "system"` and a `content` field containing your prompt text.
- Add the user message after the system message: Append a second object with `"role": "user"` and a `content` field for the user's query. The order of objects in the array matters: system first, then user, then any assistant messages if you are continuing a conversation.
- Send the request and parse the response: Make the POST request. The response contains a `choices` array with an object that has a `message` field. The assistant's reply is in `message.content`.
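The steps above can be sketched in Python using only the standard library. This is a minimal sketch, not an official client: the endpoint, headers, and payload shape follow the article, and `YOUR_API_KEY` is a placeholder you must replace with your own key.

```python
import json
import urllib.request

API_URL = "https://api.perplexity.ai/chat/completions"


def build_payload(system_prompt: str, user_query: str, model: str = "sonar-pro") -> dict:
    """Build the chat-completion body: system message first, then the user message."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_query},
        ],
    }


def send_request(payload: dict, api_key: str) -> dict:
    """POST the JSON payload with a Bearer token and return the parsed response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


payload = build_payload(
    "You are a helpful assistant. Always answer in plain English.",
    "What is the capital of France?",
)
# Uncomment with a real key to send the request:
# reply = send_request(payload, "YOUR_API_KEY")
# print(reply["choices"][0]["message"]["content"])
```

Keeping payload construction in its own function makes the system-first ordering easy to enforce and test independently of the network call.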
Example JSON Payload
Below is a complete JSON payload for a Perplexity API call with a system prompt. The system prompt tells the model to answer in plain English and to provide citations when possible.
```json
{
  "model": "sonar-pro",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful assistant. Always answer in plain English and include citations from reliable sources when available."
    },
    {
      "role": "user",
      "content": "What is the capital of France?"
    }
  ]
}
```
Verifying the System Prompt Was Applied
After you receive the response, check that the assistant’s behavior matches your system prompt. In the example above, the model should answer “Paris” and include a citation from a geography or government source. If the answer does not follow the prompt, review the order of the messages array. A common mistake is placing the user message before the system message.
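Extracting the reply for this check is a one-liner once you know the response shape. The sketch below uses a hand-written sample response whose structure follows the article's description (`choices[0].message.content`); the field values are illustrative, not real API output.

```python
# Illustrative response shape as described in the article; values are made up.
sample_response = {
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "The capital of France is Paris.",
            }
        }
    ]
}


def extract_reply(response: dict) -> str:
    """Return the assistant's text from the first choice."""
    return response["choices"][0]["message"]["content"]


reply = extract_reply(sample_response)
print(reply)
```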
Common Mistakes When Passing System Prompts
The API Returns a 400 Bad Request Error
This error often means the JSON payload is malformed or the messages array is empty. Check that the messages array contains at least one object and that each object has both role and content fields. Also confirm that the model field is present and spelled correctly.
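A quick client-side check can catch these problems before the request is sent. This is a minimal sketch that validates only the requirements named above (a `model` field, a non-empty `messages` array, and `role`/`content` on every message); the API may reject payloads for other reasons too.

```python
def validate_payload(payload: dict) -> list[str]:
    """Return a list of problems that commonly cause a 400 Bad Request."""
    problems = []
    if "model" not in payload:
        problems.append("missing 'model' field")
    messages = payload.get("messages")
    if not messages:
        problems.append("'messages' array is missing or empty")
    else:
        for i, msg in enumerate(messages):
            if "role" not in msg or "content" not in msg:
                problems.append(f"message {i} lacks 'role' or 'content'")
    return problems


# A well-formed payload produces no problems:
ok = validate_payload({
    "model": "sonar-pro",
    "messages": [
        {"role": "system", "content": "Be brief."},
        {"role": "user", "content": "Hi"},
    ],
})
assert ok == []
```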
The System Prompt Has No Effect on the Output
If the model ignores your system prompt, verify that the system message is the first item in the messages array. If you place the user message first, the model may treat the system prompt as a secondary instruction or ignore it entirely. Also ensure the system prompt is not too long — the API accepts prompts up to the model’s context limit, which is typically 200,000 tokens for sonar-pro.
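If you build the messages array dynamically, a small helper can guarantee the ordering. This sketch strips any existing system messages and prepends a single system message, so the first item always has the system role regardless of how the conversation history was assembled.

```python
def ensure_system_first(messages: list[dict], system_prompt: str) -> list[dict]:
    """Return a copy of messages with exactly one system message in front."""
    rest = [m for m in messages if m.get("role") != "system"]
    return [{"role": "system", "content": system_prompt}] + rest


# Even if the caller put the user message first, the result is system-first:
fixed = ensure_system_first(
    [{"role": "user", "content": "Hi"}],
    "Be brief.",
)
assert fixed[0]["role"] == "system"
```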
System Prompt Overrides User Intent
A system prompt that is too restrictive can prevent the model from answering user questions correctly. For example, if you set a system prompt that says “Only answer in one word,” the model may fail to provide useful responses for complex queries. Keep system prompts clear and aligned with the expected user interactions.
Perplexity API System Prompt vs No System Prompt: Behavior Comparison
| Item | With System Prompt | Without System Prompt |
|---|---|---|
| Response style | Follows the tone and rules you define | Uses default assistant persona |
| Citation behavior | Can be forced or disabled via prompt | May cite sources inconsistently |
| Language control | You can set a specific language | Responds in the language of the user query |
| Role enforcement | Works well for domain-specific tasks | No role enforcement |
The table above shows the key differences between API calls that include a system prompt and those that do not. For most production applications, a system prompt is recommended to ensure consistent and predictable model behavior.
You can now pass system prompts in Perplexity API calls by placing a message object with role system as the first item in the messages array. Test your payload with a simple user query before deploying to production. For advanced use cases, try adding parameters like temperature or max_tokens in the same JSON body to further control the model output. Remember that the system prompt is sent with every request and may be retained in request logs, so avoid including sensitive data such as passwords or API keys in the prompt content.