Perplexity often returns short answers that lack the depth you need. This happens because the default response length is optimized for speed and conciseness. This article explains how to adjust your search settings and prompts to get longer, more detailed responses from Perplexity without losing accuracy.
You will learn the specific settings that control answer length and the exact phrasing to use in your queries. The guide covers both the web interface and mobile app steps. By the end, you will be able to consistently request and receive comprehensive answers that match your research or work requirements.
Key Takeaways: How to Get Longer Answers in Perplexity
- Focus setting (Web, Academic, Writing, Math): The Writing focus mode produces the longest, most elaborate responses by default.
- Pro search toggle: Enabling Pro search uses more computational resources and often yields more detailed answers.
- Prompt phrasing: Adding phrases like “give a detailed explanation” or “in at least 500 words” directly influences response length.
How Perplexity Determines Answer Length
Perplexity uses a language model that generates responses based on the context window and the system prompt. The default system prompt instructs the model to be concise. This means that without explicit user direction, Perplexity will produce brief summaries rather than exhaustive explanations.
The answer length is also affected by the selected focus mode. Each focus mode has a different system instruction. The Writing mode, for example, is designed for content creation and thus allows for longer, more narrative responses. The Web mode prioritizes quick, factual answers from search results.
Another factor is the Pro search toggle. When Pro search is on, Perplexity can perform deeper reasoning and retrieve more sources, which naturally leads to longer and more thorough responses. The free tier has a token limit per response that restricts length, but Pro subscribers have a higher token cap.
Understanding Token Limits and Context Windows
Tokens are the basic units of text that the language model processes. A token is roughly four characters or 0.75 words. The free version of Perplexity has a response token limit of approximately 1500 tokens, which translates to about 1100 words. Pro subscribers can receive up to 3000 tokens per response, or around 2200 words.
The context window is the total amount of text the model can consider at once, including your question and the response. Perplexity uses a context window of 8,192 tokens for free users and 32,768 tokens for Pro users. A larger context window allows the model to generate longer responses without cutting off.
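As a rough sanity check, the token figures above can be converted to word counts with the ~0.75 words-per-token rule of thumb. This is a minimal sketch of that arithmetic; the ratio is an approximation, and actual tokenization varies by model and by text:

```python
# Rough conversion between tokens and words using the common heuristic
# of ~0.75 words per token (~4 characters per token). These are
# approximations only; real tokenizers vary by model.

def tokens_to_words(tokens: int, words_per_token: float = 0.75) -> int:
    """Estimate the word count a given token budget allows."""
    return round(tokens * words_per_token)

def words_to_tokens(words: int, words_per_token: float = 0.75) -> int:
    """Estimate how many tokens a target word count will consume."""
    return round(words / words_per_token)

# The limits quoted above, expressed as approximate word counts:
free_limit_words = tokens_to_words(1500)   # ~1125 words
pro_limit_words = tokens_to_words(3000)    # ~2250 words
```

This also works in reverse: if you plan to ask for a 1,000-word answer, budget roughly 1,333 tokens for the response, which fits comfortably within either tier's limit.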
Steps to Request Longer Responses in Perplexity
Follow these steps to increase the length of answers you receive. The methods work on both the web app and the mobile app.
- Switch to Writing Focus Mode: Open Perplexity and locate the focus mode selector below the search bar. Click or tap the current focus label, which is usually set to Web by default. Select Writing from the list. This mode is optimized for generating longer, more detailed text. It does not prioritize search results as heavily, so the model has more freedom to expand on the topic.
- Enable Pro Search: If you have a Pro subscription, toggle the Pro search switch on. It is located next to the search bar on the web app and above the keyboard on the mobile app. Pro search uses a more powerful model and allocates more tokens to the response. This results in answers that are often 50% to 100% longer than on the free tier.
- Use Explicit Length Instructions in Your Prompt: Type your question and add a specific length request. Examples include: “Explain quantum computing in detail, at least 500 words,” “Write a comprehensive guide on setting up a VPN, covering all steps,” or “Describe the history of the Roman Empire in 1000 words.” The model follows these instructions reliably. Avoid vague requests like “make it longer” because the model may interpret that inconsistently.
- Ask for a Structured Breakdown: Request that the answer be organized into sections. For instance, say: “Provide a detailed answer broken into sections: Introduction, Key Concepts, Examples, and Conclusion.” Structured prompts encourage the model to expand each part, naturally increasing the total length. This also improves the readability of the response.
- Follow Up with a Continuation Request: If the initial response is still too short, type “Continue” or “Expand on the third point in more detail.” Perplexity treats this as a new query within the same conversation, and the model will generate additional text. You can repeat this multiple times to build a very long answer across several exchanges.
Common Mistakes and Limitations When Requesting Longer Answers
Even with the correct settings, you may encounter responses that are shorter than expected. Understanding these limitations helps you adjust your approach.
Perplexity Still Returns Short Answers After Switching to Writing Mode
If you switch to Writing mode but still get brief responses, the issue is likely your prompt. The Writing mode does not automatically produce long answers for every query. You must explicitly ask for a detailed response. Add a phrase like “in detail” or “with examples” to your question. Also verify that Pro search is enabled if you are a subscriber, as the free tier still limits output length.
The Response Gets Cut Off Mid-Sentence
A cut-off response indicates that the model reached the token limit for your plan. Free users have a lower limit, so long requests may be truncated. Upgrade to Pro if you consistently need very long answers. Alternatively, break your question into smaller parts and ask for each part separately. Then combine the responses manually.
Longer Answers Are Less Accurate or Include Hallucinations
Forcing the model to generate more text can sometimes lead to less accurate information. The model may fabricate details to fill space. To reduce this risk, ask for answers that cite sources. Use the phrase “with citations from reliable sources” in your prompt. Perplexity will then include references for the claims, which you can verify. This keeps the long answer grounded in facts.
Mobile App Does Not Show the Writing Mode Option
The mobile app may have a slightly different interface. If you cannot find the focus mode selector, tap the search bar to bring up the keyboard. Look for a small icon that looks like a gear or a slider above the keyboard. Tap it to reveal the focus mode options. If the option is still missing, update the app to the latest version from your device’s app store.
Perplexity Free vs Pro: Response Length and Capabilities
| Item | Free Tier | Pro Tier |
|---|---|---|
| Response token limit | ~1,500 tokens (~1,100 words) | ~3,000 tokens (~2,200 words) |
| Context window size | 8,192 tokens | 32,768 tokens |
| Pro search toggle | Not available | Available, increases response length |
| Focus modes | All modes available | All modes available |
| Best method for length | Writing mode + explicit prompt | Pro search + Writing mode + explicit prompt |
Free users can still get longer answers by using the Writing mode and crafting detailed prompts. The maximum length is capped at around 1100 words per response. Pro users can exceed 2000 words per response and can use the Pro search toggle for even more depth. If you need very long answers regularly, the Pro tier is the practical choice.
Now you can control the length of Perplexity answers by switching to Writing mode, enabling Pro search, and using explicit length instructions in your prompts. Start with the Writing mode and a clear request for detail. If you need more, follow up with a continuation command. For the longest responses, consider upgrading to Pro and using the Pro search toggle. This approach gives you consistent, comprehensive answers tailored to your needs.