How to Use Perplexity API With Node.js Client

You want to integrate Perplexity’s search and AI capabilities into your Node.js application. The Perplexity API provides programmatic access to the same large language models that power the Perplexity web app, enabling you to build custom search tools, chatbots, and automation workflows. This article explains how to set up a Node.js client, authenticate your requests, and send queries to the API endpoints.

Key Takeaways: Setting Up the Perplexity API in Node.js

  • API Key from perplexity.ai/settings: Required for all requests; treat it like a password.
  • POST to https://api.perplexity.ai/chat/completions: Main endpoint for sending queries and receiving responses.
  • Model parameter “sonar-pro”: Balances speed and accuracy for most use cases.


Overview of the Perplexity API for Node.js

The Perplexity API is a RESTful interface that accepts HTTP requests and returns JSON responses. It uses the same chat completion format as OpenAI, so it will feel familiar to developers who have worked with similar APIs. The primary endpoint is https://api.perplexity.ai/chat/completions, and it supports multiple models, including sonar-pro, sonar-reasoning-pro, and sonar-deep-research.

Before you can send requests, you need a valid API key. You generate this key from your Perplexity account settings. The key must be included in the Authorization header of every request. The API currently supports streaming responses, system prompts, and adjustable parameters such as temperature and max tokens.
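As a concrete sketch of the request shape described above, the following helper assembles the URL, Bearer auth header, and OpenAI-style JSON body (the function name buildChatRequest is illustrative, not part of any SDK):

```javascript
// Assemble the pieces of a chat completion request: the endpoint URL,
// the Bearer Authorization header, and the OpenAI-style JSON body.
function buildChatRequest(prompt, apiKey, model = 'sonar-pro') {
  return {
    url: 'https://api.perplexity.ai/chat/completions',
    options: {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        model,
        messages: [{ role: 'user', content: prompt }]
      })
    }
  };
}
```

Passing the result to fetch(url, options), or the equivalent axios call, sends the query.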

Prerequisites

You need Node.js version 18 or later installed on your machine. You also need a Perplexity Pro subscription or an API-only plan. The axios library is recommended for making HTTP requests, but you can also use the fetch API built into Node 18+.

Steps to Create a Node.js Client for Perplexity API

  1. Obtain your API key
    Log in to Perplexity at https://www.perplexity.ai. Navigate to Settings > API. Click “Generate new key” and copy the key. Store it in an environment variable named PERPLEXITY_API_KEY for security.
  2. Initialize your Node.js project
    Run npm init -y in your project folder. Install axios and dotenv by running npm install axios dotenv; the client below uses dotenv to load the API key from a .env file.
  3. Create the main script file
    Create a file named perplexity-client.js and open it in your editor.
  4. Write the request function
    Add the following code to the file:
    // Load the API key from a .env file (requires the dotenv package).
    const axios = require('axios');
    require('dotenv').config();
    
    const PERPLEXITY_API_KEY = process.env.PERPLEXITY_API_KEY;
    
    // Send one chat completion request and return the answer text.
    async function askPerplexity(prompt, model = 'sonar-pro') {
      const response = await axios.post(
        'https://api.perplexity.ai/chat/completions',
        {
          model: model,
          messages: [
            { role: 'system', content: 'You are a helpful assistant.' },
            { role: 'user', content: prompt }
          ],
          max_tokens: 1024,
          temperature: 0.7
        },
        {
          headers: {
            'Authorization': `Bearer ${PERPLEXITY_API_KEY}`,
            'Content-Type': 'application/json'
          }
        }
      );
      return response.data.choices[0].message.content;
    }
    
    module.exports = { askPerplexity };
    
  5. Call the function from your application
    Create a file named index.js with the following content:
    const { askPerplexity } = require('./perplexity-client');
    
    async function main() {
      const answer = await askPerplexity('What are the latest developments in quantum computing?');
      console.log(answer);
    }
    
    main();
    
  6. Run the script
    Execute node index.js in your terminal. You should see a text response from Perplexity printed to the console.

Handling Streaming Responses

To receive responses in real time as tokens are generated, set the stream parameter to true in the request body. The response will be a Server-Sent Events (SSE) stream. Use the eventsource-parser npm package to parse it; note that newer major versions of the package changed the createParser signature to take an options object, so pin a 1.x release for the callback style shown here:

const { createParser } = require('eventsource-parser'); // 1.x callback-style API

async function askPerplexityStream(prompt) {
  const response = await fetch('https://api.perplexity.ai/chat/completions', {
    method: 'POST',
    headers: {
      'Authorization': `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      model: 'sonar-pro',
      messages: [{ role: 'user', content: prompt }],
      stream: true
    })
  });

  // Each SSE event carries a JSON chunk; the stream ends with a literal [DONE].
  const parser = createParser((event) => {
    if (event.type === 'event' && event.data !== '[DONE]') {
      const parsed = JSON.parse(event.data);
      const text = parsed.choices[0]?.delta?.content || '';
      process.stdout.write(text);
    }
  });

  // response.body is a web ReadableStream, which is async-iterable in Node 18+.
  for await (const chunk of response.body) {
    parser.feed(new TextDecoder().decode(chunk));
  }
}


Common Mistakes and Limitations

API Key Not Set as Environment Variable

If you hardcode the API key in your source code, it may be exposed in version control. Always store the key in a .env file and use dotenv to load it. Add .env to your .gitignore file.
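A small guard can make the missing-key case fail fast at startup instead of surfacing later as a 401 from the API (the helper name requireApiKey is illustrative):

```javascript
// Read the API key from the environment, failing loudly if it is absent.
// The variable name PERPLEXITY_API_KEY matches the setup steps above.
function requireApiKey(env = process.env) {
  const key = env.PERPLEXITY_API_KEY;
  if (!key) {
    throw new Error('PERPLEXITY_API_KEY is not set; add it to your .env file');
  }
  return key;
}
```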

Incorrect Model Name Causes 400 Error

The model names are case-sensitive and must match the exact strings listed in the Perplexity API documentation. Common valid names are sonar-pro, sonar-reasoning-pro, and sonar-deep-research. Using an invalid name returns a 400 Bad Request error.
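One way to catch this before sending a request is to validate the model name against the names listed above (a sketch only; consult the Perplexity API documentation for the current, complete model list):

```javascript
// The three model names this article uses; an unknown or wrongly-cased
// name would be rejected by the API with 400 Bad Request.
const KNOWN_MODELS = ['sonar-pro', 'sonar-reasoning-pro', 'sonar-deep-research'];

function assertKnownModel(model) {
  if (!KNOWN_MODELS.includes(model)) {
    throw new Error(`Unknown model "${model}"; expected one of: ${KNOWN_MODELS.join(', ')}`);
  }
  return model;
}
```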

Rate Limits Exceeded

Free-tier accounts have a limit of 10 requests per minute. Pro accounts allow up to 100 requests per minute. If you exceed the limit, the API returns a 429 Too Many Requests error. Implement exponential backoff in your client to handle this gracefully.
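A minimal backoff wrapper might look like the following sketch; the retry count and delays are illustrative, and the status extraction assumes an axios-style error object carrying error.response.status:

```javascript
// Retry an async request function on 429 responses, doubling the delay
// after each failed attempt (500 ms, 1 s, 2 s, ...). Any other error,
// or exhausting the retries, is rethrown to the caller.
async function withBackoff(fn, { retries = 3, baseDelayMs = 500 } = {}) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status = err.status ?? err.response?.status;
      if (status !== 429 || attempt >= retries) throw err;
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

Wrapping a call as withBackoff(() => askPerplexity(prompt)) retries only on rate-limit errors.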

Max Tokens Too Low for Long Responses

If you set max_tokens too low, the response will be cut off. The maximum value depends on the model. For sonar-pro, you can set up to 4096 tokens. Adjust this parameter based on the expected length of the answer.
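If the API follows the OpenAI chat-completions convention, a reply cut off by max_tokens reports a finish_reason of 'length'; the following check is a sketch based on that assumption:

```javascript
// Detect a reply truncated by max_tokens. Assumes the OpenAI-style
// response format, where a cut-off choice carries finish_reason 'length'.
function isTruncated(responseData) {
  return responseData.choices?.[0]?.finish_reason === 'length';
}
```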

Perplexity API Models: Sonar Pro vs Sonar Reasoning Pro vs Sonar Deep Research

Item              Sonar Pro                       Sonar Reasoning Pro                Sonar Deep Research
Speed             Fast                            Moderate                           Slow
Max tokens        4096                            8192                               16384
Best for          General Q&A and quick searches  Multi-step reasoning and analysis  In-depth research with citations
Cost per request  Low                             Medium                             High

You have now set up a working Node.js client for the Perplexity API. You can send queries, receive streaming responses, and handle common errors. Next, explore the sonar-deep-research model for tasks that require detailed citations and structured outputs. For advanced usage, consider adding a caching layer with Redis to reduce API costs for repeated questions.
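The caching idea can be sketched with an in-memory Map before reaching for Redis; the same get-or-fetch pattern applies either way (illustrative helper that wraps any ask function, such as askPerplexity from the client above):

```javascript
// Wrap an async ask(prompt) function so repeated prompts are answered
// from a cache instead of triggering another billed API call.
function createCachedClient(ask, cache = new Map()) {
  return async function cachedAsk(prompt) {
    if (cache.has(prompt)) return cache.get(prompt);
    const answer = await ask(prompt);
    cache.set(prompt, answer);
    return answer;
  };
}
```

A Redis-backed version would replace the Map with get/set calls against a Redis client, gaining persistence across processes.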
