Mastodon ‘Database Connection Timeout’ on Self-Hosted Instance: Fix

When you self-host a Mastodon instance, the service depends on a PostgreSQL database to store posts, accounts, and timeline data. A “Database Connection Timeout” error means your Mastodon web or streaming processes cannot reach the database within the configured time limit. This typically occurs due to resource exhaustion, misconfigured connection pools, or network latency between the application server and the database server. This article explains the root causes of this timeout error and provides step-by-step fixes to restore normal operation.

Key Takeaways: Restoring Database Connectivity on Your Mastodon Instance

  • PostgreSQL max_connections setting: Limits simultaneous connections; increase it if your instance has many users or services.
  • Mastodon DB_POOL environment variable: Controls the connection pool size per process; size it so the combined pools of all processes stay below the PostgreSQL limit.
  • pg_isready command: Quick diagnostic tool to check if PostgreSQL is accepting connections.

Why Mastodon Reports a Database Connection Timeout

The Mastodon application uses the Ruby on Rails framework with the ActiveRecord ORM to communicate with PostgreSQL. When a web request or background job tries to open a database connection, the system waits for a pool slot or a TCP handshake. If no slot becomes available within the timeout window (5 seconds by default), Rails raises an ActiveRecord::ConnectionTimeoutError. This error appears in logs as “could not obtain a database connection within 5.000 seconds” or similar.
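
For reference, Rails takes the pool size from config/database.yml. A simplified sketch of the relevant block, assuming the usual ERB pattern (Mastodon’s actual file contains more settings; treat this as illustrative, not a copy of it):

    # config/database.yml (simplified sketch, not Mastodon's exact file)
    production:
      adapter: postgresql
      pool: <%= ENV['DB_POOL'] || 5 %>   # connection slots per process
      # the pool checkout timeout defaults to 5 seconds unless overridden

This is why raising the DB_POOL environment variable changes the per-process pool without editing database.yml itself.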

Common Root Causes

Three factors cause the majority of timeouts:

  • Connection pool exhaustion: Each Mastodon process — Puma web server, Sidekiq workers, and streaming API — borrows connections from a local pool. If the pool size is too small and all slots are in use, new requests queue and eventually time out.
  • PostgreSQL connection limit reached: The database itself has a hard limit (default 100 connections). If Mastodon plus any other clients exceed this, PostgreSQL refuses new connections, typically logging “FATAL: sorry, too many clients already”. The query sketch after this list shows how to attribute active connections to each client.
  • Network or resource bottlenecks: High CPU or memory usage on the database server, or a slow network link between the Mastodon container and the database host, can delay the handshake past the timeout.
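
Before changing any settings, it helps to see who is actually holding connections. A minimal diagnostic sketch using the standard pg_stat_activity view (the application_name labels depend on what each client reports):

    sudo -u postgres psql -c "SELECT application_name, state, count(*)
                                FROM pg_stat_activity
                               GROUP BY application_name, state
                               ORDER BY count(*) DESC;"

If most rows show idle connections from Puma or Sidekiq, the pools are oversized for the limit; if nearly everything is active, the database is genuinely saturated.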

Steps to Fix the Database Connection Timeout

These steps assume you have SSH access to your Mastodon server and can edit the environment configuration file — typically .env.production in the Mastodon directory. You also need sudo access to restart services and modify PostgreSQL settings.

  1. Check the current PostgreSQL connection limit
    Run sudo -u postgres psql -c "SHOW max_connections;" on the database server. The default is 100. If your instance serves more than a few dozen active users, increase this value.
  2. Increase max_connections in postgresql.conf
    Edit /etc/postgresql/<version>/main/postgresql.conf, where <version> is your PostgreSQL major version (check with ls /etc/postgresql/). Find the line max_connections = 100 and raise it to 200 or higher, for example max_connections = 300. Restart PostgreSQL with sudo systemctl restart postgresql. A combined sketch of this change and the step 3 change appears after this list.
  3. Verify Mastodon’s DB_POOL setting
    Open .env.production in the Mastodon home directory. Look for DB_POOL. If absent, add DB_POOL=25. This value should not exceed max_connections divided by the total number of Mastodon processes, and for Sidekiq the pool should be at least as large as its concurrency so every worker thread can hold a connection. For a single Puma and one Sidekiq process, 25 is safe.
  4. Restart Mastodon services
    Run systemctl restart mastodon-web mastodon-sidekiq mastodon-streaming to load the new pool setting. If you use Docker Compose, run docker-compose restart.
  5. Monitor connection usage
    Use sudo -u postgres psql -c "SELECT count(*) FROM pg_stat_activity;" to see active connections. Compare this to the max_connections limit. If the count stays near the limit, you need a larger pool or more database resources.
  6. Reduce Sidekiq concurrency if needed
    In .env.production, check SIDEKIQ_CONCURRENCY. The default is 10. Each concurrent worker uses one database connection. If you have many Sidekiq processes, reduce concurrency to 5 and test.
  7. Add a database connection pooler like PgBouncer
    If your instance grows beyond roughly 500 active users, install PgBouncer. It acts as a lightweight proxy that reuses database connections. Configure PgBouncer with a pool size of 50 and set Mastodon’s DB_POOL to a small value such as 5, so connection spikes are absorbed by the pooler instead of hitting PostgreSQL directly. A minimal PgBouncer sketch follows this list.
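
As referenced in steps 2 and 3, the two settings travel together. A minimal sketch of both changes, assuming a small single-server instance and the standard /home/mastodon/live layout (the numbers are illustrative starting points, not tuned recommendations):

    # /etc/postgresql/<version>/main/postgresql.conf
    max_connections = 200          # server-wide hard cap; each slot reserves RAM

    # /home/mastodon/live/.env.production
    DB_POOL=25                     # per process; keep (processes x DB_POOL) below max_connections

Restart PostgreSQL first and the Mastodon services second, so the larger server limit is already in place when the bigger pools reconnect.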
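For step 7, a minimal PgBouncer sketch. The database name, addresses, and auth file here are assumptions for illustration; note that Mastodon’s documentation calls for transaction pooling together with PREPARED_STATEMENTS=false, because prepared statements do not survive transaction-level pooling:

    ; /etc/pgbouncer/pgbouncer.ini (abridged illustrative sketch)
    [databases]
    mastodon_production = host=127.0.0.1 port=5432 dbname=mastodon_production

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    pool_mode = transaction        ; required for Mastodon
    default_pool_size = 50
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt

Point Mastodon at the pooler by setting DB_HOST=127.0.0.1, DB_PORT=6432, and PREPARED_STATEMENTS=false in .env.production, then restart the Mastodon services.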

If Mastodon Still Has Issues After the Main Fix

Timeout Persists After Increasing max_connections and DB_POOL

If the error continues, the database server itself may be overloaded. Check CPU and memory usage with htop or top. If PostgreSQL uses 100% CPU, consider upgrading the server hardware or moving the database to a dedicated machine. Also verify that the Mastodon server can reach the database host on port 5432 without packet loss. Run ping -c 10 [database-ip] and look for any dropped packets.
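
A quick two-part connectivity check, combining the reachability test above with PostgreSQL’s own readiness probe (replace 192.0.2.10, a documentation placeholder, with your database host):

    # ICMP reachability: watch for dropped packets or high latency
    ping -c 10 192.0.2.10

    # Is PostgreSQL accepting connections on the standard port?
    pg_isready -h 192.0.2.10 -p 5432

pg_isready prints “accepting connections” on success; “no response” usually points at the network or a stopped server rather than at pool settings.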

Connection Timeout Only Affects Sidekiq Background Jobs

Sidekiq workers handle tasks like delivering toots and processing media. If they time out but the web interface works, the Sidekiq process may have a separate REDIS_URL or database configuration. Ensure .env.production contains the same DATABASE_URL for both Puma and Sidekiq. Also check that Redis is not overloaded — a slow Redis can cause Sidekiq to hold database connections longer than expected.
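
To confirm that both process types read the same configuration and that Redis responds promptly, a quick sketch (the path assumes the standard /home/mastodon/live layout):

    # Puma and Sidekiq should see identical values for these keys
    grep -E '^(DATABASE_URL|DB_HOST|DB_PORT|REDIS_URL)' /home/mastodon/live/.env.production

    # Sample Redis round-trip latency; runs until you press Ctrl-C
    redis-cli --latency

redis-cli --latency reports min, average, and max round-trip times in milliseconds; a sustained average above a few milliseconds on localhost suggests Redis itself is the bottleneck.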

Error Appears After Adding a New Mastodon Service

If you recently added a second Puma process or an extra Sidekiq queue, the total number of database connections may exceed max_connections. Multiply the number of processes by their DB_POOL values. For example, two Puma processes with DB_POOL=25 each use up to 50 connections. Add Sidekiq with concurrency 10 and you reach 60. Ensure the PostgreSQL max_connections is at least 20% above this total.
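
The budget is simple arithmetic, but it is easy to lose track of when services are added piecemeal. A worked version of the example above, using the illustrative numbers from this section:

    # 2 Puma processes x DB_POOL=25               = 50 connections
    # 1 Sidekiq process, concurrency 10           = 10 connections
    # Total Mastodon demand                       = 60 connections
    # 20% headroom for psql, backups, monitoring  = 60 x 1.2 = 72
    # => set max_connections to at least 72; 100 leaves a comfortable margin

Remember that other clients, such as pg_dump jobs or monitoring agents, draw from the same max_connections budget as Mastodon.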

Mastodon DB_POOL vs PostgreSQL max_connections: Comparison

Item | DB_POOL (Mastodon) | max_connections (PostgreSQL)
--- | --- | ---
Description | Maximum number of database connections per Mastodon process | Maximum total connections allowed by the database server
Default value | 5 or 10, depending on Mastodon version | 100
Where to set it | .env.production file | postgresql.conf
Effect of too low | Connection timeout errors under load | New connections rejected after limit is reached
Effect of too high | Wastes memory; may exceed PostgreSQL limit | Consumes RAM; can degrade performance
Recommended for small instance | 10 | 100
Recommended for medium instance | 25 | 200

Now you can diagnose and resolve “Database Connection Timeout” errors on your self-hosted Mastodon instance by adjusting max_connections and DB_POOL settings. Start by checking that PostgreSQL is accepting connections with pg_isready, then verify the current limit with SHOW max_connections. After applying the fixes, monitor connection counts with pg_stat_activity to confirm the timeout no longer occurs. For long-term stability on larger instances, consider deploying PgBouncer to pool connections efficiently and prevent future timeouts.