Mastodon Instance Slow to Load Federated Timeline: Causes

When you open the Federated timeline in Mastodon, posts can take several seconds or even minutes to appear. This delay makes browsing the public feed frustrating, especially on large instances. The main cause is the sheer volume of data the server must fetch, filter, and display from other instances across the fediverse. This article explains why the Federated timeline loads slowly and which technical factors contribute to the lag.

Mastodon instances do not pre-load the entire Federated timeline for every user. Instead, the server queries its internal database for public posts from known instances on demand. When that database is large or the server hardware is limited, the query can be slow. This article covers the specific technical reasons behind slow Federated timeline loading and how instance size, database performance, and network latency each play a role.

Key Takeaways: Why the Federated Timeline Loads Slowly

  • Sidekiq queue processing delay: New posts from remote instances wait in a job queue before being stored in the local database, delaying availability in the timeline.
  • PostgreSQL query performance on large tables: Instances with millions of statuses cause the database engine to scan more rows, increasing query response time for timeline rendering.
  • Network latency between instances: Fetching posts from distant or overloaded remote servers adds round-trip time before content appears in the local Federated feed.

Why the Federated Timeline Suffers From Slow Loading

The Federated timeline is a live view of public posts from all instances that your instance knows about. Mastodon stores these posts in a single database table called statuses. When you open the Federated timeline, the Mastodon web app sends a request to the server API endpoint /api/v1/timelines/public. The server then runs a SQL query against the statuses table, filtering for public posts and ordering them by created_at descending.
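
In miniature, the server-side query looks roughly like the sketch below, which uses SQLite as a lightweight stand-in for PostgreSQL (the table and column names follow this article; the sample rows are made up for illustration):

```python
import sqlite3

# Minimal stand-in for Mastodon's statuses table, reduced to the
# columns the Federated timeline query touches.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE statuses (
    id INTEGER PRIMARY KEY,
    visibility INTEGER,   -- 0 = public in Mastodon's enum
    local INTEGER,        -- 0 = post arrived from a remote instance
    created_at TEXT
)""")
con.executemany(
    "INSERT INTO statuses (visibility, local, created_at) VALUES (?, ?, ?)",
    [
        (0, 0, "2024-05-01T10:00:00"),  # public, remote
        (0, 1, "2024-05-01T10:01:00"),  # public, local
        (1, 0, "2024-05-01T10:02:00"),  # unlisted, remote (filtered out)
        (0, 0, "2024-05-01T10:03:00"),  # public, remote (newest)
    ],
)

# The Federated timeline: public posts only, newest first.
rows = con.execute(
    "SELECT id, created_at FROM statuses "
    "WHERE visibility = 0 ORDER BY created_at DESC LIMIT 20"
).fetchall()
print(rows)  # newest public posts first; the unlisted post is excluded
```

On a real instance the same shape of query runs against millions of rows, which is why index coverage matters so much.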

On a small instance with fewer than 10,000 total statuses, this query completes in under 100 milliseconds. On a large instance with 1 million or more statuses, the same query can take 2 to 5 seconds. The delay is caused by the database engine scanning a large index to find the most recent posts. Without proper database indexing or sufficient RAM, the query becomes the primary bottleneck.

Another factor is the Sidekiq background job queue. When a remote instance sends a new post via ActivityPub, the receiving instance does not immediately insert it into the statuses table. Instead, the post enters a Sidekiq queue. If the queue is backlogged with many jobs, the post can take minutes to appear in the Federated timeline. The default Sidekiq concurrency setting of 25 threads on a single CPU core can quickly become overwhelmed on instances serving thousands of active users.
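
A back-of-envelope calculation shows how queue depth and concurrency interact; the half-second average job time below is an assumed figure for illustration, not a measured Mastodon value:

```python
def drain_seconds(backlog_jobs: int, concurrency: int, avg_job_seconds: float) -> float:
    """Rough time to clear a Sidekiq backlog, assuming uniform jobs and
    fully utilized threads (both optimistic assumptions)."""
    return backlog_jobs * avg_job_seconds / concurrency

# A 5,000-job backlog at the default 25 threads, assuming ~0.5 s per job:
print(drain_seconds(5000, 25, 0.5))   # 100.0 seconds
# Doubling concurrency halves the drain time -- if the CPU can keep up:
print(drain_seconds(5000, 50, 0.5))   # 50.0 seconds
```

The arithmetic also shows why raising concurrency on a saturated CPU does not help: the avg_job_seconds term grows as threads contend for cores.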

Database Indexing and Query Optimization

Mastodon uses PostgreSQL, with a B-tree index on the statuses.created_at column. This index helps the database find recent posts quickly. However, as the table grows, the index itself becomes larger and slower to traverse. A missing or outdated index on statuses.visibility and statuses.local forces the database to filter public posts only after walking the created_at index, which can more than double the query time.

Instance administrators can check query performance by running EXPLAIN ANALYZE on the timeline query. If the output shows a sequential scan instead of an index scan, the database is reading every row in the table. This is the most common cause of Federated timeline slowness on older instances where autovacuum has fallen behind and VACUUM or ANALYZE has never been run manually.
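
When reviewing many plans at once, a small helper like this hypothetical one can flag the problem; the node labels ("Seq Scan", "Index Scan") are PostgreSQL's real plan-node names, while the sample excerpts are made up:

```python
def uses_seq_scan(explain_output: str) -> bool:
    """True if any node in EXPLAIN / EXPLAIN ANALYZE output is a
    sequential scan over a table."""
    return any("Seq Scan" in line for line in explain_output.splitlines())

# Hypothetical EXPLAIN ANALYZE excerpts:
slow_plan = "Sort  (cost=...)\n  ->  Seq Scan on statuses  (cost=...)"
fast_plan = "Limit  (cost=...)\n  ->  Index Scan using index_statuses_on_id on statuses"
print(uses_seq_scan(slow_plan))   # True  -- timeline query reads every row
print(uses_seq_scan(fast_plan))   # False -- index is being used
```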

Network Latency and Remote Instance Response Times

Mastodon does not fetch posts from remote instances in real time. Instead, it relies on the remote instance pushing updates via HTTP POST requests to the local instance’s inbox endpoint. If the remote instance is geographically far or running on low-bandwidth hardware, the delivery of the ActivityPub payload can take several seconds. During peak hours, the remote instance may queue outgoing deliveries, adding further delay.

The local instance also periodically polls remote instances for posts from accounts it follows. This polling interval is controlled by the FETCH_REMOTE_STATUSES environment variable, which defaults to 15 minutes. If a remote instance becomes temporarily unreachable, the local instance retries the connection up to 5 times with exponential backoff. Each retry adds 30 seconds to 2 minutes of delay before the post appears in the Federated timeline.
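
The cumulative retry delay can be sketched as follows; the doubling-with-cap schedule is an assumption chosen to match the 30-second-to-2-minute range described above, not Mastodon's exact retry curve:

```python
def retry_delays(max_retries: int = 5, base: float = 30.0, cap: float = 120.0) -> list[float]:
    """Hypothetical exponential backoff: each wait doubles from `base`
    seconds until it hits `cap` (2 minutes), for `max_retries` attempts."""
    return [min(base * 2 ** i, cap) for i in range(max_retries)]

delays = retry_delays()
print(delays)       # [30.0, 60.0, 120.0, 120.0, 120.0]
print(sum(delays))  # 450.0 -- worst-case seconds before the post arrives
```

Even under this optimistic schedule, a briefly unreachable remote instance can hold a post out of the Federated timeline for seven to eight minutes.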

Steps to Diagnose and Improve Federated Timeline Loading Speed

Before attempting any fixes, confirm that the slowness is specific to the Federated timeline and not the entire instance. Open the Home timeline and a local hashtag feed. If those load instantly, the issue is isolated to the Federated timeline. The following steps help identify the exact bottleneck.

  1. Check Sidekiq queue depth
    Open a terminal on the Mastodon server. Run sudo docker exec mastodon_sidekiq_1 sidekiqmon or sudo -u mastodon sidekiqmon if using system packages. Look at the default queue size. If it exceeds 5000 jobs, the queue is backlogged. A high queue depth means new posts wait longer before being stored in the database.
  2. Review PostgreSQL query performance
    Connect to the Mastodon database with sudo -u postgres psql mastodon_production. Run EXPLAIN ANALYZE SELECT id, created_at FROM statuses WHERE visibility = 0 AND local = false ORDER BY created_at DESC LIMIT 20;. Look for Seq Scan in the output. If present, the query is not using an index. Run VACUUM ANALYZE statuses; to refresh the planner statistics so the index can be chosen again.
  3. Check network latency to top remote instances
    Identify the remote instances that send the most posts to your Federated timeline. Use curl -w '%{time_total}' -o /dev/null -s https://remote.instance/api/v1/instance to measure response time. If the time exceeds 2 seconds, that remote instance is a bottleneck. Consider blocking or limiting that instance through the moderation panel.
  4. Increase Sidekiq concurrency
    Edit the Mastodon environment file .env.production. Set SIDEKIQ_CONCURRENCY=50 if the server has at least 4 CPU cores. Restart Sidekiq with systemctl restart mastodon-sidekiq. Higher concurrency allows more posts to be processed simultaneously, reducing queue wait time.
  5. Add a composite index on statuses table
    As a Mastodon database superuser, run CREATE INDEX CONCURRENTLY IF NOT EXISTS index_statuses_on_visibility_and_local_and_created_at ON statuses (visibility, local, created_at DESC);. This index allows PostgreSQL to retrieve public non-local posts in sorted order without scanning the entire table. The query time can drop from seconds to under 50 milliseconds.
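
The effect of the composite index in step 5 can be sanity-checked in miniature. The sketch below uses SQLite as a stand-in for PostgreSQL (plan wording differs, and EXPLAIN QUERY PLAN is not EXPLAIN ANALYZE), but it shows the planner switching from a full-table scan to the index:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE statuses (id INTEGER PRIMARY KEY, visibility INTEGER, "
    "local INTEGER, created_at TEXT)"
)

query = ("SELECT id FROM statuses WHERE visibility = 0 AND local = 0 "
         "ORDER BY created_at DESC LIMIT 20")

# Without the index the planner has no choice but a full-table scan.
before = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(before[0][3])   # e.g. "SCAN statuses"

con.execute(
    "CREATE INDEX index_statuses_on_visibility_and_local_and_created_at "
    "ON statuses (visibility, local, created_at DESC)"
)
after = con.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(after[0][3])    # now a search using the composite index
```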

If Mastodon Still Has Issues After the Main Fix

Federated Timeline Shows Old Posts First

When the database index is missing and the query falls back to a sequential scan, PostgreSQL must scan the table and then sort the entire result set in memory, which for large tables can take several seconds; while the query crawls, the timeline can appear to surface posts from weeks ago before the most recent ones render. The fix is the composite index from step 5 above. After creating the index, run ANALYZE statuses; to update the query planner.

Sidekiq Queue Grows Faster Than It Processes

If the Sidekiq queue keeps growing even after increasing concurrency, the server CPU or RAM is saturated. Use htop to check CPU usage. If all cores are at 100%, the server hardware cannot handle the instance load. The only long-term fix is upgrading to a server with more CPU cores or reducing the number of active users by limiting signups.

Federated Timeline Loads in Browser but Not in Third-Party Apps

Third-party apps may use a different API endpoint that applies additional filters. For example, the Tusky app for Android uses /api/v1/timelines/public?local=false&only_media=false. If the server has a rate limit or a custom middleware that blocks non-browser requests, the app sees a timeout. Check the Mastodon reverse proxy logs (nginx or Apache) for 429 or 503 responses to the API endpoint.
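
When grepping the proxy logs, a sketch like this can isolate throttled API hits; it assumes nginx's default "combined" log format, and the sample lines and IP are fabricated:

```python
def throttled_requests(log_lines, path="/api/v1/timelines/public"):
    """Return log lines where the given API path received a 429 or 503.
    In the combined format, the status code is the first field after the
    closing quote of the request."""
    hits = []
    for line in log_lines:
        if path in line and '" ' in line:
            status = line.split('" ', 1)[1].split(" ", 1)[0]
            if status in ("429", "503"):
                hits.append(line)
    return hits

sample = [
    '203.0.113.7 - - [01/May/2024:10:00:00 +0000] "GET /api/v1/timelines/public HTTP/1.1" 200 512 "-" "Tusky"',
    '203.0.113.7 - - [01/May/2024:10:00:05 +0000] "GET /api/v1/timelines/public HTTP/1.1" 429 94 "-" "Tusky"',
]
print(len(throttled_requests(sample)))  # 1 -- the rate-limited request
```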

Mastodon Instance Performance: Federated Timeline vs Home Timeline

  • Data source. Federated: all public posts from every remote instance the server knows about. Home: posts from accounts the user follows, plus boosts from those accounts.
  • Database query complexity. Federated: scan of the statuses table filtered by visibility and the local flag. Home: indexed lookups on the followers and statuses tables keyed by user ID.
  • Typical query time on a large instance. Federated: 2–5 seconds. Home: under 200 milliseconds.
  • Cacheability. Federated: cannot be cached per user because remote content varies constantly. Home: cacheable per user for up to 60 seconds.
  • Primary bottleneck. Federated: database index and Sidekiq queue depth. Home: follow count and database join performance.

After diagnosing the cause, you can apply the database indexing fix and adjust Sidekiq concurrency to reduce Federated timeline loading time. Monitor the Sidekiq queue depth daily with a cron job that logs the queue size. If the queue stays below 1,000 jobs, the instance can handle its current load. For persistent slowness, consider setting the LIMITED_FEDERATION_MODE environment variable to restrict which remote instances your instance accepts posts from, which reduces the total number of statuses in the database.
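
A minimal sketch of that daily monitoring idea, assuming the admin supplies the queue depth from elsewhere (for example via Sidekiq::Stats in a Ruby one-liner); the file path, log format, and sample depth are arbitrary choices:

```python
import datetime
import os
import tempfile

def log_queue_depth(depth: int, logfile: str) -> str:
    """Append one timestamped Sidekiq queue-depth sample; intended to be
    invoked from cron on whatever schedule the admin prefers."""
    now = datetime.datetime.now(datetime.timezone.utc).isoformat()
    line = f"{now} default_queue={depth}\n"
    with open(logfile, "a") as fh:
        fh.write(line)
    return line

# Hypothetical daily sample; depths consistently above ~1000 jobs
# suggest the instance is falling behind its federation traffic.
logfile = os.path.join(tempfile.gettempdir(), "sidekiq_depth.log")
entry = log_queue_depth(742, logfile)
print(entry.strip())
```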