If you run a self-hosted Mastodon instance, losing the database means losing all user accounts, posts, follows, and settings. The database is the central component that stores every piece of data except media files and custom emoji. Without a recent backup, an accidental deletion, server crash, or software update failure can destroy your community. This article explains how to create a consistent PostgreSQL dump of your Mastodon database, automate the process, and verify the backup file is usable.
Key Takeaways: Backing Up Your Mastodon PostgreSQL Database
- `pg_dump` with custom format: Creates a compressed backup of the Mastodon database that supports parallel restore.
- systemd timer unit: Automates daily database backups without relying on cron.
- `pg_restore --list` check: Verifies the integrity of the backup file before you need it for recovery.
Why the Mastodon Database Needs a Dedicated Backup Strategy
Mastodon uses PostgreSQL as its database backend. The database contains user profiles, statuses, follows, blocks, reports, and application settings. Media attachments and custom emoji are stored separately on the filesystem, but the database holds all the references and metadata.
A simple file copy of the PostgreSQL data directory is not safe. PostgreSQL uses write-ahead logs and may have unflushed data in memory. Running pg_dump or pg_dumpall while the database is live produces a consistent snapshot without locking the entire database. The custom format (-Fc) compresses the output and allows parallel restores, which is essential for larger instances.
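To make the difference concrete, here is a minimal sketch; the data directory path is an example and varies by distribution and PostgreSQL version:

```bash
# Unsafe: a raw copy of the live data directory can capture a torn,
# inconsistent state because of buffered writes and write-ahead logging.
# cp -a /var/lib/postgresql/15/main /backups/raw-copy    # do not rely on this

# Safe: pg_dump reads a consistent snapshot while the server keeps running.
pg_dump -Fc -h localhost -U mastodon mastodon_production > /tmp/mastodon_test.dump
```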
The built-in Mastodon backup rake task mastodon:backup:create is a convenient wrapper, but it does not cover every scenario. A manual pg_dump gives you full control over compression, parallelism, and encryption. For production instances, both methods should be understood.
Steps to Create a PostgreSQL Dump of Your Mastodon Instance
Before starting, confirm you have the following:
- SSH access to the Mastodon server with sudo privileges.
- PostgreSQL client tools installed (`pg_dump`, `psql`).
- Enough free disk space for the backup file. A small instance with 100 users may produce a 50 MB dump; a large instance with 10,000 users may produce 5 GB or more.
- The Mastodon database name, username, and password. These are in the `.env.production` file under `DB_NAME`, `DB_USER`, and `DB_PASS` (a quick way to read them is shown after this list).
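If you want to pull those values out quickly, a simple grep works; this assumes the file lives at `~/.env.production` as described in the steps below:

```bash
# Print the database settings from the Mastodon environment file
grep -E '^DB_(HOST|PORT|NAME|USER|PASS)=' ~/.env.production
```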
1. Switch to the mastodon user. Run `sudo -i -u mastodon` to work as the user that owns the Mastodon process. This avoids permission errors when reading the `.env.production` file.
2. Read the database credentials. Open `~/.env.production` and note the values for `DB_NAME`, `DB_USER`, and `DB_PASS`. The default database name is `mastodon_production` and the default user is `mastodon`.
3. Run pg_dump with the custom format. Create the target directory first with `mkdir -p /home/mastodon/backups`, then execute the following command, replacing the placeholders with your actual values:

   ```
   pg_dump -Fc -h localhost -U DB_USER DB_NAME > /home/mastodon/backups/mastodon_$(date +%Y%m%d_%H%M%S).dump
   ```

   The `-Fc` flag selects the custom format, which is compressed and supports parallel restore. The `-h localhost` flag connects to the local PostgreSQL instance. The output filename includes a timestamp so you can keep multiple versions.
4. Enter the database password. pg_dump prompts for the password interactively. To avoid prompts in scripts, create a `.pgpass` file:

   ```
   echo 'localhost:5432:DB_NAME:DB_USER:DB_PASS' > /home/mastodon/.pgpass && chmod 600 /home/mastodon/.pgpass
   ```

   Now pg_dump reads the password automatically.
5. Verify the dump file. Run `pg_restore --list /path/to/your.dump | head -20`. This lists the contents of the dump without restoring it. If the command returns a table of contents, the backup is valid; if it returns an error, the dump is corrupt and must be recreated.
6. Copy the backup to a remote location. Use `rsync` or `scp` to transfer the dump file to a separate server or cloud storage. Never keep the only copy on the same machine as the Mastodon instance. Example:

   ```
   rsync -avz /home/mastodon/backups/ user@offsite-server:/backups/mastodon/
   ```
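Steps 5 and 6 can be combined into a small guard script so a corrupt dump is never shipped off-site. This is a sketch, reusing the backup directory and remote target from the examples above:

```bash
#!/bin/bash
# Verify the newest dump, then copy it off-site only if it is readable.
set -euo pipefail

LATEST=$(ls -t /home/mastodon/backups/*.dump | head -1)

if pg_restore --list "$LATEST" > /dev/null; then
    echo "OK: $LATEST has a readable table of contents"
    rsync -avz "$LATEST" user@offsite-server:/backups/mastodon/
else
    echo "ERROR: $LATEST failed verification, not copying" >&2
    exit 1
fi
```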
Using the Mastodon Rake Task for Backup
Mastodon provides a rake task that wraps pg_dump and also backs up media files. Run this as the mastodon user:
1. Run the backup rake task:

   ```
   RAILS_ENV=production bundle exec rake mastodon:backup:create
   ```

   This creates a `.dump` file and a `.tar.gz` archive of the `public/system` directory in `~/backups/`.
2. Locate the output. The files are stored in `/home/mastodon/backups/`. The database dump is named `mastodon_production.dump` and the media archive is named `mastodon_production.tar.gz`.
3. Remove old backups. The rake task does not delete old backups; create a separate script that removes files older than 7 or 30 days, as in the sketch below.
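A minimal cleanup sketch, assuming the rake task writes into `/home/mastodon/backups/` as described above:

```bash
# Delete database dumps and media archives older than 30 days
find /home/mastodon/backups -type f \( -name '*.dump' -o -name '*.tar.gz' \) -mtime +30 -delete
```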
Automating Database Backups with systemd
Manual backups are unreliable. Use systemd timers to run pg_dump daily. This example assumes you have already created the .pgpass file.
1. Create the backup script. Write a script at `/usr/local/bin/mastodon-backup-db.sh` with the following content:

   ```bash
   #!/bin/bash
   set -euo pipefail  # abort on errors so systemd reports a failed backup run

   BACKUP_DIR=/home/mastodon/backups
   DB_NAME=mastodon_production
   DB_USER=mastodon
   TIMESTAMP=$(date +%Y%m%d_%H%M%S)

   pg_dump -Fc -h localhost -U "$DB_USER" "$DB_NAME" > "$BACKUP_DIR/mastodon_$TIMESTAMP.dump"

   # Remove backups older than 30 days
   find "$BACKUP_DIR" -name '*.dump' -mtime +30 -delete
   ```

2. Make the script executable:

   ```
   sudo chmod +x /usr/local/bin/mastodon-backup-db.sh
   ```

3. Create a systemd service unit at `/etc/systemd/system/mastodon-backup-db.service` with:

   ```ini
   [Unit]
   Description=Mastodon database backup

   [Service]
   Type=oneshot
   User=mastodon
   ExecStart=/usr/local/bin/mastodon-backup-db.sh
   ```

4. Create a systemd timer unit at `/etc/systemd/system/mastodon-backup-db.timer` with:

   ```ini
   [Unit]
   Description=Daily Mastodon database backup

   [Timer]
   OnCalendar=daily
   Persistent=true

   [Install]
   WantedBy=timers.target
   ```

5. Enable and start the timer:

   ```
   sudo systemctl daemon-reload
   sudo systemctl enable mastodon-backup-db.timer
   sudo systemctl start mastodon-backup-db.timer
   ```

To test the backup immediately, run `sudo systemctl start mastodon-backup-db.service`.
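To confirm the timer is scheduled and the last run succeeded, check the timer and the service journal:

```
systemctl list-timers mastodon-backup-db.timer
journalctl -u mastodon-backup-db.service -n 20
```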
Common Backup Mistakes and How to Avoid Them
The pg_dump command hangs or times out on a large instance
Large instances with heavy write activity can make pg_dump run for a long time, since it holds a single consistent snapshot open for the entire dump. Add the `--no-blobs` flag to exclude large objects, or switch to the directory format, which is the only pg_dump output format that supports parallel dumping with `-j`; the custom format (`-Fc`) supports parallel restore but not parallel dump. A parallel dump and its matching restore are sketched below.
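The paths and the target database name in this sketch are examples:

```bash
# -Fd writes a directory (not a single file); -j 4 uses four parallel workers
pg_dump -Fd -j 4 -h localhost -U mastodon -f /home/mastodon/backups/mastodon_dir mastodon_production

# Restore with four parallel workers into an existing, empty database
pg_restore -j 4 -h localhost -U mastodon -d mastodon_restore_test /home/mastodon/backups/mastodon_dir
```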
Backup file grows too large due to unused data
PostgreSQL's autovacuum keeps tables healthy, but a periodic manual `VACUUM ANALYZE` reclaims dead-row space for reuse and refreshes planner statistics, which improves query performance. Schedule a weekly vacuum via a systemd timer: `psql -h localhost -U mastodon -d mastodon_production -c 'VACUUM ANALYZE;'`. Keep in mind that pg_dump exports only live rows, so the dump itself shrinks when you prune old data (for example with Mastodon's `tootctl statuses remove`) rather than when you vacuum.
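A sketch of the weekly vacuum units, following the same pattern as the backup timer above; the unit names and the psql path are examples:

```ini
# /etc/systemd/system/mastodon-vacuum.service
[Unit]
Description=Weekly VACUUM ANALYZE for the Mastodon database

[Service]
Type=oneshot
User=mastodon
ExecStart=/usr/bin/psql -h localhost -U mastodon -d mastodon_production -c 'VACUUM ANALYZE;'
```

```ini
# /etc/systemd/system/mastodon-vacuum.timer
[Unit]
Description=Run the Mastodon vacuum weekly

[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target
```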
Backup fails because the PostgreSQL user lacks permissions
The mastodon database user typically owns the database, so pg_dump works out of the box. If you created a separate backup user, note that `GRANT ALL PRIVILEGES ON DATABASE` only covers database-level privileges such as CONNECT; pg_dump also needs SELECT on every table. On PostgreSQL 14 and later, `GRANT pg_read_all_data TO backup_user;` provides that in one step.
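A sketch of creating such a role, assuming PostgreSQL 14 or later and access as the postgres system user; the role name and password are placeholders:

```bash
sudo -u postgres psql -d mastodon_production <<'SQL'
CREATE ROLE backup_user LOGIN PASSWORD 'change-me';
GRANT CONNECT ON DATABASE mastodon_production TO backup_user;
GRANT pg_read_all_data TO backup_user;
SQL
```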
pg_dump Custom Format vs Plain SQL Format
| Item | Custom Format (-Fc) | Plain SQL Format (-Fp) |
|---|---|---|
| Compression | Built-in compression reduces file size by 60-80% | No compression; output is plain text |
| Parallel restore | Supports pg_restore -j N for faster recovery | Single-threaded restore only |
| Selective restore | Can restore individual tables or schemas | Must restore entire dump |
| Readability | Binary file; not human-readable | Plain SQL; can be edited with a text editor |
| Compatibility | Requires same or newer PostgreSQL version for restore | Works across most PostgreSQL versions |
For Mastodon backups, the custom format is strongly recommended. The compression saves disk space, and the parallel restore feature reduces downtime during recovery. Use plain SQL only if you need to edit the dump before restoring.
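As an example of selective restore, the custom format lets you extract a single table from the dump without touching any database; the table name here is just an illustration:

```bash
# Write the schema and data for just the accounts table to a SQL file
pg_restore -t accounts -f accounts_only.sql /path/to/backup.dump
```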
After setting up automated backups, verify the process by restoring the dump to a test database on a separate machine. Run `pg_restore -d test_mastodon -j 4 /path/to/backup.dump` and confirm that the test database contains the expected tables; a full sketch follows below. This step confirms that your backup is not only created but also restorable. For additional safety, encrypt the dump with GPG before transferring it off-site: `gpg --encrypt --recipient your-email backup.dump`.
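A minimal test-restore sketch, assuming PostgreSQL client tools on the test machine; the database name is a placeholder:

```bash
# Create an empty database and restore into it with four parallel workers
createdb -U postgres test_mastodon
pg_restore -U postgres -d test_mastodon -j 4 /path/to/backup.dump

# Spot-check that core Mastodon tables came back
psql -U postgres -d test_mastodon -c '\dt' | head -20
psql -U postgres -d test_mastodon -c 'SELECT count(*) FROM accounts;'
```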