
How to Automate Database Backups with Cron (MySQL & PostgreSQL Examples)

Learn how to automate critical database backups with cron jobs. Complete MySQL and PostgreSQL examples with commands you can copy and paste. Don't lose your data—automate your backups today.

10 min read
By Cron Generator Team

Your database is the heart of your application. Customer data, transactions, user accounts, content—everything lives there. Losing that data isn't just inconvenient. It can destroy your business.

Yet countless developers and businesses run production databases without automated backups. They rely on manual backups (which get forgotten) or cloud provider snapshots (which might not cover everything). Don't be one of them.

This guide shows you exactly how to automate database backups using cron jobs. Copy-paste ready commands. No complexity. Just reliable, automated protection for your most critical asset.

Why Automated Database Backups Are Non-Negotiable

The Hard Truth About Data Loss

When (not if) disaster strikes, there are two kinds of companies:

  1. Those with automated backups who recover in hours
  2. Those without backups who lose everything

Data loss happens from:

  • 🔥 Hardware failures - Disks die, servers crash
  • 🐛 Software bugs - A bad deploy wipes your production tables
  • 👤 Human error - DROP TABLE users in the wrong terminal
  • 🏴‍☠️ Security breaches - Ransomware, malicious deletions
  • Data corruption - Subtle bugs that corrupt data over time
  • 🌪️ Infrastructure disasters - Data center outages, network failures

The Cost of Not Having Backups

Let's be blunt about what you're risking:

Financial Impact:

  • Average cost of a data breach: $3.86 million (IBM Cost of a Data Breach Report)
  • An often-cited estimate: 60% of companies that lose their data shut down within 6 months
  • Recovery from tape/manual backups: Days to weeks
  • Recovery from automated backups: Minutes to hours

Business Impact:

  • Lost customer trust (impossible to rebuild)
  • Legal liability (GDPR, data protection laws)
  • Competitive disadvantage (while competitors keep running)
  • Reputation damage (public data loss makes headlines)

Why Manual Backups Fail

"I'll remember to back it up weekly" never works:

  • ❌ You forget when busy
  • ❌ You skip it during deadlines
  • ❌ You assume someone else did it
  • ❌ You procrastinate because "nothing has gone wrong yet"
  • ❌ You stop doing it consistently after a few weeks

Automation removes human error. Set it once, protect forever.

The MySQL Backup Command

MySQL provides mysqldump, a command-line utility that creates a complete database backup as SQL statements. When restored, it rebuilds your database exactly as it was.

Basic MySQL Backup Command

mysqldump -u username -p database_name > backup.sql

What this does:

  • -u username - Your MySQL username
  • -p - Prompts for password (more on this below)
  • database_name - The database to back up
  • > backup.sql - Saves output to a file

Problems with this basic command:

  1. Prompts for password (doesn't work in cron)
  2. Overwrites the same file every time (no history)
  3. No error handling

Production-Ready MySQL Backup Command

Here's the command you should actually use:

mysqldump -u backup_user -p'YourSecurePassword' \
  --single-transaction \
  --routines \
  --triggers \
  --events \
  database_name | gzip > /backups/mysql/database_$(date +\%Y\%m\%d_\%H\%M\%S).sql.gz

Breaking down each option:

-u backup_user - Dedicated backup user (not root)

-p'YourSecurePassword' - Password inline in the command

  • Note: No space between -p and the password
  • Single quotes protect special characters
  • Caution: an inline password can end up in your shell history and, on some systems, the process list
  • Better: Use a .my.cnf file (shown below)

--single-transaction - Critical for InnoDB tables

  • Creates consistent snapshot without locking tables
  • Your app keeps running during backup
  • Ensures data integrity

--routines - Includes stored procedures and functions

--triggers - Includes triggers

--events - Includes scheduled events

| gzip - Compresses output (saves 70-90% space)

> /backups/mysql/database_$(date +\%Y\%m\%d_\%H\%M\%S).sql.gz

  • Timestamped filename: database_20250109_143022.sql.gz
  • Never overwrites previous backups
  • Easy to identify when the backup was taken
  • The backslashes before each % are required when the command is placed directly in a crontab line (cron treats an unescaped % as a newline); in a shell script they are harmless but unnecessary
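The timestamp piece can be sanity-checked on its own in a shell before wiring it into the full command:

```shell
# Build the timestamped filename the same way the backup command does
ts=$(date +%Y%m%d_%H%M%S)
name="database_${ts}.sql.gz"
echo "$name"
```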

Securing MySQL Password for Cron

Never put passwords in cron directly. Use a MySQL configuration file instead:

Step 1: Create .my.cnf file

nano ~/.my.cnf

Add this content:

[mysqldump]
user=backup_user
password=YourSecurePassword

Step 2: Secure the file

chmod 600 ~/.my.cnf
chown backup_user:backup_user ~/.my.cnf

This makes the file readable only by the backup user.
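Credential files have a way of losing their restrictive mode over time (restores, copies, chown scripts). A small helper like the sketch below, with an illustrative name and a throwaway demo path, can flag that from a backup script:

```shell
# Hypothetical helper: verify a credentials file is owner-only (mode 600)
check_perms() {
    local mode
    # GNU stat first, BSD stat as the fallback
    mode=$(stat -c%a "$1" 2>/dev/null || stat -f%Lp "$1") || return 2
    [ "$mode" = "600" ]
}

# Demo against a throwaway file
touch /tmp/demo.cnf && chmod 644 /tmp/demo.cnf
check_perms /tmp/demo.cnf || echo "fix: chmod 600 /tmp/demo.cnf"
chmod 600 /tmp/demo.cnf
check_perms /tmp/demo.cnf && echo "permissions OK"
```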

Step 3: Use the configuration file

Now your backup command becomes:

mysqldump --defaults-extra-file=/home/backup_user/.my.cnf \
  --single-transaction \
  --routines \
  --triggers \
  --events \
  database_name | gzip > /backups/mysql/database_$(date +\%Y\%m\%d_\%H\%M\%S).sql.gz

Backing Up ALL MySQL Databases

To back up every database on your MySQL server:

mysqldump --defaults-extra-file=/home/backup_user/.my.cnf \
  --all-databases \
  --single-transaction \
  --routines \
  --triggers \
  --events | gzip > /backups/mysql/all_databases_$(date +\%Y\%m\%d_\%H\%M\%S).sql.gz

The --all-databases flag includes everything: all databases, users, and permissions.

MySQL Backup Script with Error Handling

For production, wrap the command in a shell script:

#!/bin/bash
# File: /usr/local/bin/backup-mysql.sh

# Without pipefail, the if below would test gzip's exit status, not mysqldump's
set -o pipefail

# Configuration
BACKUP_DIR="/backups/mysql"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/mydb_$TIMESTAMP.sql.gz"
LOG_FILE="/var/log/mysql-backup.log"

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Log start
echo "[$(date)] Starting MySQL backup" >> "$LOG_FILE"

# Perform backup
if mysqldump --defaults-extra-file=/root/.my.cnf \
    --single-transaction \
    --routines \
    --triggers \
    --events \
    mydb | gzip > "$BACKUP_FILE"; then
    
    # Success
    echo "[$(date)] Backup successful: $BACKUP_FILE" >> "$LOG_FILE"
    
    # Verify backup file exists and is not empty
    if [ -s "$BACKUP_FILE" ]; then
        echo "[$(date)] Backup file verified: $(du -h "$BACKUP_FILE" | cut -f1)" >> "$LOG_FILE"
    else
        echo "[$(date)] ERROR: Backup file is empty!" >> "$LOG_FILE"
        exit 1
    fi
else
    # Failure
    echo "[$(date)] ERROR: Backup failed!" >> "$LOG_FILE"
    exit 1
fi

echo "[$(date)] Backup complete" >> "$LOG_FILE"

Make it executable:

chmod +x /usr/local/bin/backup-mysql.sh

The PostgreSQL Backup Command

PostgreSQL uses pg_dump for database backups. It's similar to mysqldump but with PostgreSQL-specific options.

Basic PostgreSQL Backup Command

pg_dump -U username database_name > backup.sql

Problems with basic command:

  • Prompts for password (won't work in cron)
  • No compression
  • No timestamping

Production-Ready PostgreSQL Backup Command

pg_dump -U backup_user -h localhost \
  -F c \
  -b \
  -v \
  -f /backups/postgres/database_$(date +\%Y\%m\%d_\%H\%M\%S).dump \
  database_name

Breaking down each option:

-U backup_user - PostgreSQL username

-h localhost - Database host (use your actual host)

-F c - Custom format (compressed and best for pg_restore)

  • Alternative: -F p for plain SQL (human-readable)
  • Alternative: -F t for tar format

-b - Include large objects (BLOBs)

-v - Verbose output (helpful for debugging)

-f /backups/postgres/database_$(date +\%Y\%m\%d_\%H\%M\%S).dump

  • Output filename with timestamp
  • .dump extension indicates custom format

database_name - The database to back up

Securing PostgreSQL Password for Cron

PostgreSQL uses a .pgpass file for password storage:

Step 1: Create .pgpass file

nano ~/.pgpass

Add this line:

hostname:port:database:username:password

For example:

localhost:5432:mydb:backup_user:YourSecurePassword

Or use * for wildcards:

*:*:*:backup_user:YourSecurePassword
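Because each .pgpass line must contain exactly five colon-separated fields, a quick format check helps catch typos. This is a simplistic sketch with an illustrative function name; it does not handle escaped colons inside passwords:

```shell
# Hypothetical sanity check for a .pgpass entry: exactly 5 fields
check_pgpass_line() {
    [ "$(printf '%s' "$1" | awk -F: '{print NF}')" -eq 5 ]
}

check_pgpass_line 'localhost:5432:mydb:backup_user:YourSecurePassword' && echo "format OK"
```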

Step 2: Secure the file

chmod 600 ~/.pgpass

PostgreSQL requires 600 permissions or it will ignore the file.

Step 3: Use without password prompt

Now your backup command works without prompting:

pg_dump -U backup_user -h localhost \
  -F c \
  -b \
  -v \
  -f /backups/postgres/mydb_$(date +\%Y\%m\%d_\%H\%M\%S).dump \
  mydb

Backing Up ALL PostgreSQL Databases

To back up all databases on your PostgreSQL server:

pg_dumpall -U postgres \
  -h localhost \
  -f /backups/postgres/all_databases_$(date +\%Y\%m\%d_\%H\%M\%S).sql

pg_dumpall includes:

  • All databases
  • Roles (users)
  • Tablespaces
  • Global objects

Note: pg_dumpall has no -F option; it always emits plain SQL. Pipe its output through gzip if you need compression.

PostgreSQL Backup Script with Error Handling

#!/bin/bash
# File: /usr/local/bin/backup-postgres.sh

# Without pipefail, the if below would test tee's exit status, not pg_dump's
set -o pipefail

# Configuration
BACKUP_DIR="/backups/postgres"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/mydb_$TIMESTAMP.dump"
LOG_FILE="/var/log/postgres-backup.log"

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Log start
echo "[$(date)] Starting PostgreSQL backup" >> "$LOG_FILE"

# Perform backup
if pg_dump -U backup_user -h localhost \
    -F c \
    -b \
    -v \
    -f "$BACKUP_FILE" \
    mydb 2>&1 | tee -a "$LOG_FILE"; then
    
    # Success
    echo "[$(date)] Backup successful: $BACKUP_FILE" >> "$LOG_FILE"
    
    # Verify backup
    if [ -s "$BACKUP_FILE" ]; then
        echo "[$(date)] Backup verified: $(du -h "$BACKUP_FILE" | cut -f1)" >> "$LOG_FILE"
    else
        echo "[$(date)] ERROR: Backup file is empty!" >> "$LOG_FILE"
        exit 1
    fi
else
    # Failure
    echo "[$(date)] ERROR: Backup failed!" >> "$LOG_FILE"
    exit 1
fi

echo "[$(date)] Backup complete" >> "$LOG_FILE"

Make it executable:

chmod +x /usr/local/bin/backup-postgres.sh

Putting It in Cron

Now let's schedule these backups to run automatically.

MySQL Cron Job Examples

Daily backup at 2 AM:

0 2 * * * /usr/local/bin/backup-mysql.sh >> /var/log/mysql-backup-cron.log 2>&1

Every 6 hours:

0 */6 * * * /usr/local/bin/backup-mysql.sh >> /var/log/mysql-backup-cron.log 2>&1

Every hour during business hours (9 AM - 5 PM):

0 9-17 * * * /usr/local/bin/backup-mysql.sh >> /var/log/mysql-backup-cron.log 2>&1

Complete cron entry with email notification on failure:

MAILTO=admin@example.com

# MySQL backup every day at 2 AM
0 2 * * * /usr/local/bin/backup-mysql.sh >> /var/log/mysql-backup-cron.log 2>&1 || echo "MySQL backup failed on $(hostname)" | mail -s "BACKUP FAILURE" admin@example.com

PostgreSQL Cron Job Examples

Daily backup at 3 AM:

0 3 * * * /usr/local/bin/backup-postgres.sh >> /var/log/postgres-backup-cron.log 2>&1

Every 12 hours:

0 */12 * * * /usr/local/bin/backup-postgres.sh >> /var/log/postgres-backup-cron.log 2>&1

Weekly backup every Sunday at 1 AM:

0 1 * * 0 /usr/local/bin/backup-postgres.sh >> /var/log/postgres-backup-cron.log 2>&1

Combined MySQL + PostgreSQL Backup Schedule

If you're running both databases:

# MySQL backup at 2 AM
0 2 * * * /usr/local/bin/backup-mysql.sh >> /var/log/mysql-backup-cron.log 2>&1

# PostgreSQL backup at 3 AM
0 3 * * * /usr/local/bin/backup-postgres.sh >> /var/log/postgres-backup-cron.log 2>&1

Stagger the times to avoid I/O contention.
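Another overlap risk: a slow dump can still be running when the next scheduled run fires. Wrapping the job in flock (the lock path and echoed command here are illustrative) makes the second invocation bail out instead of piling up:

```shell
# Run a command under an exclusive lock; -n makes a second invocation
# exit immediately instead of queueing behind the first
flock -n /tmp/mysql-backup.lock -c 'echo "backup would run here"'
```

In a crontab entry this becomes: 0 2 * * * flock -n /tmp/mysql-backup.lock /usr/local/bin/backup-mysql.sh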

Adding the Cron Jobs

Step 1: Open crontab

crontab -e

Step 2: Add your backup schedule

Paste the cron entries from above.

Step 3: Verify cron jobs are scheduled

crontab -l

Step 4: Test manually first

Before relying on cron, test your backup script manually:

/usr/local/bin/backup-mysql.sh

Check:

  1. Backup file was created
  2. File is not empty (ls -lh /backups/mysql/)
  3. Log file shows success
  4. You can restore from the backup (test on a dev database)
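Beyond "file is not empty", gzip can also confirm the archive isn't truncated or corrupt, without decompressing it to disk. A small helper (function name illustrative) that can be reused in scripts:

```shell
# Hypothetical helper: confirm a compressed dump exists, is non-empty,
# and decompresses cleanly
verify_archive() {
    [ -s "$1" ] && gzip -t "$1"
}

# Usage: verify_archive /backups/mysql/database_20250109_020000.sql.gz
```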

Bonus: Cleaning Up Old Backups

Backups consume disk space. Eventually, you'll run out. Implement automatic cleanup to keep only recent backups.

Delete Backups Older Than 30 Days

For MySQL:

find /backups/mysql/ -name "*.sql.gz" -type f -mtime +30 -delete

For PostgreSQL:

find /backups/postgres/ -name "*.dump" -type f -mtime +30 -delete

What this does:

  • find /backups/mysql/ - Search in backup directory
  • -name "*.sql.gz" - Match backup files
  • -type f - Files only (not directories)
  • -mtime +30 - Modified more than 30 days ago
  • -delete - Delete matching files
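Before trusting -delete, it's worth previewing what a given retention window would remove; swapping -delete for -print turns the command into a dry run. A sketch with an illustrative helper name:

```shell
# Hypothetical dry run: list what a cleanup pass would delete
preview_cleanup() {
    find "$1" -name "*.sql.gz" -type f -mtime "+$2" -print
}

# Usage: preview_cleanup /backups/mysql 30
```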

Cleanup Script with Safety Checks

#!/bin/bash
# File: /usr/local/bin/cleanup-old-backups.sh

BACKUP_DIR="/backups/mysql"
RETENTION_DAYS=30
LOG_FILE="/var/log/backup-cleanup.log"

echo "[$(date)] Starting backup cleanup (retention: $RETENTION_DAYS days)" >> "$LOG_FILE"

# Count files before deletion
BEFORE=$(find "$BACKUP_DIR" -name "*.sql.gz" -type f | wc -l)

# Delete old backups
DELETED=$(find "$BACKUP_DIR" -name "*.sql.gz" -type f -mtime +$RETENTION_DAYS -delete -print | wc -l)

# Count files after deletion
AFTER=$(find "$BACKUP_DIR" -name "*.sql.gz" -type f | wc -l)

echo "[$(date)] Cleanup complete. Before: $BEFORE, Deleted: $DELETED, After: $AFTER" >> "$LOG_FILE"

# Alert if all backups were deleted (something is wrong)
if [ $AFTER -eq 0 ]; then
    echo "[$(date)] WARNING: No backups remaining!" >> "$LOG_FILE"
    echo "WARNING: Backup cleanup deleted ALL backups on $(hostname)" | mail -s "BACKUP CLEANUP WARNING" admin@example.com
fi

Make it executable:

chmod +x /usr/local/bin/cleanup-old-backups.sh

Add Cleanup to Cron

Run cleanup daily at 4 AM (after backups complete):

0 4 * * * /usr/local/bin/cleanup-old-backups.sh >> /var/log/backup-cleanup-cron.log 2>&1

Smart Retention Strategy

Instead of flat 30-day retention, keep:

  • Daily backups for the last 7 days
  • Weekly backups for the last 4 weeks
  • Monthly backups for the last 12 months

This script implements tiered retention:

#!/bin/bash
# File: /usr/local/bin/smart-backup-retention.sh
# Note: relies on GNU date's -r flag (file modification time)

BACKUP_DIR="/backups/mysql"

# Backups 7-30 days old: keep only the weekly (Sunday) ones
find "$BACKUP_DIR" -name "*.sql.gz" -type f -mtime +7 -mtime -30 -exec bash -c '
    # Delete unless taken on a Sunday (%u: 7 = Sunday)
    if [ "$(date -r "$1" +%u)" -ne 7 ]; then
        rm "$1"
    fi
' _ {} \;

# Backups 30-365 days old: keep only the monthly (1st-of-month) ones
find "$BACKUP_DIR" -name "*.sql.gz" -type f -mtime +30 -mtime -365 -exec bash -c '
    # Delete unless taken on the 1st of the month
    if [ "$(date -r "$1" +%d)" -ne 1 ]; then
        rm "$1"
    fi
' _ {} \;

# Delete anything older than 1 year
find "$BACKUP_DIR" -name "*.sql.gz" -type f -mtime +365 -delete
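The classification above hinges on two GNU date format codes, %u (ISO weekday) and %d (day of month), which can be checked against a known date:

```shell
# %u gives the ISO weekday (1 = Monday ... 7 = Sunday); %d the day of month
date -d '2025-01-05' +%u   # 2025-01-05 was a Sunday
date -d '2025-01-05' +%d
```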

Backup Verification and Monitoring

Creating backups is not enough. You must verify they work.

Test Restore Regularly

Once a month, restore a backup to a test database:

#!/bin/bash
# File: /usr/local/bin/test-restore.sh
# Note: in production, move the inline -p'password' used below into a
# root-owned ~/.my.cnf ([client] section, chmod 600)

# Get most recent MySQL backup
LATEST_BACKUP=$(ls -t /backups/mysql/*.sql.gz | head -1)

# Create test database
mysql -u root -p'password' -e "DROP DATABASE IF EXISTS test_restore; CREATE DATABASE test_restore;"

# Restore backup
zcat "$LATEST_BACKUP" | mysql -u root -p'password' test_restore

# Run basic checks
TABLE_COUNT=$(mysql -u root -p'password' -N -e "SELECT COUNT(*) FROM information_schema.tables WHERE table_schema='test_restore'")

echo "Restore test: $TABLE_COUNT tables restored from $LATEST_BACKUP"

# Cleanup
mysql -u root -p'password' -e "DROP DATABASE test_restore;"

Schedule this monthly:

0 5 1 * * /usr/local/bin/test-restore.sh >> /var/log/restore-test.log 2>&1

Monitor Backup Size

Sudden changes in backup size indicate problems:

#!/bin/bash
# File: /usr/local/bin/monitor-backup-size.sh

BACKUP_DIR="/backups/mysql"
LATEST_BACKUP=$(ls -t $BACKUP_DIR/*.sql.gz | head -1)
PREVIOUS_BACKUP=$(ls -t $BACKUP_DIR/*.sql.gz | head -2 | tail -1)

LATEST_SIZE=$(stat -f%z "$LATEST_BACKUP" 2>/dev/null || stat -c%s "$LATEST_BACKUP")
PREVIOUS_SIZE=$(stat -f%z "$PREVIOUS_BACKUP" 2>/dev/null || stat -c%s "$PREVIOUS_BACKUP")

# Calculate difference percentage
DIFF=$((100 * (LATEST_SIZE - PREVIOUS_SIZE) / PREVIOUS_SIZE))

# Alert if backup size changed by more than 50%
if [ ${DIFF#-} -gt 50 ]; then
    echo "WARNING: Backup size changed by $DIFF%" | mail -s "Backup Size Alert" admin@example.com
fi

Monitor Disk Space

Ensure backups don't fill your disk:

# Check available space before backup (df -Pk reports 1 KiB blocks)
AVAILABLE=$(df -Pk /backups | tail -1 | awk '{print $4}')
if [ "$AVAILABLE" -lt 10485760 ]; then  # Less than 10 GiB
    echo "ERROR: Low disk space for backups" | mail -s "BACKUP DISK SPACE ALERT" admin@example.com
    exit 1
fi

Complete Production Backup Setup

Here's a complete, production-ready setup combining everything:

Directory Structure

/backups/
├── mysql/
│   └── mydb_20250109_020000.sql.gz
├── postgres/
│   └── mydb_20250109_030000.dump
/usr/local/bin/
├── backup-mysql.sh
├── backup-postgres.sh
├── cleanup-old-backups.sh
└── test-restore.sh
/var/log/
├── mysql-backup.log
├── postgres-backup.log
└── backup-cleanup.log

Crontab Configuration

# Edit with: crontab -e

SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
MAILTO=admin@example.com

# MySQL backup every day at 2 AM
0 2 * * * /usr/local/bin/backup-mysql.sh >> /var/log/mysql-backup-cron.log 2>&1

# PostgreSQL backup every day at 3 AM
0 3 * * * /usr/local/bin/backup-postgres.sh >> /var/log/postgres-backup-cron.log 2>&1

# Cleanup old backups at 4 AM
0 4 * * * /usr/local/bin/cleanup-old-backups.sh >> /var/log/backup-cleanup-cron.log 2>&1

# Test restore on 1st of each month at 5 AM
0 5 1 * * /usr/local/bin/test-restore.sh >> /var/log/restore-test.log 2>&1

Permissions Setup

# Create backup user (MySQL)
# Note: on MySQL 8.0.21+, mysqldump also needs the PROCESS privilege
# (for tablespace info); either grant it or run with --no-tablespaces
mysql -u root -p <<EOF
CREATE USER 'backup_user'@'localhost' IDENTIFIED BY 'SecurePassword123';
GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON *.* TO 'backup_user'@'localhost';
FLUSH PRIVILEGES;
EOF

# Create backup user (PostgreSQL)
# Note: pg_read_all_data requires PostgreSQL 14+; on older versions,
# grant SELECT on the relevant schemas and tables instead
psql -U postgres <<EOF
CREATE USER backup_user WITH PASSWORD 'SecurePassword123';
GRANT pg_read_all_data TO backup_user;
EOF

# Set directory permissions
mkdir -p /backups/{mysql,postgres}
chown -R backup_user:backup_user /backups
chmod 700 /backups

Off-Site Backup Storage

Never keep backups only on the same server as your database.

If the server dies, you lose both database AND backups.

Copy Backups to Remote Server

#!/bin/bash
# File: /usr/local/bin/backup-to-remote.sh

REMOTE_HOST="backup-server.example.com"
REMOTE_USER="backup"
REMOTE_DIR="/remote-backups/myapp"
LOCAL_BACKUP_DIR="/backups"

# Sync backups to remote server
rsync -avz --delete \
    -e "ssh -i /root/.ssh/backup_key" \
    "$LOCAL_BACKUP_DIR/" \
    "$REMOTE_USER@$REMOTE_HOST:$REMOTE_DIR/"

Add to cron after backups complete:

0 5 * * * /usr/local/bin/backup-to-remote.sh >> /var/log/remote-backup.log 2>&1

Upload to Cloud Storage

AWS S3 example:

#!/bin/bash
# File: /usr/local/bin/backup-to-s3.sh

BACKUP_DIR="/backups/mysql"
S3_BUCKET="s3://my-database-backups"

# Upload today's backup to S3
aws s3 sync "$BACKUP_DIR" "$S3_BUCKET/mysql/" \
    --exclude "*" \
    --include "*$(date +%Y%m%d)*.sql.gz"

Final Checklist

Before you trust your automated backups:

  • Test backup creation - Run the script manually, verify the file exists
  • Test backup restore - Restore to a test database, verify the data
  • Verify password security - .my.cnf or .pgpass with 600 permissions
  • Check disk space - Ensure enough room for backups
  • Set up monitoring - Email alerts on failures
  • Test cron execution - Wait for the scheduled time, verify the backup runs
  • Configure retention - Don't fill your disk with old backups
  • Off-site storage - Copy backups to a remote server or cloud
  • Document the restore procedure - Write down the exact steps to restore
  • Schedule restore tests - Monthly verification that backups work

Conclusion

Database backups are not optional. They're the difference between a recoverable incident and a catastrophic failure.

The setup in this guide gives you:

  • ✅ Automated daily backups (no human intervention needed)
  • ✅ Timestamped files (never overwrite previous backups)
  • ✅ Compressed storage (70-90% space savings)
  • ✅ Error logging (know when something fails)
  • ✅ Automatic cleanup (don't fill your disk)
  • ✅ Secure password handling (no plaintext passwords in cron)

Your next steps:

  1. Create the backup scripts (copy from above)
  2. Test them manually
  3. Add to crontab
  4. Test the restore process
  5. Set up off-site backup storage
  6. Sleep better knowing your data is protected

Don't wait until disaster strikes. Set up automated backups today.

Need help building the perfect cron schedule for your backups? Use our Cron Expression Generator to create and validate your backup schedule with confidence!


