
5 Real-World Cron Jobs to Automate Your Life (With Working Examples)

Stop doing repetitive tasks manually. These 5 copy-paste cron jobs handle backups, cleanup, monitoring, and reporting automatically—with production-ready scripts included.

12 min read
By Cron Generator Team

Tired of running the same commands every day? Forgetting to back up your database until it's too late? Manually cleaning up files that should delete themselves?

This isn't another tutorial with toy examples like "echo hello world." These are production-ready cron jobs that solve actual problems. Copy, customize, deploy.

Each example includes:

  • ✅ The complete cron syntax
  • ✅ A working script you can actually use
  • ✅ Explanation of what it does and why
  • ✅ Common pitfalls to avoid

Let's automate the boring stuff.

1. Automated Database Backups (Because Disasters Happen)

The Problem

Your database contains everything important: user data, transactions, configurations. If something goes wrong—disk failure, accidental DELETE, ransomware—you need backups.

But manual backups suck:

  • You forget to run them
  • You run them inconsistently
  • You don't clean up old backups (filling up disk)
  • They're never there when you need them

The Solution

Automated daily backups with automatic cleanup of old files.

Cron Schedule:

30 2 * * * /usr/local/bin/backup-mysql.sh

Runs every day at 2:30 AM (low traffic time).
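Field by field, that schedule reads:

```
30 2 * * *
│  │ │ │ └─ day of week (0-7; * = every day)
│  │ │ └─── month (1-12; * = every month)
│  │ └───── day of month (1-31; * = every day)
│  └─────── hour (2 = 2 AM)
└────────── minute (30)
```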

Script: /usr/local/bin/backup-mysql.sh

#!/bin/bash

# Configuration
DB_USER="root"
DB_PASSWORD="your_password_here"  # Better: Use ~/.my.cnf
DB_NAME="your_database"
BACKUP_DIR="/var/backups/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
RETENTION_DAYS=7

# Create backup directory if it doesn't exist
mkdir -p "$BACKUP_DIR"

# Create backup with timestamp (--single-transaction gives a consistent
# InnoDB snapshot without locking tables)
mysqldump --single-transaction -u "$DB_USER" -p"$DB_PASSWORD" "$DB_NAME" | gzip > "$BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz"

# Check mysqldump's own exit code; a plain $? after the pipe would report
# gzip's status and hide dump failures
if [ "${PIPESTATUS[0]}" -eq 0 ]; then
    echo "[$(date)] Backup successful: ${DB_NAME}_${DATE}.sql.gz"
    
    # Delete backups older than retention period
    find "$BACKUP_DIR" -name "${DB_NAME}_*.sql.gz" -type f -mtime +$RETENTION_DAYS -delete
    echo "[$(date)] Cleaned up backups older than $RETENTION_DAYS days"
else
    echo "[$(date)] ERROR: Backup failed!" >&2
    # Optional: Send alert email
    echo "Database backup failed on $(hostname)" | mail -s "BACKUP FAILED" admin@example.com
    exit 1
fi

Make it executable:

chmod +x /usr/local/bin/backup-mysql.sh

Security Best Practice

Never put passwords in scripts. Use MySQL's config file:

Create ~/.my.cnf:

[client]
user=root
password=your_password_here

Secure it:

chmod 600 ~/.my.cnf

Update script to use it:

mysqldump "$DB_NAME" | gzip > "$BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz"

PostgreSQL Version

#!/bin/bash
DB_NAME="your_database"
BACKUP_DIR="/var/backups/postgres"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p "$BACKUP_DIR"
pg_dump "$DB_NAME" | gzip > "$BACKUP_DIR/${DB_NAME}_${DATE}.sql.gz"

# Cleanup old backups
find "$BACKUP_DIR" -name "${DB_NAME}_*.sql.gz" -mtime +7 -delete

Cron entry:

30 2 * * * /usr/local/bin/backup-postgres.sh >> /var/log/postgres-backup.log 2>&1

Backup to Cloud Storage

Want off-site backups? Add this to your script:

# After creating local backup, sync to S3
aws s3 sync "$BACKUP_DIR" s3://your-bucket/mysql-backups/ --delete

# Or use rclone for any cloud provider
rclone sync "$BACKUP_DIR" remote:backups/mysql/

What Could Go Wrong?

  • Disk fills up: Set retention days and monitor disk space
  • Backup takes too long: Consider incremental backups or split large databases
  • Database locks: Use --single-transaction for InnoDB tables
  • Forgotten password rotation: Document where credentials are stored
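One more safeguard worth adding: a backup job that produces corrupt archives fails silently. A minimal sketch that verifies the newest dump actually decompresses (the helper name is ours, not part of the script above):

```shell
#!/bin/bash

# check_latest_backup DIR — verify the newest *.sql.gz in DIR with gzip -t
check_latest_backup() {
    local dir=$1 latest
    latest=$(ls -t "$dir"/*.sql.gz 2>/dev/null | head -n 1)
    if [ -z "$latest" ]; then
        echo "ERROR: no backups found in $dir" >&2
        return 1
    fi
    if gzip -t "$latest" 2>/dev/null; then
        echo "OK: $latest"
    else
        echo "ERROR: corrupt backup: $latest" >&2
        return 1
    fi
}

# In the backup script, after mysqldump:
# check_latest_backup "$BACKUP_DIR" || mail -s "Backup verification failed" admin@example.com
```

`gzip -t` only proves the archive is intact, not that the SQL inside restores cleanly; periodically restoring into a scratch database is the real test.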


2. Intelligent Cache Clearing (Speed Without Stale Data)

The Problem

Caches are beautiful: they make your app fast. But stale caches are a nightmare:

  • Users see outdated content
  • Changes don't appear
  • Memory usage grows unchecked

You need to clear caches regularly, but not so often that you lose the performance benefit.

The Solution

Clear different caches at different intervals based on how critical freshness is.

Cron Schedule (Multiple Jobs):

# Clear application cache every hour
0 * * * * /usr/local/bin/clear-app-cache.sh

# Clear browser cache assets every 6 hours
0 */6 * * * /usr/local/bin/clear-browser-cache.sh

# Clear compiled templates daily at 3 AM
0 3 * * * /usr/local/bin/clear-template-cache.sh

# Clear old session files daily at 4 AM
0 4 * * * /usr/local/bin/clear-old-sessions.sh

Script: /usr/local/bin/clear-app-cache.sh (Laravel Example)

#!/bin/bash

# For Laravel applications
cd /var/www/myapp || exit 1

# Clear application cache
php artisan cache:clear

# Clear route cache
php artisan route:clear

# Clear config cache
php artisan config:clear

# Clear view cache
php artisan view:clear

echo "[$(date)] Laravel cache cleared successfully"

Script: /usr/local/bin/clear-browser-cache.sh

#!/bin/bash

# Clear Redis cache (if using Redis)
redis-cli FLUSHDB

# Or clear Memcached (protocol commands are \r\n-terminated;
# -q 1 closes the connection after sending)
printf 'flush_all\r\n' | nc -q 1 localhost 11211

# Or clear file-based cache
find /var/cache/app -type f -mtime +1 -delete

echo "[$(date)] Browser cache cleared"

Script: /usr/local/bin/clear-old-sessions.sh

#!/bin/bash

# For PHP sessions older than 24 hours
find /var/lib/php/sessions -name 'sess_*' -type f -mtime +1 -delete

# For custom session storage
find /var/www/app/storage/sessions -type f -mtime +1 -delete

echo "[$(date)] Cleared sessions older than 24 hours"

Framework-Specific Examples

WordPress:

#!/bin/bash
cd /var/www/wordpress

# Clear object cache
wp cache flush --allow-root

# Clear transients older than 30 days
wp transient delete --expired --allow-root

Django:

#!/bin/bash
cd /var/www/django-app
source venv/bin/activate

# Clear expired sessions
python manage.py clearsessions

# Clear cache (clear_cache is not built into Django; it comes from the
# django-clear-cache package)
python manage.py clear_cache

Node.js/Next.js:

#!/bin/bash
cd /var/www/nextjs-app

# Clear Next.js build cache
rm -rf .next/cache

# Clear node_modules/.cache
rm -rf node_modules/.cache

Smart Cache Strategy

Not all caches should clear on the same schedule:

| Cache Type | Frequency | Reason |
|-----------|-----------|---------|
| API responses | 15 min - 1 hour | Data freshness critical |
| Static assets | Daily | Rarely change |
| Compiled templates | Daily | Only change on deploy |
| User sessions | Daily | Security + cleanup |
| Database query cache | Hourly | Balance speed + accuracy |

What Could Go Wrong?

  • Cache storm: All users hit the database at once after a cache clear. Stagger clearing or use cache warming.
  • Performance drop: Clearing too often defeats the purpose of caching.
  • Lost data: Some "cache" is actually persistent. Know what you're clearing!
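To blunt the cache-storm risk, each clearing script can start with a short random delay so multiple app servers behind a load balancer don't flush at the same instant. A minimal sketch; the 300-second cap is an arbitrary starting point, not a recommendation:

```shell
#!/bin/bash

# stagger MAX — sleep a random 0..MAX-1 seconds before doing real work
stagger() {
    local max=${1:-300}
    sleep $(( RANDOM % max ))
}

# At the top of clear-app-cache.sh and friends:
# stagger 300
```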


3. Automated Email Reports (Data While You Sleep)

The Problem

You need to check dashboards, run queries, export CSVs, and send reports to stakeholders. Doing this manually every morning wastes 30 minutes you'll never get back.

The Solution

Automated reports delivered to your inbox every morning before you start work.

Cron Schedule:

# Daily report at 7 AM (before work starts)
0 7 * * 1-5 /usr/local/bin/send-daily-report.sh

# Weekly summary every Monday at 8 AM
0 8 * * 1 /usr/local/bin/send-weekly-report.sh

# Monthly report on 1st of month at 9 AM
0 9 1 * * /usr/local/bin/send-monthly-report.sh

Script: /usr/local/bin/send-daily-report.sh

#!/bin/bash

# Configuration
REPORT_EMAIL="team@example.com"
DB_NAME="analytics"
TEMP_FILE="/tmp/daily_report_$(date +%Y%m%d).html"

# Get yesterday's date for report
YESTERDAY=$(date -d "yesterday" +%Y-%m-%d)

# Generate HTML report
cat > "$TEMP_FILE" <<EOF
<!DOCTYPE html>
<html>
<head>
    <style>
        body { font-family: Arial, sans-serif; }
        table { border-collapse: collapse; width: 100%; }
        th, td { border: 1px solid #ddd; padding: 8px; text-align: left; }
        th { background-color: #4CAF50; color: white; }
    </style>
</head>
<body>
    <h1>Daily Report - $YESTERDAY</h1>
    <h2>Key Metrics</h2>
    <table>
        <tr>
            <th>Metric</th>
            <th>Value</th>
        </tr>
EOF

# Query database and add to report
mysql -u root "$DB_NAME" -e "
    SELECT 'New Users', COUNT(*) 
    FROM users 
    WHERE DATE(created_at) = '$YESTERDAY'
    UNION ALL
    SELECT 'Total Orders', COUNT(*) 
    FROM orders 
    WHERE DATE(created_at) = '$YESTERDAY'
    UNION ALL
    SELECT 'Revenue', CONCAT('$', SUM(total)) 
    FROM orders 
    WHERE DATE(created_at) = '$YESTERDAY'
" -H >> "$TEMP_FILE"

cat >> "$TEMP_FILE" <<EOF
    </table>
</body>
</html>
EOF

# Send email with report
mail -s "Daily Report - $YESTERDAY" \
     -a "Content-Type: text/html" \
     "$REPORT_EMAIL" < "$TEMP_FILE"

# Cleanup
rm "$TEMP_FILE"

echo "[$(date)] Daily report sent to $REPORT_EMAIL"
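One portability note: the `date -d "yesterday"` call above is GNU date syntax. On BSD/macOS, `date` has no `-d` and uses `-v` offsets instead:

```shell
#!/bin/bash

# GNU date (Linux) accepts plain-English offsets:
YESTERDAY=$(date -d "yesterday" +%Y-%m-%d)
echo "$YESTERDAY"

# BSD/macOS equivalent (no -d flag; -v applies an adjustment):
# YESTERDAY=$(date -v-1d +%Y-%m-%d)
```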

Advanced: Reports with Charts

Use Python for more sophisticated reports:

Script: /usr/local/bin/send-weekly-report.py

#!/usr/bin/env python3
import pandas as pd
import matplotlib
matplotlib.use('Agg')  # headless backend; cron jobs have no display
import matplotlib.pyplot as plt
from datetime import datetime, timedelta
import smtplib
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage
import mysql.connector

# Database connection
db = mysql.connector.connect(
    host="localhost",
    user="root",
    password="password",
    database="analytics"
)

# Get last 7 days of data
query = """
    SELECT DATE(created_at) as date, COUNT(*) as count
    FROM users
    WHERE created_at >= DATE_SUB(CURDATE(), INTERVAL 7 DAY)
    GROUP BY DATE(created_at)
    ORDER BY date
"""

df = pd.read_sql(query, db)

# Create chart
plt.figure(figsize=(10, 6))
plt.plot(df['date'], df['count'], marker='o')
plt.title('New Users - Last 7 Days')
plt.xlabel('Date')
plt.ylabel('Count')
plt.xticks(rotation=45)
plt.tight_layout()
plt.savefig('/tmp/weekly_chart.png')

# Send email with attachment
msg = MIMEMultipart()
msg['From'] = 'reports@example.com'
msg['To'] = 'team@example.com'
msg['Subject'] = f'Weekly Report - {datetime.now().strftime("%Y-%m-%d")}'

body = f"""
<html>
<body>
    <h1>Weekly Report</h1>
    <p>Here's your user growth for the past week:</p>
    <img src="cid:chart">
    <h2>Summary</h2>
    <p>Total new users: {df['count'].sum()}</p>
    <p>Average per day: {df['count'].mean():.0f}</p>
</body>
</html>
"""

msg.attach(MIMEText(body, 'html'))

# Attach chart
with open('/tmp/weekly_chart.png', 'rb') as f:
    img = MIMEImage(f.read())
    img.add_header('Content-ID', '<chart>')
    msg.attach(img)

# Send
server = smtplib.SMTP('localhost', 25)
server.send_message(msg)
server.quit()

print(f"[{datetime.now()}] Weekly report sent")

Cron entry:

0 8 * * 1 /usr/local/bin/send-weekly-report.py >> /var/log/reports.log 2>&1

What Could Go Wrong?

  • Email gets marked as spam: Use proper SPF/DKIM records
  • Report query is slow: Optimize queries or use materialized views
  • Too much data in email: Link to a dashboard instead of embedding everything
  • Missing dependencies: Document required Python packages


4. System Health Monitoring (Know About Problems Before Users Do)

The Problem

Your server runs out of disk space at 3 AM. A critical service crashes but nobody notices until customers complain. SSL certificates expire and your site goes down.

You need automated monitoring that alerts you to problems before they become disasters.

The Solution

Automated health checks that notify you when something's wrong.

Cron Schedule:

# Check disk space every hour
0 * * * * /usr/local/bin/check-disk-space.sh

# Monitor critical services every 5 minutes
*/5 * * * * /usr/local/bin/check-services.sh

# Check SSL certificate expiration daily
0 9 * * * /usr/local/bin/check-ssl-expiry.sh

# Monitor website uptime every minute
* * * * * /usr/local/bin/check-website-up.sh

Script: /usr/local/bin/check-disk-space.sh

#!/bin/bash

# Alert if disk usage is above 80%
THRESHOLD=80
ALERT_EMAIL="admin@example.com"

# Check each mounted filesystem
df -P | grep -vE '^Filesystem|tmpfs|cdrom' | awk '{ print $5 " " $1 }' | while read -r output;
do
    usage=$(echo "$output" | awk '{ print $1 }' | sed 's/%//g')
    partition=$(echo "$output" | awk '{ print $2 }')

    if [ "$usage" -ge "$THRESHOLD" ]; then
        echo "WARNING: Disk space on $partition is at ${usage}%" | \
        mail -s "ALERT: High Disk Usage on $(hostname)" "$ALERT_EMAIL"
    fi
done

Script: /usr/local/bin/check-services.sh

#!/bin/bash

SERVICES=("nginx" "mysql" "redis-server" "php-fpm")
ALERT_EMAIL="admin@example.com"

for service in "${SERVICES[@]}"; do
    if ! systemctl is-active --quiet "$service"; then
        # Service is down, try to restart
        systemctl restart "$service"
        
        # Check if restart worked
        sleep 2
        if systemctl is-active --quiet "$service"; then
            echo "$service was down but has been restarted successfully" | \
            mail -s "Service Recovered: $service on $(hostname)" "$ALERT_EMAIL"
        else
            echo "CRITICAL: $service is down and restart failed!" | \
            mail -s "URGENT: $service DOWN on $(hostname)" "$ALERT_EMAIL"
        fi
    fi
done

Script: /usr/local/bin/check-ssl-expiry.sh

#!/bin/bash

DOMAIN="example.com"
ALERT_EMAIL="admin@example.com"
WARNING_DAYS=30

# Get certificate expiration date
expiry_date=$(echo | openssl s_client -servername "$DOMAIN" -connect "$DOMAIN:443" 2>/dev/null | \
              openssl x509 -noout -enddate | cut -d= -f2)

# Convert to epoch
expiry_epoch=$(date -d "$expiry_date" +%s)
current_epoch=$(date +%s)
days_until_expiry=$(( ($expiry_epoch - $current_epoch) / 86400 ))

if [ $days_until_expiry -le $WARNING_DAYS ]; then
    echo "SSL certificate for $DOMAIN expires in $days_until_expiry days!" | \
    mail -s "SSL Certificate Expiring Soon: $DOMAIN" "$ALERT_EMAIL"
fi

Script: /usr/local/bin/check-website-up.sh

#!/bin/bash

URL="https://example.com"
ALERT_EMAIL="admin@example.com"
LOCKFILE="/tmp/website_down_alert.lock"

# Check if website responds with 200 (cap the wait so every-minute runs don't pile up)
status_code=$(curl -o /dev/null -s --max-time 10 -w "%{http_code}" "$URL")

if [ "$status_code" != "200" ]; then
    # Only send alert once (not every minute)
    if [ ! -f "$LOCKFILE" ]; then
        echo "Website $URL returned status code $status_code" | \
        mail -s "URGENT: Website DOWN" "$ALERT_EMAIL"
        touch "$LOCKFILE"
    fi
else
    # Website is up, remove lockfile if it exists
    if [ -f "$LOCKFILE" ]; then
        echo "Website $URL is back online" | \
        mail -s "Website Recovered" "$ALERT_EMAIL"
        rm "$LOCKFILE"
    fi
fi

Advanced Monitoring with Metrics

Collect metrics to a time-series database:

#!/bin/bash
# Send metrics to InfluxDB or Prometheus

# CPU usage
cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | cut -d'%' -f1)

# Memory usage
mem_usage=$(free | grep Mem | awk '{print ($3/$2) * 100.0}')

# Send to monitoring system
curl -X POST 'http://influxdb:8086/write?db=metrics' \
     --data-binary "cpu,host=$(hostname) value=$cpu_usage"

What Could Go Wrong?

  • Alert fatigue: Too many alerts = ignored alerts. Tune thresholds carefully.
  • Email delays: Use SMS/Slack for critical alerts.
  • False positives: Add retry logic and grace periods.
  • Monitoring the monitor: Who watches the watchmen? Use external monitoring services.
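On the false-positive point: the uptime check above alerts after a single failed request. A small retry wrapper (illustrative, not part of the scripts above) lets transient blips pass before declaring the site down:

```shell
#!/bin/bash

# retry ATTEMPTS DELAY CMD... — run CMD up to ATTEMPTS times, DELAY seconds apart
retry() {
    local attempts=$1 delay=$2 i
    shift 2
    for (( i = 1; i <= attempts; i++ )); do
        if "$@"; then
            return 0
        fi
        if (( i < attempts )); then
            sleep "$delay"
        fi
    done
    return 1
}

# In check-website-up.sh, instead of a single curl:
# retry 3 10 curl -fsS --max-time 10 -o /dev/null "$URL" || echo "site down after 3 attempts"
```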


5. Automated Log Rotation and Cleanup (Before You Run Out of Space)

The Problem

Application logs grow forever. Eventually:

  • Your disk fills up (100% usage = system crash)
  • Logs become too large to analyze
  • Backups take forever because of multi-GB log files
  • You can't find recent errors in the noise

The Solution

Automated log rotation, compression, and cleanup.

Cron Schedule:

# Rotate logs daily at 2 AM
0 2 * * * /usr/local/bin/rotate-logs.sh

# Compress old logs weekly
0 3 * * 0 /usr/local/bin/compress-old-logs.sh

# Delete ancient logs monthly
0 4 1 * * /usr/local/bin/cleanup-old-logs.sh

Script: /usr/local/bin/rotate-logs.sh

#!/bin/bash

LOG_DIR="/var/log/myapp"
DATE=$(date +%Y%m%d)

# Rotate application logs
for log in "$LOG_DIR"/*.log; do
    if [ -f "$log" ]; then
        # Copy current log to dated file
        cp "$log" "${log}.${DATE}"
        
        # Truncate in place so the app's open file handle stays valid
        : > "$log"
        
        echo "[$(date)] Rotated: $log"
    fi
done

# Reload application to reopen log files (if needed)
# systemctl reload myapp

Script: /usr/local/bin/compress-old-logs.sh

#!/bin/bash

LOG_DIR="/var/log/myapp"

# Compress logs older than 1 day
find "$LOG_DIR" -name "*.log.20*" -type f -mtime +1 ! -name "*.gz" -exec gzip {} \;

echo "[$(date)] Compressed old logs in $LOG_DIR"

Script: /usr/local/bin/cleanup-old-logs.sh

#!/bin/bash

LOG_DIR="/var/log/myapp"
RETENTION_DAYS=90

# Delete compressed logs older than retention period
find "$LOG_DIR" -name "*.log.*.gz" -type f -mtime +$RETENTION_DAYS -delete

echo "[$(date)] Deleted logs older than $RETENTION_DAYS days"
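Before trusting any `-delete`, dry-run the identical `find` with `-print` alone. A thin wrapper (hypothetical helper, same predicate as the script above) that lists each file as it removes it:

```shell
#!/bin/bash

# purge_old DIR DAYS — delete *.gz under DIR older than DAYS days, listing each file
purge_old() {
    local dir=$1 days=$2
    find "$dir" -name "*.gz" -type f -mtime "+$days" -print -delete
}

# Dry run first: the same predicate with -print only, no -delete
# find /var/log/myapp -name "*.gz" -type f -mtime +90 -print
```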

Using logrotate (The Standard Tool)

Instead of custom scripts, use logrotate:

Create /etc/logrotate.d/myapp:

/var/log/myapp/*.log {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
    create 0640 www-data www-data
    sharedscripts
    postrotate
        systemctl reload myapp > /dev/null 2>&1 || true
    endscript
}

Explanation:

  • daily: Rotate every day
  • rotate 7: Keep 7 days of logs
  • compress: Gzip old logs
  • delaycompress: Don't compress yesterday's log (still being written to)
  • missingok: Don't error if log is missing
  • notifempty: Don't rotate empty logs
  • create: Permissions for new log file
  • postrotate: Reload service after rotation

Cron handles this automatically:

# logrotate already runs daily, via cron or a systemd timer depending on distro
ls /etc/cron.daily/logrotate
systemctl list-timers logrotate.timer

Nginx/Apache Log Rotation

Nginx:

/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 nginx nginx
    sharedscripts
    postrotate
        [ -f /var/run/nginx.pid ] && kill -USR1 `cat /var/run/nginx.pid`
    endscript
}

Apache:

/var/log/apache2/*.log {
    daily
    rotate 14
    compress
    delaycompress
    notifempty
    create 0640 root adm
    sharedscripts
    postrotate
        /etc/init.d/apache2 reload > /dev/null
    endscript
}

Smart Log Management

Different logs need different retention:

| Log Type | Retention | Reason |
|----------|-----------|---------|
| Access logs | 7-14 days | High volume, low value |
| Error logs | 30 days | Debugging historical issues |
| Security logs | 90-365 days | Compliance, forensics |
| Transaction logs | 7 years | Legal requirements |
| Debug logs | 3 days | Only for active debugging |

What Could Go Wrong?

  • Logs still growing: Check that rotation is actually running
  • Application can't write logs: Permission issues after rotation
  • Compressed logs needed urgently: Keep recent logs uncompressed
  • Running out of inodes: Too many small files; compress more aggressively


Putting It All Together

Here's a complete crontab with all five automations:

# Environment variables
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
MAILTO=admin@example.com

# Database backups - daily at 2:30 AM
30 2 * * * /usr/local/bin/backup-mysql.sh >> /var/log/backup.log 2>&1

# Cache clearing
0 * * * * /usr/local/bin/clear-app-cache.sh
0 */6 * * * /usr/local/bin/clear-browser-cache.sh
0 3 * * * /usr/local/bin/clear-template-cache.sh
0 4 * * * /usr/local/bin/clear-old-sessions.sh

# Email reports
0 7 * * 1-5 /usr/local/bin/send-daily-report.sh
0 8 * * 1 /usr/local/bin/send-weekly-report.sh
0 9 1 * * /usr/local/bin/send-monthly-report.sh

# Health monitoring
0 * * * * /usr/local/bin/check-disk-space.sh
*/5 * * * * /usr/local/bin/check-services.sh
0 9 * * * /usr/local/bin/check-ssl-expiry.sh
* * * * * /usr/local/bin/check-website-up.sh

# Log management
0 2 * * * /usr/local/bin/rotate-logs.sh
0 3 * * 0 /usr/local/bin/compress-old-logs.sh
0 4 1 * * /usr/local/bin/cleanup-old-logs.sh

Best Practices Checklist

Before deploying any cron job:

  • ✅ Test the script manually first
  • ✅ Use absolute paths everywhere
  • ✅ Add logging with timestamps
  • ✅ Handle errors gracefully
  • ✅ Set appropriate permissions (chmod +x)
  • ✅ Add email alerts for critical failures
  • ✅ Document what each job does
  • ✅ Use locking for long-running tasks
  • ✅ Monitor that jobs actually run
  • ✅ Validate cron syntax before deploying
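The locking item deserves a concrete example. `flock(1)` from util-linux makes a script skip its run when the previous one is still going, which matters for the every-minute and every-5-minute jobs above. A sketch, assuming `/tmp/myjob.lock` as a per-job lock path:

```shell
#!/bin/bash

# Open a dedicated file descriptor on the lock file, then try to grab
# an exclusive lock. -n = fail immediately instead of waiting.
exec 9> /tmp/myjob.lock
if ! flock -n 9; then
    echo "[$(date)] Previous run still in progress; skipping" >&2
    exit 0
fi

# ... long-running work goes here; the lock releases when the script exits ...
echo "lock acquired"
```

You can also wrap the command directly in the crontab, no script changes needed: `*/5 * * * * flock -n /tmp/check-services.lock /usr/local/bin/check-services.sh`.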

Create Your Own Automation

Need a cron job but not sure about the syntax?

Use our visual cron generator to:

  • Build expressions with dropdowns (no memorization needed)
  • See exactly when your job will run
  • Validate before deploying
  • Copy perfect syntax every time

Or paste your existing cron expressions into our decoder to verify they do what you think they do.

Stop repeating yourself. Start automating.



All scripts tested on Ubuntu 22.04 LTS. Adjust paths and commands for your specific system.