
10 Essential Cron Jobs for Every Web Developer

Stop doing repetitive tasks manually. These 10 cron jobs will automate your most common web development tasks—from SSL renewals to cache clearing. Copy, paste, and reclaim your time.

By Cron Generator Team

You're a web developer. You've got better things to do than manually clear cache directories, check for broken links, or remember to renew SSL certificates at 2 AM.

Yet most developers do these tasks manually. They set reminders. They forget. Things break. Clients call.

There's a better way: automation.

This article gives you 10 copy-paste ready cron jobs that handle the repetitive tasks every web developer faces. Each one saves you time, prevents disasters, and makes you look like a pro.

No theory. Just practical commands you can set up in 5 minutes and forget about forever.
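A quick refresher before the list: every crontab entry below is five time fields followed by the command to run.

```shell
# ┌───────── minute (0-59)
# │ ┌─────── hour (0-23)
# │ │ ┌───── day of month (1-31)
# │ │ │ ┌─── month (1-12)
# │ │ │ │ ┌─ day of week (0-6, Sunday = 0)
# │ │ │ │ │
# * * * * *  command-to-run
```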

1. Auto-Renew SSL Certificates (Let's Encrypt)

The Pain: SSL certificates expire. Sites go down. Browsers show scary warnings. Clients panic.

The Solution: Let Certbot handle renewals automatically.

# Run twice daily at 2:30 AM and 2:30 PM
30 2,14 * * * /usr/bin/certbot renew --quiet --deploy-hook "systemctl reload nginx"

What This Does:

  • Checks all certificates twice daily
  • Renews any certificate expiring within 30 days
  • Reloads Nginx automatically after renewal
  • Runs silently (no output unless there's an error)

Why Twice Daily? Let's Encrypt recommends it for redundancy. If the morning run fails, the afternoon run catches it.

Pro Tip: Add email notifications on failure:

MAILTO=admin@example.com
30 2,14 * * * /usr/bin/certbot renew --quiet --deploy-hook "systemctl reload nginx" || echo "SSL renewal failed on $(hostname)"

"I should be doing that" moment: No more 3 AM emergency certificate renewals. Ever.

2. Clear Application Cache Daily

The Pain: Cache grows forever. Disk fills up. Site slows down. You manually SSH in to clear it.

The Solution: Automatic cache cleanup every night.

# Clear Laravel cache at 3 AM
0 3 * * * cd /var/www/myapp && php artisan cache:clear && php artisan view:clear

Framework-Specific Examples:

Laravel:

0 3 * * * cd /var/www/laravel-app && php artisan cache:clear && php artisan view:clear && php artisan route:clear

Symfony:

0 3 * * * cd /var/www/symfony-app && php bin/console cache:clear --env=prod --no-warmup

Django:

# Note: clear_cache is not a built-in Django command; it needs a package such as django-clearcache
0 3 * * * cd /var/www/django-app && source venv/bin/activate && python manage.py clear_cache

WordPress (object cache):

0 3 * * * wp cache flush --path=/var/www/wordpress

Node.js (Redis cache):

# Caution: FLUSHDB empties the entire current Redis database, sessions and queues included.
# Point it at a cache-only database, e.g. redis-cli -n 1 FLUSHDB
0 3 * * * redis-cli FLUSHDB

Why 3 AM? Low traffic time. Cache rebuilds before morning traffic surge.

Pro Tip: Add logging to track cache sizes:

0 3 * * * du -sh /var/www/myapp/storage/cache >> /var/log/cache-size.log && cd /var/www/myapp && php artisan cache:clear

3. Generate XML Sitemap for SEO

The Pain: You publish new content. Google doesn't know about it. Rankings suffer. You forget to regenerate the sitemap.

The Solution: Auto-generate sitemap after every content update.

# Generate sitemap daily at 4 AM
0 4 * * * /usr/local/bin/generate-sitemap.sh

Sitemap Script (WordPress example):

#!/bin/bash
# File: /usr/local/bin/generate-sitemap.sh

cd /var/www/wordpress
# Assumes an SEO plugin that registers a sitemap WP-CLI command;
# core WordPress 5.5+ serves /wp-sitemap.xml on its own.
wp sitemap generate --path=/var/www/wordpress
# Note: Google retired its sitemap ping endpoint in 2023, so no ping step is needed.
# Submit the sitemap once in Search Console and it will be re-crawled automatically.

Static Site (Node.js sitemap generator):

#!/bin/bash
cd /var/www/mysite
node generate-sitemap.js
echo "Sitemap generated at $(date)" >> /var/log/sitemap.log

Python/Django Sitemap:

# Note: generate_sitemap is not built into Django; django.contrib.sitemaps serves sitemaps
# dynamically, so a cron job is only needed if you pre-render the sitemap to a file.
0 4 * * * cd /var/www/django-app && source venv/bin/activate && python manage.py generate_sitemap

Why Daily? New content gets indexed faster, and the sitemap's lastmod dates stay accurate.

Pro Tip: Don't bother pinging search engines anymore. Google retired its sitemap ping endpoint in 2023, and Bing has moved to IndexNow. Instead, reference the sitemap in robots.txt so every crawler can find it:

# Add to robots.txt
Sitemap: https://yoursite.com/sitemap.xml

4. Check for Broken Links

The Pain: Dead links hurt SEO. Visitors hit 404s. You look unprofessional. You only discover broken links when users complain.

The Solution: Weekly broken link scan with email alerts.

# Check for broken links every Monday at 5 AM
0 5 * * 1 /usr/local/bin/check-broken-links.sh

Broken Link Checker Script:

#!/bin/bash
# File: /usr/local/bin/check-broken-links.sh

SITE_URL="https://yoursite.com"
REPORT_FILE="/var/log/broken-links-$(date +%Y%m%d).txt"
EMAIL="admin@example.com"

# Install: npm install -g broken-link-checker
blc "$SITE_URL" -ro --exclude linkedin.com --exclude facebook.com > "$REPORT_FILE"

# Count broken links
BROKEN_COUNT=$(grep -c "BROKEN" "$REPORT_FILE")

if [ "$BROKEN_COUNT" -gt 0 ]; then
    # -a attaches the report with s-nail/mailx; GNU mailutils uses -A instead
    echo "Found $BROKEN_COUNT broken links on $SITE_URL" | mail -s "⚠️ Broken Links Detected" -a "$REPORT_FILE" "$EMAIL"
fi

Alternative: Using wget:

#!/bin/bash
wget --spider --recursive --level=3 --no-verbose --output-file=/tmp/wget-log.txt https://yoursite.com
# Only send the report when something is actually broken
grep -B 2 '404' /tmp/wget-log.txt > /tmp/broken-links.txt
[ -s /tmp/broken-links.txt ] && mail -s "Broken Links Report" admin@example.com < /tmp/broken-links.txt

Why Weekly? Catches problems before they impact SEO. Monthly is too slow, daily is overkill.

Pro Tip: Exclude external links that frequently timeout:

blc "$SITE_URL" -ro --exclude linkedin.com --exclude facebook.com --exclude twitter.com --filter-level 3

5. Pull Latest Code from Git Repository

The Pain: You push to main. Production server still runs old code. You manually SSH and pull. Sometimes you forget.

The Solution: Auto-deploy from Git on schedule (or use webhooks for instant deploys).

# Pull from Git every 15 minutes (development server)
*/15 * * * * /usr/local/bin/git-auto-pull.sh >> /var/log/git-pull.log 2>&1

Git Auto-Pull Script:

#!/bin/bash
# File: /usr/local/bin/git-auto-pull.sh

APP_DIR="/var/www/myapp"
BRANCH="main"

cd "$APP_DIR" || exit 1

# Check for changes
git fetch origin "$BRANCH"

# Compare local and remote
LOCAL=$(git rev-parse HEAD)
REMOTE=$(git rev-parse "origin/$BRANCH")

if [ "$LOCAL" != "$REMOTE" ]; then
    echo "[$(date)] Changes detected. Pulling..."
    
    # Pull latest changes
    git pull origin "$BRANCH"
    
    # Run post-deploy commands
    # For Laravel:
    php artisan migrate --force
    php artisan config:cache
    php artisan route:cache
    php artisan view:cache
    
    # Reload PHP-FPM
    systemctl reload php8.2-fpm
    
    echo "[$(date)] Deploy complete."
else
    echo "[$(date)] No changes detected."
fi

Why Every 15 Minutes? Fast enough for rapid iteration. Slow enough to avoid constant deployments.

⚠️ WARNING: This is for development/staging servers ONLY. For production, use proper CI/CD pipelines (GitHub Actions, GitLab CI, Jenkins).
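One more safeguard: a pull that triggers migrations can easily outrun the 15-minute window, and two overlapping runs will step on each other. Wrapping the job in flock(1) guarantees only one copy runs at a time. A minimal demonstration of the idea (the lock path is an arbitrary choice):

```shell
#!/bin/bash
# flock demo: a second job refuses to start while the first still holds the lock.
# In the crontab this pattern becomes:
#   */15 * * * * flock -n /tmp/git-pull.lock /usr/local/bin/git-auto-pull.sh >> /var/log/git-pull.log 2>&1
LOCK=/tmp/git-pull-demo.lock      # arbitrary lock file path (assumption)

flock "$LOCK" sleep 3 &           # "job 1" acquires the lock and holds it ~3 seconds
sleep 1                           # give job 1 time to acquire it

# "job 2": -n means fail immediately instead of queueing behind job 1
if flock -n "$LOCK" true; then
    RESULT="acquired"
else
    RESULT="skipped"
fi
wait
echo "$RESULT"                    # job 1 still holds the lock, so this prints "skipped"
```

With `-n`, a run that finds the lock taken simply exits, so a slow deploy never stacks up behind itself.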

Pro Tip: Add Slack notifications on deploy:

# Send Slack notification
curl -X POST -H 'Content-type: application/json' \
--data '{"text":"🚀 Auto-deploy completed on staging server"}' \
https://hooks.slack.com/services/YOUR/SLACK/WEBHOOK

6. Database Optimization and Cleanup

The Pain: Database grows with junk data. Queries slow down. You manually run OPTIMIZE TABLE once a year.

The Solution: Weekly database maintenance.

# Optimize MySQL database every Sunday at 2 AM
0 2 * * 0 /usr/local/bin/optimize-database.sh

Database Optimization Script:

#!/bin/bash
# File: /usr/local/bin/optimize-database.sh

DB_NAME="myapp"
DB_USER="root"
DB_PASS="password"

# Optimize all tables
mysqlcheck -o -u "$DB_USER" -p"$DB_PASS" "$DB_NAME"

# Clean up old sessions (Laravel example)
mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "DELETE FROM sessions WHERE last_activity < UNIX_TIMESTAMP(DATE_SUB(NOW(), INTERVAL 7 DAY))"

# Clean up old logs
mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "DELETE FROM activity_log WHERE created_at < DATE_SUB(NOW(), INTERVAL 90 DAY)"

# Clean up failed jobs
mysql -u "$DB_USER" -p"$DB_PASS" "$DB_NAME" -e "DELETE FROM failed_jobs WHERE failed_at < DATE_SUB(NOW(), INTERVAL 30 DAY)"

echo "Database optimization complete at $(date)"

PostgreSQL Version:

#!/bin/bash
# PostgreSQL vacuum and analyze
PGPASSWORD=yourpassword psql -U postgres -d myapp -c "VACUUM ANALYZE;"

Why Sunday 2 AM? Lowest traffic time. Optimization can be CPU-intensive.

Pro Tip: Log table sizes before/after to track growth:

# Check table sizes
mysql -u root -p"$DB_PASS" -e "SELECT table_name, round(((data_length + index_length) / 1024 / 1024), 2) AS 'Size (MB)' FROM information_schema.TABLES WHERE table_schema = '$DB_NAME' ORDER BY (data_length + index_length) DESC;"
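One more hardening step for this script: passing -p"$DB_PASS" on the command line leaks the password into the process list (and mysql prints a warning about exactly that). A client option file avoids it. A sketch, assuming root runs the cron job:

```ini
# /root/.my.cnf (chmod 600 so only root can read it)
[client]
user=root
password=your-password-here
```

With this in place, `mysqlcheck -o myapp` and the `mysql` cleanup statements run without any `-u`/`-p` flags. PostgreSQL's `~/.pgpass` file serves the same purpose for the `psql` variant.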

7. Monitor Disk Space and Alert

The Pain: Disk fills up silently. Site crashes. Database writes fail. Backups stop working. You find out when users report errors.

The Solution: Hourly disk space monitoring with email alerts.

# Check disk space every hour
0 * * * * /usr/local/bin/check-disk-space.sh

Disk Space Monitor Script:

#!/bin/bash
# File: /usr/local/bin/check-disk-space.sh

THRESHOLD=80  # Alert at 80% usage
EMAIL="admin@example.com"

# Get disk usage percentage (root partition)
USAGE=$(df / | tail -1 | awk '{print $5}' | sed 's/%//')

if [ "$USAGE" -gt "$THRESHOLD" ]; then
    HOSTNAME=$(hostname)
    
    # Detailed disk usage report
    REPORT=$(df -h && echo -e "\n\nLargest directories:" && du -h / --max-depth=1 2>/dev/null | sort -hr | head -10)
    
    echo "$REPORT" | mail -s "🚨 DISK SPACE ALERT: $USAGE% used on $HOSTNAME" "$EMAIL"
fi

Why Hourly? Disk can fill quickly during traffic spikes or log storms. Hourly checks catch problems early.

Pro Tip: Add automatic cleanup before alerting:

# Clean common bloat before alerting
find /var/log -name "*.log" -mtime +30 -delete
find /tmp -type f -mtime +7 -delete  # -type f: don't remove directories apps may still use
apt-get clean  # For Ubuntu/Debian

8. Security Audit and Malware Scan

The Pain: Your site gets hacked. You don't notice for weeks. Malware spreads. Google blacklists you.

The Solution: Daily security scans with ClamAV or similar tools.

# Run security scan daily at 1 AM
0 1 * * * /usr/local/bin/security-scan.sh >> /var/log/security-scan.log 2>&1

Security Scan Script:

#!/bin/bash
# File: /usr/local/bin/security-scan.sh

SCAN_DIR="/var/www"
EMAIL="security@example.com"
LOG_FILE="/var/log/malware-scan-$(date +%Y%m%d).log"

# Update virus definitions
freshclam --quiet

# Scan web directories
clamscan -r -i "$SCAN_DIR" > "$LOG_FILE"

# Check for infected files
INFECTED=$(grep "Infected files:" "$LOG_FILE" | awk '{print $3}')

if [ "$INFECTED" != "0" ]; then
    cat "$LOG_FILE" | mail -s "🔴 MALWARE DETECTED on $(hostname)" "$EMAIL"
fi

# Check for suspicious PHP files modified in last 24 hours
find "$SCAN_DIR" -name "*.php" -mtime -1 -ls >> "$LOG_FILE"

# Check for suspicious eval() usage
grep -r "eval(" "$SCAN_DIR" --include="*.php" >> "$LOG_FILE"

Alternative: File Integrity Monitoring

#!/bin/bash
# Check if core files were modified (WordPress example)
wp core verify-checksums --path=/var/www/wordpress || \
    echo "WordPress core files modified!" | mail -s "Security Alert" admin@example.com

Why Daily? Security threats don't wait. Early detection = minimal damage.

Pro Tip: Monitor failed login attempts:

# Count failed SSH login attempts in the current hour
# (%e, not %d: syslog pads single-digit days with a space, e.g. "Oct  5")
FAILED=$(grep "Failed password" /var/log/auth.log | grep -c "$(date '+%b %e %H')")
if [ "$FAILED" -gt 10 ]; then
    echo "Too many failed login attempts: $FAILED" | mail -s "SSH Brute Force Attempt" admin@example.com
fi

9. Clean Up Old Log Files

The Pain: Log files grow forever. They consume 50GB. Disk fills up. You manually delete logs in a panic.

The Solution: Automatic log rotation and cleanup.

# Clean up logs older than 30 days, every day at midnight
0 0 * * * /usr/local/bin/cleanup-logs.sh

Log Cleanup Script:

#!/bin/bash
# File: /usr/local/bin/cleanup-logs.sh

LOG_DIRS=(
    "/var/log/nginx"
    "/var/log/apache2"
    "/var/www/myapp/storage/logs"
    "/var/log/mysql"
)

# Delete logs older than 30 days
for DIR in "${LOG_DIRS[@]}"; do
    if [ -d "$DIR" ]; then
        find "$DIR" -name "*.log" -mtime +30 -delete
        find "$DIR" -name "*.log.*" -mtime +30 -delete  # Rotated logs
        echo "Cleaned logs in $DIR"
    fi
done

# Compress logs older than 7 days (save space but keep for debugging)
for DIR in "${LOG_DIRS[@]}"; do
    if [ -d "$DIR" ]; then
        find "$DIR" -name "*.log" -mtime +7 -mtime -30 -exec gzip {} \;
    fi
done

# Truncate large active logs (don't delete, just empty)
find /var/log -name "*.log" -size +1G -exec truncate -s 0 {} \;

echo "Log cleanup complete at $(date)"

Why Midnight? Start each day fresh. Logs from 30+ days ago are rarely needed.

Pro Tip: Use logrotate for more sophisticated rotation:

# /etc/logrotate.d/custom-app
/var/www/myapp/storage/logs/*.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
}

10. Health Check and Uptime Monitoring

The Pain: Your site goes down. You don't know until customers complain. Revenue lost. Trust damaged.

The Solution: Health checks every 5 minutes with instant alerts.

# Check site health every 5 minutes
*/5 * * * * /usr/local/bin/health-check.sh

Health Check Script:

#!/bin/bash
# File: /usr/local/bin/health-check.sh

SITE_URL="https://yoursite.com"
HEALTH_ENDPOINT="https://yoursite.com/health"
EMAIL="oncall@example.com"
ALERT_FILE="/tmp/site-down-alert-sent"

# Check main site (-L follows redirects so an http→https or www redirect isn't a false alarm)
HTTP_CODE=$(curl -sL -o /dev/null -w "%{http_code}" "$SITE_URL")

if [ "$HTTP_CODE" != "200" ]; then
    # Site is down
    
    # Only send alert once (avoid spam)
    if [ ! -f "$ALERT_FILE" ]; then
        echo "Site returned HTTP $HTTP_CODE at $(date)" | mail -s "🚨 SITE DOWN: $SITE_URL" "$EMAIL"
        touch "$ALERT_FILE"
    fi
else
    # Site is up - remove alert flag
    rm -f "$ALERT_FILE"
fi

# Check health endpoint (for API/database status; requires jq)
HEALTH_STATUS=$(curl -s "$HEALTH_ENDPOINT" | jq -r '.status' 2>/dev/null)

if [ "$HEALTH_STATUS" != "ok" ]; then
    echo "Health check failed: $HEALTH_STATUS" | mail -s "⚠️ Health Check Failed" "$EMAIL"
fi

# Check database connectivity
if ! mysqladmin ping -h localhost --silent; then
    echo "MySQL is not responding" | mail -s "🔴 Database Down" "$EMAIL"
fi

# Check Redis
if ! redis-cli ping > /dev/null 2>&1; then
    echo "Redis is not responding" | mail -s "🔴 Redis Down" "$EMAIL"
fi
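
The jq check in this script assumes a /health endpoint that returns JSON with a top-level status field. Any shape works as long as .status reads "ok" when things are healthy, for example:

```json
{
  "status": "ok",
  "database": "up",
  "redis": "up"
}
```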

Why Every 5 Minutes? Fast enough to catch issues quickly. Not so fast that it hammers your server.

Pro Tip: Add response time monitoring:

# Measure response time
RESPONSE_TIME=$(curl -o /dev/null -s -w '%{time_total}' "$SITE_URL")

# Alert if response time > 3 seconds
if (( $(echo "$RESPONSE_TIME > 3.0" | bc -l) )); then
    echo "Slow response: ${RESPONSE_TIME}s" | mail -s "⚠️ Slow Site Performance" "$EMAIL"
fi

Even Better: Use dedicated monitoring services (better than cron for uptime):

  • UptimeRobot (free tier available)
  • Pingdom
  • Uptime.com
  • Better Uptime

But this cron job is perfect for internal health checks and database connectivity monitoring.

Bonus: Complete Web Developer Crontab

Here's all 10 cron jobs combined in one production-ready crontab:

# Edit with: crontab -e

SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
MAILTO=admin@example.com

# 1. SSL Certificate Auto-Renewal (twice daily)
30 2,14 * * * /usr/bin/certbot renew --quiet --deploy-hook "systemctl reload nginx"

# 2. Clear Application Cache (daily at 3 AM)
0 3 * * * cd /var/www/myapp && php artisan cache:clear && php artisan view:clear

# 3. Generate XML Sitemap (daily at 4 AM)
0 4 * * * /usr/local/bin/generate-sitemap.sh

# 4. Check for Broken Links (weekly on Monday at 5 AM)
0 5 * * 1 /usr/local/bin/check-broken-links.sh

# 5. Pull from Git (every 15 minutes - dev/staging only)
*/15 * * * * /usr/local/bin/git-auto-pull.sh >> /var/log/git-pull.log 2>&1

# 6. Database Optimization (weekly on Sunday at 2 AM)
0 2 * * 0 /usr/local/bin/optimize-database.sh

# 7. Disk Space Monitoring (hourly)
0 * * * * /usr/local/bin/check-disk-space.sh

# 8. Security Scan (daily at 1 AM)
0 1 * * * /usr/local/bin/security-scan.sh >> /var/log/security-scan.log 2>&1

# 9. Log Cleanup (daily at midnight)
0 0 * * * /usr/local/bin/cleanup-logs.sh

# 10. Health Check (every 5 minutes)
*/5 * * * * /usr/local/bin/health-check.sh

Quick Start: Get These Running in 10 Minutes

Step 1: Create the scripts directory

sudo mkdir -p /usr/local/bin
sudo mkdir -p /var/log

Step 2: Copy the scripts you need

Create each script file (copy from above), save to /usr/local/bin/, and make executable:

sudo nano /usr/local/bin/check-disk-space.sh
# Paste script content
# Save and exit (Ctrl+X, Y, Enter)

sudo chmod +x /usr/local/bin/check-disk-space.sh

Step 3: Test each script manually

/usr/local/bin/check-disk-space.sh

Verify it works before adding to cron.

Step 4: Add to crontab

crontab -e

Paste the cron jobs you want. Save and exit.

Step 5: Verify cron jobs are scheduled

crontab -l

Step 6: Monitor cron execution

tail -f /var/log/syslog | grep CRON

# On systemd-journal-only hosts: journalctl -u cron -f  (the unit is "crond" on RHEL-family systems)

Watch your cron jobs run in real-time.

Common Mistakes to Avoid

  • Using relative paths - Always use absolute paths in cron jobs
  • Forgetting to make scripts executable - chmod +x is required
  • No error logging - Always redirect output: >> /var/log/script.log 2>&1
  • Not testing scripts manually first - Test before scheduling
  • Hardcoding passwords - Use environment variables or config files
  • No email alerts - Set MAILTO in crontab
  • Running all jobs at midnight - Stagger times to avoid server load spikes
  • Not monitoring cron execution - Check /var/log/syslog regularly
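Several of these mistakes disappear with one habit: run every job through a small logging wrapper. A sketch (the log path and wrapper name are made up for illustration):

```shell
#!/bin/bash
# run_logged: run any command, appending timestamped stdout+stderr to one log file.
# In a crontab entry:  0 3 * * * /usr/local/bin/run-logged.sh /usr/local/bin/cleanup-logs.sh
LOG=/tmp/cron-demo.log            # would live under /var/log in real use (assumption)

run_logged() {
    {
        echo "[$(date '+%F %T')] START: $*"
        "$@"                       # run the job itself
        echo "[$(date '+%F %T')] EXIT $?: $*"
    } >> "$LOG" 2>&1
}

run_logged echo "hello from cron"
```

Every run now leaves a START/EXIT pair in the log, so a silently failing job shows up as a non-zero EXIT line instead of vanishing.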

Conclusion: Automate Everything

These 10 cron jobs handle the repetitive tasks that consume your time and create risk:

  • SSL renewals - Never manually renew again
  • Cache management - Site stays fast automatically
  • SEO optimization - Sitemaps always fresh
  • Link checking - Catch 404s before users do
  • Auto-deployment - Code deploys itself
  • Database health - Optimized and clean automatically
  • Disk monitoring - Catch space issues early
  • Security scanning - Detect threats daily
  • Log management - Never run out of space
  • Uptime monitoring - Know about problems instantly

The best part? Set these up once and forget about them. They run reliably, day after day, protecting your sites and freeing you to build new features.

Your next steps:

  1. Pick 2-3 jobs that solve your biggest pains right now
  2. Create the scripts (copy from above)
  3. Test them manually
  4. Add to crontab
  5. Monitor for 24 hours to ensure they work
  6. Add the rest when ready

Ready to build the perfect schedule for your automation? Use our Cron Expression Generator to create and test your cron schedules with confidence.



