
Where Did My Cron Job Output Go? A Guide to Logging & Monitoring

Stop wondering if your cron jobs ran successfully. Learn professional logging practices, output redirection, log rotation, and monitoring tools to know exactly what happened and when.

11 min read
By Cron Generator Team

You set up a cron job. You wait. Time passes. Did it run? Did it succeed? Did it fail silently?

You have no idea.

This is the "fire and forget" approach to cron jobs. You configure them, hope they work, and only discover problems when something breaks catastrophically—often weeks or months later.

Professional developers don't work this way.

They log everything. They monitor execution. They get alerts when things fail. They can answer "Did the backup run last night?" in 5 seconds by checking a log file.

This guide teaches you how to move from amateur "hope it works" to professional "know it works" with proper logging, output redirection, log rotation, and monitoring.

The Black Hole: Where Cron Output Goes by Default

First, understand what happens to your cron job's output if you don't configure logging.

Default Behavior: Email to User

When a cron job produces output (anything written to stdout or stderr), the cron daemon tries to email it to the user.

Example cron job:

0 2 * * * /home/user/backup.sh

When backup.sh runs and prints:

Starting backup...
Backup completed successfully

What happens:

  1. Cron captures this output
  2. Cron tries to send it via local mail to the user
  3. If local mail is configured (rare on modern systems), you get an email
  4. If local mail is NOT configured (common), the output disappears into the void

The Silent Failure Problem

Most modern servers don't have local mail configured. This means:

❌ Output vanishes completely
❌ Errors disappear without a trace
❌ You have no record of execution
❌ You can't debug when things break
❌ You don't know if the job even ran

Example of a silent failure:

# Cron job runs at 2 AM
0 2 * * * /home/user/critical-backup.sh

The script fails with:

Error: Database connection refused
Backup failed: disk full
Permission denied: /var/backups/

You never see these errors. The job fails silently every night. You discover the problem three months later when you need to restore and realize there are no backups.

Checking Local Mail (Rarely Works)

On systems with local mail configured, check it with:

mail

Or:

cat /var/mail/$USER

But don't rely on this. It's not a professional solution. Most production systems don't have local mail configured, and even if they do, email isn't searchable, version-controlled, or easily automated.

The MAILTO Variable

You can configure where cron sends output:

# In crontab -e
MAILTO=admin@example.com

0 2 * * * /home/user/backup.sh

Problems with MAILTO:

  • ❌ Requires working email server configuration
  • ❌ Inbox gets flooded with successful job output
  • ❌ Important errors buried among noise
  • ❌ Can't search/analyze historical data
  • ❌ No log retention policy

Or disable email entirely:

MAILTO=""
0 2 * * * /home/user/backup.sh

This silences everything. Now you have zero visibility.

The professional solution: Explicit logging to files.

Redirecting Output: Taking Control

Stop relying on email. Redirect output to log files where you control format, location, and retention.

The Basic Syntax

command >> /path/to/logfile.log 2>&1

This is the single most important cron pattern to learn.

Let's break down exactly what each part means.

Breaking Down the Redirection Syntax

command >> /path/to/logfile.log 2>&1
│       │  │                    │
│       │  │                    └─ Redirect stderr (2) to stdout (1)
│       │  └────────────────────── Path to log file
│       └───────────────────────── Append operator
└───────────────────────────────── Your command

Part 1: >> (Append Operator)

>> = Append to file (creates file if it doesn't exist)

Comparison:

| Operator | Behavior | Use Case |
|----------|----------|----------|
| > | Overwrites file each time | Single execution, want latest result only |
| >> | Appends to file each time | Ongoing log, want history |

Example with > (overwrites):

0 2 * * * /home/user/backup.sh > /var/log/backup.log

Result: Each run replaces the entire log. You only see the most recent execution.

Example with >> (appends):

0 2 * * * /home/user/backup.sh >> /var/log/backup.log

Result: Each run adds to the log. You see complete history of all executions.

For cron jobs, almost always use >> to maintain history.
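
A quick way to see the difference in a terminal, using throwaway temp files:

```shell
# Compare > (overwrite) and >> (append) with two temp files
OVER=$(mktemp) && APP=$(mktemp)

echo "run 1" > "$OVER"    # first run
echo "run 2" > "$OVER"    # second run replaces the whole file

echo "run 1" >> "$APP"    # first run
echo "run 2" >> "$APP"    # second run adds a line

echo "overwrite kept $(grep -c '' "$OVER") line(s)"   # overwrite kept 1 line(s)
echo "append kept $(grep -c '' "$APP") line(s)"       # append kept 2 line(s)
rm -f "$OVER" "$APP"
```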

Part 2: 2>&1 (Redirect stderr to stdout)

Understanding file descriptors:

| Descriptor | Name | Meaning |
|------------|------|---------|
| 0 | stdin | Standard input |
| 1 | stdout | Standard output (normal output) |
| 2 | stderr | Standard error (error messages) |

2>&1 means: "Send stderr (2) to the same place as stdout (1)"

Without 2>&1:

command >> /var/log/output.log
  • ✅ Normal output goes to log file
  • ❌ Error messages still go to email/void

With 2>&1:

command >> /var/log/output.log 2>&1
  • ✅ Normal output goes to log file
  • ✅ Error messages go to log file
  • ✅ Everything in one place

Visual Example: What Gets Logged

Script that produces both stdout and stderr:

#!/bin/bash
echo "Starting backup..."          # stdout
echo "ERROR: Disk full" >&2         # stderr
echo "Backup completed"             # stdout

Cron job WITHOUT 2>&1:

0 2 * * * /home/user/backup.sh >> /var/log/backup.log

Log file contents:

Starting backup...
Backup completed

The error message disappeared! It went to email/void.

Cron job WITH 2>&1:

0 2 * * * /home/user/backup.sh >> /var/log/backup.log 2>&1

Log file contents:

Starting backup...
ERROR: Disk full
Backup completed

Everything is captured. This is what you want.
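
You can reproduce this behavior locally without waiting for cron, using a small function that writes to both streams:

```shell
# demo() mimics backup.sh above: one line to stdout, one to stderr
demo() { echo "normal line"; echo "ERROR line" >&2; }

LOG=$(mktemp)

demo >> "$LOG"         # without 2>&1: "ERROR line" escapes to the terminal
demo >> "$LOG" 2>&1    # with 2>&1: both lines are captured

grep -c "ERROR line" "$LOG"   # prints 1: only the second call logged it
rm -f "$LOG"
```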

Alternative: Separate Logs for stdout and stderr

For more granular control:

0 2 * * * /home/user/backup.sh >> /var/log/backup.log 2>> /var/log/backup-error.log
  • Normal output → /var/log/backup.log
  • Errors → /var/log/backup-error.log

Use case: Quickly check if there were any errors by checking file size:

# If error log is empty, no problems occurred
ls -lh /var/log/backup-error.log
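
That check is easy to automate with the shell's -s test, which succeeds only when a file exists and is non-empty (a temp file stands in for the real path here):

```shell
ERR_LOG=$(mktemp)   # stand-in for /var/log/backup-error.log

if [ -s "$ERR_LOG" ]; then
    echo "errors detected, investigate $ERR_LOG"
else
    echo "error log empty, all clear"
fi
# prints: error log empty, all clear

echo "Permission denied: /var/backups/" >> "$ERR_LOG"

[ -s "$ERR_LOG" ] && echo "errors detected after a failed run"
# prints: errors detected after a failed run
rm -f "$ERR_LOG"
```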

Discarding Output Selectively

Discard stdout, keep stderr:

0 2 * * * /home/user/backup.sh > /dev/null 2>> /var/log/backup-errors.log

Discard stderr, keep stdout:

0 2 * * * /home/user/backup.sh >> /var/log/backup.log 2> /dev/null

Discard everything (not recommended):

0 2 * * * /home/user/backup.sh > /dev/null 2>&1

This is the "I don't care what happens" approach. Avoid unless the job is truly non-critical and extremely noisy.

Complete Cron Job with Logging

Professional pattern:

# In crontab -e
SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
MAILTO=""

# Database backup with full logging
0 2 * * * /usr/local/bin/backup-database.sh >> /var/log/backup.log 2>&1

# Clear cache with error-only logging
0 3 * * * /usr/local/bin/clear-cache.sh > /dev/null 2>> /var/log/cache-errors.log

# Generate reports with timestamped logging
0 4 * * * /usr/local/bin/generate-reports.sh >> /var/log/reports-$(date +\%Y\%m\%d).log 2>&1

Note the variables at the top:

  • SHELL=/bin/bash - Ensures bash features work
  • PATH=... - Defines where to find commands
  • MAILTO="" - Disables email (we're using file logging)

Best Practices: Professional Logging

Now that you understand redirection, implement these professional practices.

Practice 1: Add Timestamps to Log Entries

Basic logging (no timestamps):

#!/bin/bash
echo "Starting backup"
# ... backup commands ...
echo "Backup complete"

Log output:

Starting backup
Backup complete

Problem: You can't tell when this ran or how long it took.

Professional logging (with timestamps):

#!/bin/bash
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"
}

log "Starting backup"
# ... backup commands ...
log "Backup complete"

Log output:

[2025-01-09 02:00:01] Starting backup
[2025-01-09 02:05:47] Backup complete

Now you know: Backup started at 2:00 AM and completed in ~6 minutes.

Practice 2: Log Success AND Failure

Amateur approach:

#!/bin/bash
/usr/bin/mysqldump -u root mydb > /backups/db.sql

If this fails, you get no indication.

Professional approach:

#!/bin/bash
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*"
}

log "Starting database backup"

if /usr/bin/mysqldump -u root mydb > /backups/db.sql; then   # stderr flows to the cron log, not into the dump
    log "SUCCESS: Database backup completed"
    log "Backup size: $(du -h /backups/db.sql | cut -f1)"
else
    log "ERROR: Database backup failed with exit code $?"
    exit 1
fi

log "Backup process finished"

Log output on success:

[2025-01-09 02:00:01] Starting database backup
[2025-01-09 02:05:47] SUCCESS: Database backup completed
[2025-01-09 02:05:47] Backup size: 245M
[2025-01-09 02:05:47] Backup process finished

Log output on failure:

[2025-01-09 02:00:01] Starting database backup
[2025-01-09 02:00:03] ERROR: Database backup failed with exit code 1

Now you can grep for "ERROR" or "SUCCESS" to quickly audit results.
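
For example, against a log in the format above (shown with a temp file so you can try it anywhere):

```shell
LOG=$(mktemp)   # stand-in for /var/log/backup.log
printf '%s\n' \
    "[2025-01-08 02:00:01] SUCCESS: Database backup completed" \
    "[2025-01-09 02:00:03] ERROR: Database backup failed with exit code 1" >> "$LOG"

grep -c "ERROR" "$LOG"              # prints 1: one recorded failure
grep "SUCCESS" "$LOG" | tail -n 5   # the most recent successful runs
rm -f "$LOG"
```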

Practice 3: Log Rotation to Prevent Disk Filling

Logs grow forever. A job that runs every 5 minutes executes 105,120 times per year (12 runs/hour × 24 hours × 365 days), and every run appends output. Left unchecked, this will fill your disk.

Solution: Log rotation

Option A: Use logrotate (Recommended)

Create a logrotate configuration:

sudo nano /etc/logrotate.d/custom-cron

Configuration:

/var/log/backup.log {
    daily
    rotate 30
    compress
    delaycompress
    missingok
    notifempty
    create 0640 root adm
}

What this does:

| Directive | Meaning |
|-----------|---------|
| daily | Rotate logs once per day |
| rotate 30 | Keep 30 days of logs (delete older) |
| compress | Compress old logs with gzip |
| delaycompress | Don't compress the most recent rotation |
| missingok | Don't error if log file doesn't exist |
| notifempty | Don't rotate if log is empty |
| create 0640 root adm | Create new log with these permissions |

Result: You keep 30 days of logs, compressed to save space, then old logs are automatically deleted.

Other rotation frequencies:

weekly      # Rotate once per week
monthly     # Rotate once per month
size 100M   # Rotate when file reaches 100MB
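
Before relying on a new configuration, test it. logrotate's -d flag performs a dry run that prints planned actions without touching any files, and -f forces a rotation immediately (using the example path from above):

```shell
# Dry run: show what logrotate would do, change nothing (safe to repeat)
sudo logrotate -d /etc/logrotate.d/custom-cron

# Force a rotation now to verify the whole cycle end to end
sudo logrotate -f /etc/logrotate.d/custom-cron
```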

Option B: Date-Based Log Files

Create a new log file each day:

# In crontab
0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup-$(date +\%Y-\%m-\%d).log 2>&1

Note the escaped %: In a crontab line, an unescaped % is treated as a newline, and everything after the first % is fed to the command as standard input. Always write \% inside a crontab entry.

Result: One log file per day:

backup-2025-01-09.log
backup-2025-01-10.log
backup-2025-01-11.log

Then clean up old logs:

# Delete logs older than 30 days
0 3 * * * find /var/log -name "backup-*.log" -mtime +30 -delete

Option C: Limit Log File Size

Keep only the most recent N lines:

#!/bin/bash
LOG_FILE="/var/log/backup.log"
MAX_LINES=10000

# Your script output
echo "[$(date)] Starting backup"
# ... backup commands ...

# Trim log file to last 10,000 lines
tail -n $MAX_LINES "$LOG_FILE" > "$LOG_FILE.tmp" && mv "$LOG_FILE.tmp" "$LOG_FILE"

Or in the cron job itself:

0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1; tail -n 10000 /var/log/backup.log > /var/log/backup.log.tmp && mv /var/log/backup.log.tmp /var/log/backup.log

Practice 4: Structured Logging for Easy Parsing

Use consistent log formats that can be parsed by tools.

JSON logging:

#!/bin/bash
log_json() {
    local level=$1
    local message=$2
    echo "{\"timestamp\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\",\"level\":\"$level\",\"message\":\"$message\"}"
}

log_json "INFO" "Starting backup"
# ... backup commands ...
log_json "SUCCESS" "Backup completed"

Output:

{"timestamp":"2025-01-09T02:00:01Z","level":"INFO","message":"Starting backup"}
{"timestamp":"2025-01-09T02:05:47Z","level":"SUCCESS","message":"Backup completed"}

Benefits:

  • Easy to parse with jq or log analysis tools
  • Can be ingested by Elasticsearch, Splunk, etc.
  • Filterable by level, timestamp, or message
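
For instance, if jq is available, filtering the entries above is a one-liner:

```shell
LOG=$(mktemp)   # stand-in for the JSON log file
cat >> "$LOG" <<'EOF'
{"timestamp":"2025-01-09T02:00:01Z","level":"INFO","message":"Starting backup"}
{"timestamp":"2025-01-09T02:00:03Z","level":"ERROR","message":"Disk full"}
EOF

# Print the message of every ERROR entry
jq -r 'select(.level == "ERROR") | .message' "$LOG"   # prints: Disk full
rm -f "$LOG"
```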

Syslog format:

#!/bin/bash
logger -t backup-script "Starting database backup"
# ... backup commands ...
logger -t backup-script "Backup completed successfully"

This writes to system syslog (/var/log/syslog or /var/log/messages), integrated with system logs.

Practice 5: Include Context in Logs

Poor logging:

Backup started
Backup completed

Professional logging:

[2025-01-09 02:00:01] Backup started for database: production_db
[2025-01-09 02:00:01] Backup target: /backups/prod_db_20250109_020001.sql.gz
[2025-01-09 02:05:47] Backup completed successfully
[2025-01-09 02:05:47] Backup size: 245MB
[2025-01-09 02:05:47] Backup location: /backups/prod_db_20250109_020001.sql.gz
[2025-01-09 02:05:47] Execution time: 346 seconds

Context to include:

  • ✅ What resource/database/service
  • ✅ Input parameters or configuration
  • ✅ Output location or file names
  • ✅ File sizes or record counts
  • ✅ Execution duration
  • ✅ Exit codes or status
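
Execution duration, in particular, costs only two lines of shell arithmetic:

```shell
START_TIME=$(date +%s)   # seconds since epoch, before the work
sleep 1                  # stand-in for the real job
END_TIME=$(date +%s)

echo "Execution time: $((END_TIME - START_TIME)) seconds"
```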

Practice 6: Create a Logging Helper Library

Instead of duplicating logging code, create a reusable library:

# File: /usr/local/lib/cron-logger.sh
#!/bin/bash

LOG_FILE="${LOG_FILE:-/var/log/cron-jobs.log}"
LOG_LEVEL="${LOG_LEVEL:-INFO}"

log() {
    local level=$1
    shift
    local message="$*"
    local timestamp=$(date '+%Y-%m-%d %H:%M:%S')
    echo "[$timestamp] [$level] $message" >> "$LOG_FILE"
}

log_info() { log "INFO" "$@"; }
log_success() { log "SUCCESS" "$@"; }
log_warning() { log "WARNING" "$@"; }
log_error() { log "ERROR" "$@"; }

log_start() {
    log "INFO" "=== Starting: $1 ==="
}

log_end() {
    # Accepts an optional explicit exit code as $2; otherwise uses $?,
    # the status of whatever command ran immediately before this call.
    local exit_code=${2:-$?}
    if [ "$exit_code" -eq 0 ]; then
        log "SUCCESS" "=== Completed: $1 (exit code: $exit_code) ==="
    else
        log "ERROR" "=== Failed: $1 (exit code: $exit_code) ==="
    fi
    return "$exit_code"
}

Use in your scripts:

#!/bin/bash
source /usr/local/lib/cron-logger.sh

LOG_FILE="/var/log/backup.log"

log_start "Database Backup"

log_info "Connecting to database: production_db"

if /usr/bin/mysqldump -u root production_db > /backups/db.sql 2>> "$LOG_FILE"; then
    log_success "Database dump completed"
    log_info "Backup size: $(du -h /backups/db.sql | cut -f1)"
else
    # Call log_end first: it reads $?, which still holds mysqldump's exit code
    log_end "Database Backup"
    exit $?
fi

log_info "Compressing backup..."
gzip /backups/db.sql

log_end "Database Backup"

Monitoring Tools: Know When Things Fail

Logging is reactive (check logs after the fact). Monitoring is proactive (get alerted immediately when something fails).

Level 1: Email Alerts on Failure

Send email only when a job fails:

#!/bin/bash
LOG_FILE="/var/log/backup.log"

log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" | tee -a "$LOG_FILE"
}

log "Starting backup"

if /usr/bin/mysqldump -u root mydb > /backups/db.sql 2>> "$LOG_FILE"; then
    log "SUCCESS: Backup completed"
else
    log "ERROR: Backup failed"
    echo "Backup failed on $(hostname) at $(date)" | mail -s "BACKUP FAILURE" admin@example.com
    exit 1
fi

Or use cron's exit code:

# In crontab - email only on failure
MAILTO=admin@example.com

0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1 || echo "Backup script failed on $(hostname)"

The || echo ... part only executes if the script returns a non-zero exit code (failure).
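
The short-circuit behavior is easy to verify directly in a shell:

```shell
true  || echo "not printed: the left side succeeded"
false || echo "printed: the left side failed with a non-zero exit code"
```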

Level 2: Dead Man's Switch Monitoring

Services that expect a "ping" from your cron job. If they don't receive it, they alert you.

Healthchecks.io (Recommended)

1. Create a check on healthchecks.io

You get a unique URL like:

https://hc-ping.com/abc123-def456-ghi789

2. Ping it when your job succeeds:

#!/bin/bash
# Your backup script
/usr/bin/mysqldump -u root mydb > /backups/db.sql || exit 1

# Ping healthchecks.io only when the dump succeeded
curl -fsS -m 10 --retry 5 https://hc-ping.com/abc123-def456-ghi789

3. Configure the expected schedule:

Set "Every 1 day" in healthchecks.io dashboard.

4. Get alerted if ping doesn't arrive:

If your job fails or doesn't run, you don't ping healthchecks.io. After the grace period, you get an email/SMS/Slack alert.

Advanced: Report failures explicitly:

#!/bin/bash
HEALTHCHECK_URL="https://hc-ping.com/abc123-def456-ghi789"

if /usr/bin/mysqldump -u root mydb > /backups/db.sql; then   # keep stderr out of the dump file
    # Success - ping success endpoint
    curl -fsS -m 10 --retry 5 "$HEALTHCHECK_URL"
else
    # Failure - ping failure endpoint
    curl -fsS -m 10 --retry 5 "$HEALTHCHECK_URL/fail"
fi

Benefits:

  • ✅ Know immediately when jobs fail
  • ✅ Know if jobs stop running entirely
  • ✅ Visual dashboard of all cron jobs
  • ✅ Free tier available

Cronitor

Similar to Healthchecks.io but with more features:

# Install cronitor CLI
curl -s https://cronitor.io/install | sh

# Wrap your cron job
0 2 * * * cronitor exec abc123 /usr/local/bin/backup.sh

Cronitor monitors:

  • Execution time (alerts on slow jobs)
  • Exit codes (alerts on failures)
  • Missing executions (alerts if job doesn't run)

UptimeRobot / Better Uptime

Use their "Heartbeat" monitoring:

# Ping heartbeat URL after successful execution
curl https://heartbeat.uptimerobot.com/abc123

Level 3: Centralized Log Management

For larger infrastructures, aggregate all logs in one place.

Syslog to Remote Server

Send logs to a centralized syslog server:

#!/bin/bash
# Log locally AND to remote syslog
log_dual() {
    local message="$*"
    echo "[$(date)] $message" >> /var/log/backup.log
    logger -n syslog.example.com -P 514 -t backup-script "$message"
}

log_dual "Starting backup"
# ... backup commands ...
log_dual "Backup completed"

ELK Stack (Elasticsearch, Logstash, Kibana)

  1. Ship logs with Filebeat:
# /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/backup.log
    - /var/log/cron-*.log
  
output.elasticsearch:
  hosts: ["localhost:9200"]
  2. Query and visualize in Kibana

Cloud Logging Services

AWS CloudWatch:

# Install AWS CloudWatch agent
# Configure log group
# Logs automatically stream to CloudWatch

Datadog:

# Install Datadog agent
# Configure log collection
# View logs in Datadog dashboard

Splunk / Sumo Logic:

Similar setup—install agent, configure log paths, view in web dashboard.

Level 4: Metrics and Dashboards

Track cron job metrics over time:

Prometheus + Grafana:

Export metrics from your cron jobs:

#!/bin/bash
# File: /usr/local/bin/backup-with-metrics.sh

METRICS_FILE="/var/lib/node_exporter/textfile_collector/backup.prom"

START_TIME=$(date +%s)

if /usr/bin/mysqldump -u root mydb > /backups/db.sql; then   # keep stderr out of the dump file
    SUCCESS=1
else
    SUCCESS=0
fi

END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))
BACKUP_SIZE=$(stat -f%z /backups/db.sql 2>/dev/null || stat -c%s /backups/db.sql)  # -f%z is BSD/macOS, -c%s is GNU

# Write Prometheus metrics
cat > "$METRICS_FILE" <<EOF
# HELP backup_success Whether backup succeeded (1) or failed (0)
# TYPE backup_success gauge
backup_success $SUCCESS

# HELP backup_duration_seconds Time taken to complete backup
# TYPE backup_duration_seconds gauge
backup_duration_seconds $DURATION

# HELP backup_size_bytes Size of backup file in bytes
# TYPE backup_size_bytes gauge
backup_size_bytes $BACKUP_SIZE

# HELP backup_last_run_timestamp Unix timestamp of last backup
# TYPE backup_last_run_timestamp gauge
backup_last_run_timestamp $END_TIME
EOF

Then create Grafana dashboards:

  • Backup success rate over time
  • Average backup duration
  • Backup file size trends
  • Time since last successful backup

Complete Professional Cron Setup

Here's a production-ready example combining all best practices:

The Script

#!/bin/bash
# File: /usr/local/bin/backup-database.sh

set -euo pipefail  # Exit on errors, undefined variables, pipe failures

# Configuration
DB_NAME="production_db"
BACKUP_DIR="/var/backups/mysql"
LOG_FILE="/var/log/backup-database.log"
RETENTION_DAYS=30
HEALTHCHECK_URL="https://hc-ping.com/abc123-def456-ghi789"

# Logging function (appends to the log file only; the crontab's own
# redirect catches any stray output, so tee would double every line)
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $*" >> "$LOG_FILE"
}

# Start
START_TIME=$(date +%s)
BACKUP_FILE="$BACKUP_DIR/${DB_NAME}_$(date +%Y%m%d_%H%M%S).sql.gz"

log "========================================="
log "Starting database backup: $DB_NAME"
log "Backup destination: $BACKUP_FILE"

# Create backup directory
mkdir -p "$BACKUP_DIR"

# Perform backup
log "Dumping database..."
if /usr/bin/mysqldump \
    --single-transaction \
    --routines \
    --triggers \
    --events \
    "$DB_NAME" | gzip > "$BACKUP_FILE"; then
    
    log "SUCCESS: Database dump completed"
else
    log "ERROR: Database dump failed with exit code $?"
    curl -fsS -m 10 --retry 3 "$HEALTHCHECK_URL/fail"
    exit 1
fi

# Verify backup
if [ ! -s "$BACKUP_FILE" ]; then
    log "ERROR: Backup file is empty or missing"
    curl -fsS -m 10 --retry 3 "$HEALTHCHECK_URL/fail"
    exit 1
fi

BACKUP_SIZE=$(du -h "$BACKUP_FILE" | cut -f1)
log "Backup size: $BACKUP_SIZE"

# Cleanup old backups
log "Cleaning up backups older than $RETENTION_DAYS days..."
DELETED=$(find "$BACKUP_DIR" -name "${DB_NAME}_*.sql.gz" -mtime +$RETENTION_DAYS -delete -print | wc -l)
log "Deleted $DELETED old backup(s)"

# Calculate duration
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))
log "Backup completed in $DURATION seconds"

# Ping monitoring service
curl -fsS -m 10 --retry 3 "$HEALTHCHECK_URL" || log "WARNING: Failed to ping healthcheck"

log "========================================="

exit 0

The Crontab

# Edit with: crontab -e

SHELL=/bin/bash
PATH=/usr/local/bin:/usr/bin:/bin
MAILTO=""

# Database backup daily at 2 AM
0 2 * * * /usr/local/bin/backup-database.sh >> /var/log/backup-database.log 2>&1

The Logrotate Configuration

# File: /etc/logrotate.d/backup-database

/var/log/backup-database.log {
    daily
    rotate 60
    compress
    delaycompress
    missingok
    notifempty
    create 0640 root adm
}

What You Get

  • ✅ Complete logging - Every action logged with timestamps
  • ✅ Log rotation - Keeps 60 days, compressed
  • ✅ Error handling - Explicit success/failure checks
  • ✅ Monitoring - Healthchecks.io integration
  • ✅ Verification - Checks backup file isn't empty
  • ✅ Cleanup - Auto-deletes old backups
  • ✅ Metrics - Duration and size logged
  • ✅ Debuggable - Can trace exactly what happened

Troubleshooting: When Logs Don't Appear

Problem: You added logging but the log file is empty.

Check 1: File Permissions

# Can the cron user write to the log file?
ls -l /var/log/backup.log

# If file doesn't exist, create it and give the cron user ownership
touch /var/log/backup.log
chown youruser: /var/log/backup.log   # the account the cron job runs as
chmod 644 /var/log/backup.log         # avoid 666: world-writable logs are a security risk

Check 2: Directory Permissions

# Can the cron user write to the log directory?
ls -ld /var/log/

# /var/log is normally writable only by root. For non-root cron jobs,
# log to a directory that user owns instead:
mkdir -p "$HOME/logs"

Check 3: Script Syntax

# Test the redirection syntax
echo "test" >> /var/log/backup.log 2>&1
cat /var/log/backup.log

Check 4: Cron Execution

# Verify cron is running the job (Debian/Ubuntu)
grep CRON /var/log/syslog | grep backup

# On RHEL/CentOS/Fedora, cron logs to /var/log/cron instead
grep backup /var/log/cron

Check 5: Test as Cron User

# If cron runs as www-data, test as www-data
sudo -u www-data /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1

Quick Reference: Logging Patterns

| Pattern | Use Case |
|---------|----------|
| >> log.log 2>&1 | Append all output to log |
| > log.log 2>&1 | Overwrite log with latest output |
| >> out.log 2>> err.log | Separate logs for output and errors |
| > /dev/null 2>&1 | Discard all output (silence) |
| 2>> err.log > /dev/null | Log only errors |
| >> log-$(date +\%Y\%m\%d).log 2>&1 | Date-based log files |

Conclusion: From "Fire and Forget" to "Monitor and Confirm"

Amateur approach:

0 2 * * * /home/user/backup.sh

No logging. No monitoring. No confirmation. Hope it works.

Professional approach:

0 2 * * * /usr/local/bin/backup.sh >> /var/log/backup.log 2>&1

With proper logging:

  • ✅ Timestamped execution records
  • ✅ Success/failure tracking
  • ✅ Searchable history
  • ✅ Debugging capability

With monitoring:

  • ✅ Instant alerts on failure
  • ✅ Dead man's switch detection
  • ✅ Metrics and trends
  • ✅ Dashboard visibility

The transformation:

| Question | Amateur | Professional |
|----------|---------|--------------|
| "Did the backup run last night?" | "I think so?" | "Yes, at 2:00:14 AM" |
| "Did it succeed?" | "No idea" | "Yes, 245MB, 347 seconds" |
| "When did it last fail?" | "Unknown" | "Never in 60 days" |
| "Why did it fail?" | "Can't tell" | "Disk full, see log line 47" |

Your next steps:

  1. Add logging to all cron jobs: >> /var/log/jobname.log 2>&1
  2. Implement log rotation: Create /etc/logrotate.d/ configs
  3. Add timestamps: Use date in your scripts
  4. Set up monitoring: Use Healthchecks.io for critical jobs
  5. Review logs regularly: tail -f /var/log/*.log

Ready to build cron jobs with professional logging? Use our Cron Expression Generator to create reliable schedules with confidence.



