
PostgreSQL Cron Jobs with pg_cron: Complete Setup Guide (2025)

Run cron jobs directly inside PostgreSQL with pg_cron extension. Schedule SQL queries, maintenance tasks, and data processing without external scripts. Production-ready examples included.

13 min read
By Cron Generator Team

You need to run a SQL query every hour. Vacuum tables daily. Archive old data weekly.

The traditional approach: Write shell scripts, configure system cron, deal with database connections, handle authentication, debug PATH issues.

The better way: Run cron jobs directly inside PostgreSQL with pg_cron.

No external scripts. No cron syntax to remember. Just SQL that runs on schedule, managed entirely within your database.

This guide shows you exactly how to set up and use pg_cron for production PostgreSQL automation.

What is pg_cron?

pg_cron is a PostgreSQL extension that adds cron-like job scheduling directly to your database.

What you can do:

  • Schedule SQL queries to run periodically
  • Automate database maintenance (VACUUM, ANALYZE)
  • Archive old data automatically
  • Generate materialized views on schedule
  • Call stored procedures at specific times
  • Anything you can do in SQL

Why it's better than external cron:

  • ✅ No shell scripts needed
  • ✅ Database handles authentication
  • ✅ Jobs logged in database tables
  • ✅ Manage everything from SQL
  • ✅ No PATH or environment issues
  • ✅ Transactions work correctly

Installing pg_cron

On Ubuntu/Debian

# Add PostgreSQL repository (if needed)
# Note: apt-key is deprecated; use a signed-by keyring instead
sudo install -d /usr/share/postgresql-common/pgdg
sudo curl -o /usr/share/postgresql-common/pgdg/apt.postgresql.org.asc --fail https://www.postgresql.org/media/keys/ACCC4CF8.asc
sudo sh -c 'echo "deb [signed-by=/usr/share/postgresql-common/pgdg/apt.postgresql.org.asc] https://apt.postgresql.org/pub/repos/apt $(lsb_release -cs)-pgdg main" > /etc/apt/sources.list.d/pgdg.list'
sudo apt update

# Install pg_cron (the package name tracks your PostgreSQL major version)
sudo apt install postgresql-15-cron

# Or for PostgreSQL 14
sudo apt install postgresql-14-cron

On macOS (Homebrew)

# pg_cron is not packaged in Homebrew core; build it from source
# against your Homebrew PostgreSQL (make uses pg_config from PATH)
brew install postgresql@16
git clone https://github.com/citusdata/pg_cron.git
cd pg_cron
make
make install

On Docker

FROM postgres:15

# Install pg_cron
RUN apt-get update && apt-get install -y postgresql-15-cron

# Add to postgresql.conf
RUN echo "shared_preload_libraries = 'pg_cron'" >> /usr/share/postgresql/postgresql.conf.sample
RUN echo "cron.database_name = 'postgres'" >> /usr/share/postgresql/postgresql.conf.sample

Configuration

Edit postgresql.conf:

# Add pg_cron to shared libraries
shared_preload_libraries = 'pg_cron'

# Specify which database contains cron jobs
cron.database_name = 'postgres'

Restart PostgreSQL:

sudo systemctl restart postgresql

Enable the extension:

-- Connect to the database specified in cron.database_name
\c postgres

-- Create extension
CREATE EXTENSION pg_cron;

Verify installation:

SELECT * FROM cron.job;

If you see an empty table (no errors), you're ready!

Your First pg_cron Job

Let's start simple: run a query every minute.

-- Create a test table
CREATE TABLE cron_test (
    id SERIAL PRIMARY KEY,
    run_at TIMESTAMP DEFAULT NOW()
);

-- Schedule a job to insert a row every minute
SELECT cron.schedule(
    'test-job',           -- Job name
    '* * * * *',          -- Cron schedule (every minute)
    'INSERT INTO cron_test (run_at) VALUES (NOW())'
);

Check if it's running:

-- Wait a couple minutes, then check
SELECT * FROM cron_test ORDER BY run_at DESC LIMIT 10;

You should see new rows appearing every minute.

Stop the job:

SELECT cron.unschedule('test-job');
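You can also confirm the runs through pg_cron's own log, joining cron.job (which stores the job name) to cron.job_run_details:

```sql
-- Run history for the test job, newest first
SELECT r.status, r.return_message, r.start_time, r.end_time
FROM cron.job j
JOIN cron.job_run_details r ON r.jobid = j.jobid
WHERE j.jobname = 'test-job'
ORDER BY r.start_time DESC
LIMIT 5;
```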

Common pg_cron Patterns

1. Daily Database Maintenance

VACUUM and ANALYZE every night at 2 AM:

SELECT cron.schedule(
    'nightly-vacuum',
    '0 2 * * *',
    $$
    VACUUM ANALYZE;
    $$
);

Per-table maintenance:

-- Vacuum large table daily at 3 AM
SELECT cron.schedule(
    'vacuum-users-table',
    '0 3 * * *',
    'VACUUM ANALYZE users'
);

-- Vacuum analytics table at 4 AM
SELECT cron.schedule(
    'vacuum-analytics',
    '0 4 * * *',
    'VACUUM ANALYZE page_views'
);

2. Refresh Materialized Views

Update aggregated data every hour:

-- Create materialized view
CREATE MATERIALIZED VIEW hourly_sales_summary AS
SELECT 
    date_trunc('hour', created_at) as hour,
    COUNT(*) as order_count,
    SUM(total) as revenue
FROM orders
WHERE created_at > NOW() - INTERVAL '7 days'
GROUP BY date_trunc('hour', created_at);

-- REFRESH ... CONCURRENTLY requires a unique index on the view
CREATE UNIQUE INDEX ON hourly_sales_summary (hour);

-- Refresh every hour
SELECT cron.schedule(
    'refresh-sales-summary',
    '0 * * * *',
    'REFRESH MATERIALIZED VIEW CONCURRENTLY hourly_sales_summary'
);

Daily summary refresh:

-- Refresh daily starting at 6 AM. REFRESH ... CONCURRENTLY cannot run
-- inside a transaction block, and pg_cron executes a multi-statement
-- command as a single transaction -- so give each view its own job
SELECT cron.schedule('refresh-daily-revenue', '0 6 * * *',
    'REFRESH MATERIALIZED VIEW CONCURRENTLY daily_revenue');
SELECT cron.schedule('refresh-daily-user-stats', '10 6 * * *',
    'REFRESH MATERIALIZED VIEW CONCURRENTLY daily_user_stats');
SELECT cron.schedule('refresh-daily-product-sales', '20 6 * * *',
    'REFRESH MATERIALIZED VIEW CONCURRENTLY daily_product_sales');

3. Archive Old Data

Move old records to archive table weekly:

-- Create archive table
CREATE TABLE orders_archive (LIKE orders INCLUDING ALL);

-- Archive orders older than 1 year every Sunday at 1 AM
SELECT cron.schedule(
    'archive-old-orders',
    '0 1 * * 0',
    $$
    WITH archived AS (
        DELETE FROM orders
        WHERE created_at < NOW() - INTERVAL '1 year'
        RETURNING *
    )
    INSERT INTO orders_archive SELECT * FROM archived;
    $$
);

Delete old logs daily:

-- Delete logs older than 30 days every day at 4 AM
SELECT cron.schedule(
    'cleanup-old-logs',
    '0 4 * * *',
    'DELETE FROM application_logs WHERE created_at < NOW() - INTERVAL ''30 days'''
);

4. Generate Reports

Daily sales report:

-- Create reports table
CREATE TABLE daily_reports (
    id SERIAL PRIMARY KEY,
    report_date DATE,
    total_orders INT,
    total_revenue DECIMAL(10,2),
    avg_order_value DECIMAL(10,2),
    created_at TIMESTAMP DEFAULT NOW()
);

-- Generate report daily at 5 AM
SELECT cron.schedule(
    'daily-sales-report',
    '0 5 * * *',
    $$
    INSERT INTO daily_reports (report_date, total_orders, total_revenue, avg_order_value)
    SELECT 
        CURRENT_DATE - 1 as report_date,
        COUNT(*) as total_orders,
        SUM(total) as total_revenue,
        AVG(total) as avg_order_value
    FROM orders
    -- A range predicate (rather than DATE(created_at) = ...) lets an
    -- index on created_at be used
    WHERE created_at >= CURRENT_DATE - 1
    AND created_at < CURRENT_DATE;
    $$
);

5. Call Stored Procedures

Execute complex logic on schedule:

-- Create procedure
CREATE OR REPLACE PROCEDURE process_pending_payments()
LANGUAGE plpgsql
AS $$
BEGIN
    -- Update payment statuses
    UPDATE payments 
    SET status = 'expired'
    WHERE status = 'pending' 
    AND created_at < NOW() - INTERVAL '24 hours';
    
    -- Send notifications (via external queue)
    INSERT INTO notification_queue (type, user_id, message)
    SELECT 'payment_expired', user_id, 'Your payment has expired'
    FROM payments 
    WHERE status = 'expired' 
    AND notified_at IS NULL;
    
    -- Mark as notified
    UPDATE payments 
    SET notified_at = NOW()
    WHERE status = 'expired' 
    AND notified_at IS NULL;
END;
$$;

-- Run every 15 minutes
SELECT cron.schedule(
    'process-payments',
    '*/15 * * * *',
    'CALL process_pending_payments()'
);

6. Database Replication Checks

Monitor replication lag every 5 minutes:

CREATE TABLE replication_monitoring (
    id SERIAL PRIMARY KEY,
    lag_bytes BIGINT,
    lag_seconds INT,
    checked_at TIMESTAMP DEFAULT NOW()
);

-- Monitor replication every 5 minutes (run this on the primary)
SELECT cron.schedule(
    'monitor-replication',
    '*/5 * * * *',
    $$
    INSERT INTO replication_monitoring (lag_bytes, lag_seconds)
    SELECT 
        COALESCE(pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn), 0) as lag_bytes,
        COALESCE(EXTRACT(EPOCH FROM replay_lag), 0)::INT as lag_seconds
    FROM pg_stat_replication;
    $$
);

Managing pg_cron Jobs

List All Jobs

SELECT 
    jobid,
    jobname,
    schedule,
    command,
    nodename,
    nodeport,
    database,
    username,
    active
FROM cron.job
ORDER BY jobid;

View Job Execution History

-- Recent job runs
SELECT 
    jobid,
    runid,
    job_pid,
    database,
    username,
    command,
    status,
    return_message,
    start_time,
    end_time
FROM cron.job_run_details
ORDER BY start_time DESC
LIMIT 20;
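Note that cron.job_run_details grows without bound. The pg_cron README suggests pruning it with a scheduled job of its own; a sketch:

```sql
-- Purge run records older than 7 days, daily at noon
SELECT cron.schedule(
    'delete-job-run-details',
    '0 12 * * *',
    $$DELETE FROM cron.job_run_details WHERE end_time < NOW() - INTERVAL '7 days'$$
);
```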

Check Job Success Rate

-- Success rate per job
SELECT 
    j.jobid,
    j.schedule,
    j.command,
    COUNT(*) as total_runs,
    SUM(CASE WHEN r.status = 'succeeded' THEN 1 ELSE 0 END) as successes,
    SUM(CASE WHEN r.status = 'failed' THEN 1 ELSE 0 END) as failures,
    ROUND(100.0 * SUM(CASE WHEN r.status = 'succeeded' THEN 1 ELSE 0 END) / COUNT(*), 2) as success_rate
FROM cron.job j
LEFT JOIN cron.job_run_details r ON j.jobid = r.jobid
WHERE r.start_time > NOW() - INTERVAL '7 days'
GROUP BY j.jobid, j.schedule, j.command
ORDER BY success_rate ASC;

Find Failed Jobs

SELECT 
    j.jobid,
    j.command,
    r.status,
    r.return_message,
    r.start_time
FROM cron.job j
JOIN cron.job_run_details r ON j.jobid = r.jobid
WHERE r.status = 'failed'
AND r.start_time > NOW() - INTERVAL '24 hours'
ORDER BY r.start_time DESC;

Disable/Enable Jobs

-- Disable a job
UPDATE cron.job SET active = FALSE WHERE jobid = 1;

-- Enable a job
UPDATE cron.job SET active = TRUE WHERE jobid = 1;

Update Job Schedule

-- Calling cron.schedule with an existing job name replaces that
-- job's schedule and command
SELECT cron.schedule(
    'existing-job-name',
    '0 */6 * * *',  -- New schedule
    'SELECT process_data()'
);
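Newer pg_cron releases (1.4+) also provide cron.alter_job, which changes properties of an existing job by id without re-scheduling; the named-argument calls below assume a version that supports it:

```sql
-- Change only the schedule of job 1
SELECT cron.alter_job(job_id := 1, schedule := '0 */6 * * *');

-- It can toggle a job on and off as well
SELECT cron.alter_job(job_id := 1, active := false);
```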

Delete a Job

-- By job name
SELECT cron.unschedule('job-name');

-- By job ID
SELECT cron.unschedule(1);

Advanced Patterns

Run Job in Specific Database

By default, jobs run in the database specified in cron.database_name. To run in a different database:

-- Schedule job in specific database
SELECT cron.schedule_in_database(
    'cleanup-analytics',          -- Job name
    '0 3 * * *',                  -- Schedule
    'DELETE FROM events WHERE created_at < NOW() - INTERVAL ''90 days''',
    'analytics_db'                -- Target database
);

Conditional Job Execution

-- Only run if the condition is met. Note the distinct dollar-quote tags:
-- reusing $$ for both the job body and the DO block would break the quoting
SELECT cron.schedule(
    'conditional-job',
    '*/10 * * * *',
    $job$
    DO $body$
    BEGIN
        IF (SELECT COUNT(*) FROM pending_queue) > 1000 THEN
            PERFORM process_queue();
        END IF;
    END
    $body$;
    $job$
);

Chain Multiple Operations

SELECT cron.schedule(
    'nightly-maintenance',
    '0 2 * * *',
    $$
    -- Archive old data and log how many rows moved, in one statement
    WITH archived AS (
        DELETE FROM orders 
        WHERE created_at < NOW() - INTERVAL '1 year'
        RETURNING *
    ),
    inserted AS (
        INSERT INTO orders_archive
        SELECT * FROM archived
        RETURNING 1
    )
    INSERT INTO maintenance_log (task, rows_archived, completed_at)
    SELECT 'nightly-archive', COUNT(*), NOW() FROM inserted;
    $$
);

-- VACUUM and REFRESH ... CONCURRENTLY cannot run inside a transaction
-- block, and pg_cron runs a multi-statement command as one transaction,
-- so schedule those as separate single-statement jobs
SELECT cron.schedule('nightly-vacuum-orders', '30 2 * * *', 'VACUUM ANALYZE orders');
SELECT cron.schedule('nightly-vacuum-archive', '45 2 * * *', 'VACUUM ANALYZE orders_archive');
SELECT cron.schedule('refresh-monthly-sales', '0 3 * * *', 'REFRESH MATERIALIZED VIEW CONCURRENTLY monthly_sales');

Monitoring and Alerting

Create Monitoring Table

CREATE TABLE cron_monitoring (
    id SERIAL PRIMARY KEY,
    job_name TEXT,
    expected_duration_seconds INT,
    actual_duration_seconds INT,
    status TEXT,
    error_message TEXT,
    checked_at TIMESTAMP DEFAULT NOW()
);

-- Monitor job execution
SELECT cron.schedule(
    'monitor-cron-jobs',
    '*/5 * * * *',
    $$
    INSERT INTO cron_monitoring (job_name, actual_duration_seconds, status, error_message)
    SELECT 
        COALESCE(j.jobname, j.command) as job_name,
        EXTRACT(EPOCH FROM (r.end_time - r.start_time))::INT as duration,
        r.status,
        r.return_message
    FROM cron.job j
    JOIN cron.job_run_details r ON j.jobid = r.jobid
    WHERE r.end_time > NOW() - INTERVAL '5 minutes'
    AND r.end_time IS NOT NULL;
    $$
);

Alert on Failures

-- Create alert queue
CREATE TABLE alert_queue (
    id SERIAL PRIMARY KEY,
    alert_type TEXT,
    message TEXT,
    created_at TIMESTAMP DEFAULT NOW(),
    sent BOOLEAN DEFAULT FALSE
);

-- Check for failed jobs and create alerts
SELECT cron.schedule(
    'check-job-failures',
    '*/10 * * * *',
    $$
    INSERT INTO alert_queue (alert_type, message)
    SELECT 
        'cron_failure',
        'Job failed: ' || j.command || ' - Error: ' || COALESCE(r.return_message, '(no message)')
    FROM cron.job j
    JOIN cron.job_run_details r ON j.jobid = r.jobid
    WHERE r.status = 'failed'
    AND r.start_time > NOW() - INTERVAL '10 minutes'
    AND NOT EXISTS (
        SELECT 1 FROM alert_queue a
        WHERE a.message LIKE '%' || j.command || '%'
        AND a.created_at > NOW() - INTERVAL '1 hour'
    );
    $$
);

Production Best Practices

1. Use Timeouts

Prevent runaway queries:

-- pg_cron has no dedicated timeout setting; a database-level
-- statement_timeout applies to its sessions (and every other
-- session in that database)
ALTER DATABASE postgres SET statement_timeout = '5min';

-- Or per-job
SELECT cron.schedule(
    'long-running-task',
    '0 3 * * *',
    $$
    SET statement_timeout = '30min';
    CALL expensive_procedure();
    $$
);

2. Handle Concurrency

Prevent overlapping executions:

-- Use advisory locks. Note the distinct dollar-quote tags: reusing $$
-- for both the job body and the DO block would break the quoting
SELECT cron.schedule(
    'exclusive-task',
    '*/5 * * * *',
    $job$
    DO $body$
    DECLARE
        lock_obtained BOOLEAN;
    BEGIN
        -- Try to get lock (don't wait)
        SELECT pg_try_advisory_lock(12345) INTO lock_obtained;
        
        IF lock_obtained THEN
            BEGIN
                -- Do work
                PERFORM heavy_processing();
                
                -- Release lock
                PERFORM pg_advisory_unlock(12345);
            EXCEPTION WHEN OTHERS THEN
                PERFORM pg_advisory_unlock(12345);
                RAISE;
            END;
        ELSE
            RAISE NOTICE 'Previous execution still running, skipping...';
        END IF;
    END
    $body$;
    $job$
);

3. Log Everything

CREATE TABLE cron_execution_log (
    id SERIAL PRIMARY KEY,
    job_name TEXT,
    started_at TIMESTAMP,
    completed_at TIMESTAMP,
    rows_affected INT,
    status TEXT,
    error_message TEXT
);

-- Logged execution. Note the distinct dollar-quote tags: reusing $$ for
-- both the job body and the DO block would break the quoting
SELECT cron.schedule(
    'logged-task',
    '0 * * * *',
    $job$
    DO $body$
    DECLARE
        v_start TIMESTAMP;
        v_rows INT;
    BEGIN
        v_start := NOW();
        
        DELETE FROM old_data WHERE created_at < NOW() - INTERVAL '30 days';
        GET DIAGNOSTICS v_rows = ROW_COUNT;
        
        INSERT INTO cron_execution_log 
        (job_name, started_at, completed_at, rows_affected, status)
        VALUES 
        ('logged-task', v_start, NOW(), v_rows, 'success');
        
    EXCEPTION WHEN OTHERS THEN
        INSERT INTO cron_execution_log 
        (job_name, started_at, completed_at, status, error_message)
        VALUES 
        ('logged-task', v_start, NOW(), 'failed', SQLERRM);
        -- Don't re-raise here: that would abort the transaction and roll
        -- back this log row. Find failures by querying cron_execution_log.
    END
    $body$;
    $job$
);

4. Test Before Scheduling

-- Test query first
SELECT process_pending_orders();

-- If it works, schedule it
SELECT cron.schedule(
    'process-orders',
    '*/15 * * * *',
    'SELECT process_pending_orders()'
);

Troubleshooting

Job Not Running

Check if job exists:

SELECT * FROM cron.job WHERE jobname = 'your-job-name';

Check if job is active:

SELECT active FROM cron.job WHERE jobid = 1;

Check execution history:

SELECT * FROM cron.job_run_details 
WHERE jobid = 1 
ORDER BY start_time DESC 
LIMIT 10;

Permission Errors

-- Grant access to the cron schema (pg_cron's row-level security
-- then limits users to managing their own jobs)
GRANT USAGE ON SCHEMA cron TO your_user;

Jobs Running as Wrong User

-- Jobs run as the user who scheduled them
SELECT username FROM cron.job WHERE jobid = 1;

-- Reschedule as correct user
SELECT cron.unschedule(1);
-- Log in as correct user, then schedule again

Comparison: pg_cron vs System Cron

| Feature | pg_cron | System Cron |
|---------|---------|-------------|
| SQL Queries | ✅ Native | ❌ Need psql wrapper |
| Database Auth | ✅ Automatic | ❌ Need .pgpass |
| Logging | ✅ Built-in tables | ❌ Manual setup |
| Management | ✅ SQL commands | ❌ crontab -e |
| Transactions | ✅ Proper handling | ⚠️ Can be tricky |
| Non-DB Tasks | ❌ Can't run | ✅ Any command |
| Monitoring | ✅ Query job_run_details | ❌ Check log files |
| Version Control | ✅ SQL migrations | ⚠️ Config files |

Use pg_cron when:

  • Task is database-related (99% of cases)
  • You want simple scheduling
  • You need transaction safety
  • You want queryable logs

Use system cron when:

  • Task involves non-database operations
  • Need to run system commands
  • Coordinating jobs across multiple servers
  • Need sub-minute precision (though pg_cron 1.5+ does add N-second schedules)
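For contrast, the system-cron equivalent of even a simple SQL job needs a psql wrapper and stored credentials. A sketch (the paths, connection details, and .pgpass entry are illustrative):

```
# crontab entry: run a query every 15 minutes via psql
# assumes ~/.pgpass contains: localhost:5432:mydb:app_user:secret
*/15 * * * * /usr/bin/psql -h localhost -U app_user -d mydb -c "SELECT process_pending_orders()" >> /var/log/cron-sql.log 2>&1
```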

Conclusion

pg_cron brings cron scheduling into PostgreSQL, making database automation simple and reliable.

What you can automate:

  • ✅ VACUUM and ANALYZE
  • ✅ Materialized view refreshes
  • ✅ Data archival and cleanup
  • ✅ Report generation
  • ✅ Stored procedure execution
  • ✅ Replication monitoring

Why it's better:

  • No external scripts
  • Database handles authentication
  • Queryable execution logs
  • Manage everything from SQL
  • Proper transaction handling

Your next steps:

  1. Install pg_cron extension
  2. Schedule your first job
  3. Monitor execution via cron.job_run_details
  4. Replace external cron scripts with pg_cron

Ready to schedule database tasks? Start with pg_cron, then build schedules with our Cron Expression Generator!


