Docker PostgreSQL Backups: Production-Ready Strategy
A comprehensive guide to backing up PostgreSQL databases in Docker containers. Covers container identification, database discovery, compression, checksums, manifests, and retention policies with production-ready scripts.
I manage PostgreSQL databases in Docker across multiple setups—single instances, primary/replica pairs, and custom deployments. Every single one needs reliable backups, and I quickly realized that a one-size-fits-all approach doesn't work. Different container names, varying database configurations, and infrastructure requirements meant I needed a container-agnostic backup system that scales from personal projects to production environments.
This guide covers the complete backup strategy I've built and refined over the past year. It's production-ready, thoroughly tested, and designed to work with any PostgreSQL container regardless of naming or setup.
Why This Matters
Data loss in production is catastrophic. Whether from accidental deletion, hardware failure, or ransomware, you need recent, verified, and easily restorable backups. PostgreSQL in Docker adds complexity—databases live in containers that can be destroyed, migrated, or replaced. Your backup strategy must handle this volatility.
What you get with this approach:
- Container-agnostic: Works with postgres, my-postgres, pg-primary, or any container name
- Integrity verified: SHA256 checksums for all backup files
- Metadata tracked: JSON manifest with container info, PostgreSQL version, and file inventory
- Retention managed: Automatic cleanup of old backups
- Compression optimized: Multi-layer compression for minimal storage
- Production-ready: Complete error handling and logging
Quick Start: Your First Backup (3 Steps)
If you just want a working backup right now, follow these three steps. Full explanations come later.
Step 1: Identify your PostgreSQL container
docker ps --format 'table {{.Names}}\t{{.Image}}'
# Find the container running PostgreSQL (e.g., my-postgres)
Step 2: Verify database discovery
# Replace <container-name> with your actual container name
docker exec <container-name> psql -U postgres -d postgres -t -c \
"SELECT datname FROM pg_database WHERE datistemplate = false AND datname NOT IN ('postgres');"
Step 3: Create your first backup
# Create backup directory for today
mkdir -p /var/backups/postgresql/$(date +%Y-%m-%d)
# Backup a single database (replace <container-name> and mydb)
docker exec <container-name> pg_dump -U postgres -Fc -f /tmp/backup.sql mydb
docker cp <container-name>:/tmp/backup.sql - | gzip > /var/backups/postgresql/$(date +%Y-%m-%d)/<container-name>_mydb_$(date +%Y-%m-%dT%H%M%S).sql.gz
docker exec <container-name> rm -f /tmp/backup.sql
echo "Backup created successfully!"
That's it. You now have your first backup. For production setups with automation, retention policies, and comprehensive verification, continue reading.
Prerequisites
Before starting, ensure:
- Docker is installed and running
- PostgreSQL container is running and accessible
- You know the PostgreSQL container name (find with docker ps)
- You have write permissions to /var/backups/ (create if needed: sudo mkdir -p /var/backups/postgresql)
- At least 1GB free disk space per database
- sha256sum utility available (standard on Linux)
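The prerequisite list above can be scripted as a quick preflight gate. This is a sketch of my own, not part of the backup script later in this guide; the helper name preflight_check is an assumption:

```shell
# Preflight: verify the backup root is writable and each listed tool exists.
# Returns the number of problems found (0 means ready to run backups).
preflight_check() {
local backup_root="$1"; shift
local problems=0 tool
for tool in "$@"; do
if ! command -v "$tool" >/dev/null 2>&1; then
echo "[ERROR] Required tool not found: $tool"
problems=$((problems + 1))
fi
done
if [[ ! -d "$backup_root" || ! -w "$backup_root" ]]; then
echo "[ERROR] Backup root missing or not writable: $backup_root"
problems=$((problems + 1))
fi
return "$problems"
}

# Usage: preflight_check /var/backups/postgresql docker gzip sha256sum
```

Running it before the first backup catches missing tools and permission problems early, instead of halfway through a dump.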
Understanding the Backup Output Structure
All backups organize into date-based directories with checksums and metadata:
/var/backups/postgresql/
└── 2026-03-31/
├── manifest.json
├── manifest.json.sha256
├── my-postgres_masterdb_2026-03-31T143022.sql.gz
├── my-postgres_masterdb_2026-03-31T143022.sql.gz.sha256
├── my-postgres_appdb_2026-03-31T143022.sql.gz
└── my-postgres_appdb_2026-03-31T143022.sql.gz.sha256
Filename format: {container_name}_{dbname}_{YYYY-MM-DDTHHMMSS}.sql.gz
The container name is included so you can identify which PostgreSQL instance the backup came from—critical when managing multiple database servers.
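Because the naming convention is fixed, the components can be recovered from a filename with bash parameter expansion. A sketch (parse_backup_filename is my own helper, not part of the backup script): it strips fields from the right, so the container name may contain underscores, but the database name must not, since underscore is the field separator:

```shell
# Split {container}_{dbname}_{timestamp}.sql.gz into its three parts.
# Fields are stripped from the right-hand side: timestamp first, then the
# database name, leaving the container name (which may itself contain "_").
parse_backup_filename() {
local base="${1%.sql.gz}"        # drop the extension
local timestamp="${base##*_}"    # last underscore-delimited field
base="${base%_*}"
local db="${base##*_}"           # next field from the right
local container="${base%_*}"     # everything that remains
printf 'container=%s db=%s timestamp=%s\n' "$container" "$db" "$timestamp"
}

parse_backup_filename "myproject_postgres_1_appdb_2026-03-31T143022.sql.gz"
# container=myproject_postgres_1 db=appdb timestamp=2026-03-31T143022
```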
Step 1: Container Identification
First, discover which PostgreSQL container you need to back up.
# List all running containers with images
docker ps --format 'table {{.Names}}\t{{.Image}}\t{{.Status}}'
Look for containers running postgres, postgresql, or custom PostgreSQL images. Note the container name (first column)—this is what you'll pass to all backup operations.
Common container names you might see:
- postgres - Simple single instance
- my-postgres - Named instance
- pg-primary - Primary in an HA setup
- pg-replica - Replica in an HA setup
- postgres-db - Docker Compose generated
- myproject_postgres_1 - Docker Compose service name
- Any other name you assigned
For docker-compose setups:
# View running services and their container names
docker-compose ps
Example output:
NAME IMAGE
postgres postgres:17-alpine
postgres-replica postgres:16-bookworm
backup-db custom-postgres:15
Keep track of your container name—you'll use it in every backup operation.
Step 2: Database Discovery
Extract the list of user databases from your PostgreSQL container.
# Get all user databases (excludes templates and system databases)
get_postgres_databases() {
local container="${1:-postgres}"
local db_user="${2:-postgres}"
docker exec "$container" psql -U "$db_user" -d postgres -t -c \
"SELECT datname FROM pg_database
WHERE datistemplate = false
AND datname NOT IN ('postgres');" \
2>/dev/null | tr -d ' ' | grep -v '^$'
}
# Get database sizes for monitoring and planning
get_database_sizes() {
local container="${1:-postgres}"
local db_user="${2:-postgres}"
docker exec "$container" psql -U "$db_user" -d postgres -t -c \
"SELECT datname, pg_size_pretty(pg_database_size(datname))
FROM pg_database WHERE datistemplate = false;" \
2>/dev/null
}
# Usage - replace my-postgres with your container name
get_postgres_databases my-postgres postgres
# Output:
# masterdb
# appdb
# analytics
get_database_sizes my-postgres postgres
# Output:
# masterdb 45 MB
# appdb 120 MB
# analytics 2.3 GB
Understanding the query:
- datistemplate = false - Excludes template databases (PostgreSQL system databases)
- datname NOT IN ('postgres') - Excludes the default postgres database
- Together these return all user-created databases that need backing up
Save this list—you'll use it to determine which databases to back up.
Step 3: Database-Level Backup with pg_dump
Backup individual PostgreSQL databases using the custom format. This is the core of your backup strategy.
# Backup a single database
backup_database() {
local db_name="$1"
local backup_dir="$2"
local timestamp="$3"
local container="$4"
local db_user="${5:-postgres}"
local backup_file="${backup_dir}/${container}_${db_name}_${timestamp}.sql.gz"
# Create temporary backup inside container
local container_backup="/tmp/${db_name}_backup.sql"
# Use pg_dump with custom format (-Fc) for compression and selective restore
if docker exec "$container" pg_dump -U "$db_user" -Fc -f "$container_backup" "$db_name" 2>/dev/null; then
# Stream from container with gzip compression
docker cp "$container:$container_backup" - | gzip > "$backup_file"
docker exec "$container" rm -f "$container_backup"
# Verify backup file is not empty
if [[ -s "$backup_file" ]]; then
# Generate SHA256 checksum
sha256sum "$backup_file" > "${backup_file}.sha256"
local size=$(du -h "$backup_file" | cut -f1)
echo "[OK] Database backed up: $db_name ($size)"
return 0
else
echo "[ERROR] Backup file is empty: $backup_file"
rm -f "$backup_file"
return 1
fi
else
echo "[ERROR] pg_dump failed for database: $db_name"
return 1
fi
}
# Usage - replace my-postgres with your container name
backup_database masterdb /var/backups/postgresql/2026-03-31 2026-03-31T143022 my-postgres postgres
# Output: [OK] Database backed up: masterdb (45 MB)
pg_dump options explained:
- -U postgres - Connect as the postgres user (use your database user if different)
- -Fc - Custom format; provides compression and allows selective restoration
- -f /tmp/backup.sql - Output file path inside the container
- mydb - Database name to back up
Why multiple layers of compression?
- pg_dump -Fc (custom format) - PostgreSQL's binary compression
- docker cp wraps in TAR - Container copy format
- gzip wraps the TAR - Additional compression
Result: compact backup files. Note that the tar layer added by docker cp must be unwrapped before pg_restore can read the dump.
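Since docker cp writes a tar stream to stdout, restoring means peeling the layers back in reverse: gunzip the outer gzip, then extract the single file from the tar stream. A sketch (unwrap_backup is my own helper name, demonstrated here against a locally built archive shaped like a real backup):

```shell
# Reverse the layering: gunzip the outer gzip, then pull the single file
# out of the tar stream that docker cp produced, leaving the raw
# pg_dump custom-format dump ready for pg_restore.
unwrap_backup() {
local backup_file="$1"   # e.g. my-postgres_mydb_2026-03-31T143022.sql.gz
local out_file="$2"      # destination for the raw pg_dump output
gunzip -c "$backup_file" | tar -xOf - > "$out_file"
}

# Demo with a locally built archive shaped like a real backup stream:
printf 'fake pg_dump bytes' > /tmp/demo_backup.sql
tar -C /tmp -cf - demo_backup.sql | gzip > /tmp/demo.sql.gz
unwrap_backup /tmp/demo.sql.gz /tmp/restored.dump
cat /tmp/restored.dump   # prints: fake pg_dump bytes
```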
Step 4: Generate Manifest with Metadata
Create a JSON manifest tracking all backups with container and version information.
# Generate manifest.json with backup metadata
generate_manifest() {
local backup_dir="$1"
local timestamp="$2"
local container="$3"
local db_user="${4:-postgres}"
local db_name="${5:-postgres}"
local manifest_file="${backup_dir}/manifest.json"
# Get PostgreSQL version
local pg_version=$(docker exec "$container" psql -U "$db_user" -d "$db_name" -t -c \
"SELECT version();" 2>/dev/null | head -1 || echo "unknown")
# Create manifest JSON
cat > "$manifest_file" << EOF
{
"timestamp": "${timestamp}",
"hostname": "$(hostname)",
"container": "${container}",
"postgresql_version": "${pg_version}",
"backup_type": "postgresql_only",
"files": [
EOF
# Add all backup files to manifest
local first=true
for file in "$backup_dir"/*.sql.gz; do
[[ -f "$file" ]] || continue
local filename=$(basename "$file")
local checksum=$(sha256sum "$file" 2>/dev/null | cut -d' ' -f1)
if [[ "$first" == "true" ]]; then
first=false
else
echo "," >> "$manifest_file"
fi
printf ' {"name": "%s", "sha256": "%s"}' "$filename" "$checksum" >> "$manifest_file"
done
cat >> "$manifest_file" << EOF
],
"backup_info": {
"retention_days": 2
}
}
EOF
# Generate checksum for manifest itself
sha256sum "$manifest_file" > "${manifest_file}.sha256" 2>/dev/null || true
echo "[OK] Manifest created: manifest.json"
}
# Usage
generate_manifest /var/backups/postgresql/2026-03-31 2026-03-31T143022 my-postgres postgres postgres
Example manifest output:
{
"timestamp": "2026-03-31T143022",
"hostname": "production-01",
"container": "my-postgres",
"postgresql_version": "PostgreSQL 17.0 on x86_64-pc-linux-gnu",
"backup_type": "postgresql_only",
"files": [
{"name": "my-postgres_masterdb_2026-03-31T143022.sql.gz", "sha256": "abc123def456..."},
{"name": "my-postgres_appdb_2026-03-31T143022.sql.gz", "sha256": "def456abc123..."}
],
"backup_info": {
"retention_days": 2
}
}
Why the manifest matters:
- Documents which PostgreSQL version was backed up
- Provides timestamp and hostname for audit trails
- Lists all files with checksums for verification
- Container-agnostic—works with any PostgreSQL container name
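Because generate_manifest writes one file entry per line in a fixed format, you can cross-check the manifest against the files actually on disk with sed and sha256sum. This is my own sketch (verify_manifest_files is not part of the script above), and it relies on that exact line format; for manifests produced by any other tool you would want a real JSON parser such as jq instead:

```shell
# Re-derive "checksum  filename" lines from manifest.json and feed them to
# sha256sum --check. Relies on the one-entry-per-line format that
# generate_manifest emits; arbitrary JSON needs jq.
verify_manifest_files() {
local backup_dir="$1"
local manifest="${backup_dir}/manifest.json"
[[ -f "$manifest" ]] || { echo "[ERROR] No manifest.json in $backup_dir"; return 1; }
sed -n 's/.*"name": "\([^"]*\)", "sha256": "\([^"]*\)".*/\2  \1/p' "$manifest" \
| (cd "$backup_dir" && sha256sum --check -)
}

# Usage: verify_manifest_files /var/backups/postgresql/2026-03-31
```

This catches the case the per-file .sha256 check misses: a backup file that exists and matches its own checksum but was never recorded in the manifest, or vice versa.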
Step 5: Verify Backup Integrity
Verify all backups using SHA256 checksums to ensure files weren't corrupted.
# Verify all checksums in a backup directory
verify_backup_checksums() {
local backup_dir="$1"
local errors=0
echo "[INFO] Verifying checksums in: $backup_dir"
for checksum_file in "$backup_dir"/*.sha256; do
[[ -f "$checksum_file" ]] || continue
# Skip manifest checksum (verify separately)
[[ "$(basename "$checksum_file")" == "manifest.json.sha256" ]] && continue
local data_file="${checksum_file%.sha256}"
if [[ -f "$data_file" ]]; then
if sha256sum --check "$checksum_file" > /dev/null 2>&1; then
echo " [OK] $(basename "$data_file")"
else
echo " [FAILED] $(basename "$data_file")"
errors=$((errors + 1))
fi
else
echo " [MISSING] $(basename "$data_file")"
errors=$((errors + 1))
fi
done
if [[ $errors -eq 0 ]]; then
echo "[INFO] All checksums verified successfully"
return 0
else
echo "[ERROR] Verification failed: $errors errors"
return 1
fi
}
# Usage
verify_backup_checksums /var/backups/postgresql/2026-03-31
Step 6: Retention Policy
Automatically clean up old backups based on age to prevent disk space issues.
# Remove backups older than N days
cleanup_old_backups() {
local backup_root="${1:-/var/backups/postgresql}"
local retention_days="${2:-2}"
echo "[INFO] Cleaning up backups older than $retention_days days"
# Find and remove old backup directories
find "$backup_root" -maxdepth 1 -type d -name "????-??-??" \
-mtime +"$retention_days" \
-exec rm -rf {} \; 2>/dev/null || true
local remaining=$(find "$backup_root" -maxdepth 1 -type d -name "????-??-??" 2>/dev/null | wc -l)
echo "[INFO] Retention: kept last $retention_days days ($remaining backup directories)"
}
# Usage - keep last 7 days of backups
cleanup_old_backups /var/backups/postgresql 7
Retention Strategy:
- 2 days (default): Recover from recent mistakes or data corruption
- 7 days: Recover from accidental deletions
- 30 days: Investigate older data patterns
- 365 days+: Compliance and archival
Adjust the retention period based on your data criticality and storage capacity.
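One caveat with the find -mtime approach above: it keys on each directory's modification time, not the date encoded in its name, so a directory that gets touched after creation can outlive the policy. As a sketch, here is a name-based dry run (preview_retention_by_name is my own helper; it assumes GNU date, which supports date -d):

```shell
# List backup directories whose NAME-encoded date falls outside the
# retention window, independent of filesystem mtimes. Assumes GNU date.
preview_retention_by_name() {
local backup_root="${1:-/var/backups/postgresql}"
local retention_days="${2:-2}"
local cutoff dir name dir_epoch
cutoff=$(( $(date +%s) - retention_days * 86400 ))
for dir in "$backup_root"/????-??-??; do
[[ -d "$dir" ]] || continue
name=$(basename "$dir")
dir_epoch=$(date -d "$name" +%s 2>/dev/null) || continue
(( dir_epoch < cutoff )) && echo "would remove: $dir"
done
}

# Usage: preview_retention_by_name /var/backups/postgresql 7
```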
Step 7: List Available Backups
Browse available backups with verification status.
# List all backups with verification status
list_backups() {
local backup_root="${1:-/var/backups/postgresql}"
echo "[INFO] Available backups in $backup_root:"
if [[ ! -d "$backup_root" ]] || [[ -z "$(ls -A "$backup_root" 2>/dev/null)" ]]; then
echo " No backups found"
return
fi
for backup_dir in "$backup_root"/*/; do
[[ -d "$backup_dir" ]] || continue
local dir_name=$(basename "$backup_dir")
local file_count=$(find "$backup_dir" -maxdepth 1 -type f -name "*.sql.gz" | wc -l)
local total_size=$(du -sh "$backup_dir" 2>/dev/null | cut -f1)
# Check if verified
local verified="[NO]"
[[ -f "$backup_dir/manifest.json.sha256" ]] && verified="[YES]"
echo " $dir_name ($file_count files, $total_size) [verified: $verified]"
done
}
# Usage
list_backups /var/backups/postgresql
The Complete Production-Ready Backup Script
Here's a comprehensive, production-ready backup script that integrates everything above. This script is container-agnostic and works with any PostgreSQL container name.
#!/usr/bin/env bash
# scripts/backup/backup.sh
#
# Comprehensive backup system for PostgreSQL containers
# Auto-detects PostgreSQL containers and backs up all databases
# Works with any container name and setup (single, primary/replica, HA, etc.)
#
# Usage:
# ./backup.sh <container-name> # Run full backup for specific container
# ./backup.sh <container-name> --dry-run # Show what would be done
# ./backup.sh <container-name> --verify-only # Verify existing backup checksums
# ./backup.sh <container-name> --list # List available backups
#
# Output: /var/backups/postgresql/YYYY-MM-DD/
set -euo pipefail
# =============================================================================
# Configuration (defaults, can be overridden via environment)
# =============================================================================
BACKUP_ROOT="${BACKUP_ROOT:-/var/backups/postgresql}"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")/../.." && pwd)"
LOG_DIR="${SCRIPT_DIR}/logs"
RETENTION_DAYS="${BACKUP_RETENTION_DAYS:-2}"
# Database credentials (defaults, override with env vars)
DB_USER="${PG_USER:-postgres}"
DB_NAME="${PG_DEFAULT_DB:-postgres}"
# =============================================================================
# Helper Functions
# =============================================================================
log() {
local level="$1"
local message="$2"
echo "[$(date '+%Y-%m-%d %H:%M:%S')] [$level] $message"
}
log_info() {
log "INFO" "$1"
}
log_warn() {
log "WARN" "$1"
}
log_error() {
log "ERROR" "$1" >&2
}
log_debug() {
if [[ "${DEBUG:-0}" == "1" ]]; then
log "DEBUG" "$1"
fi
}
# Load environment from .env files
load_environment() {
local env_files=(
"${SCRIPT_DIR}/.env.prod"
"${SCRIPT_DIR}/.env"
)
for env_file in "${env_files[@]}"; do
if [[ -f "$env_file" ]]; then
log_debug "Loading environment from: $env_file"
set -a
source "$env_file"
set +a
break
fi
done
# Update defaults from environment
DB_USER="${PG_USER:-$DB_USER}"
DB_NAME="${PG_DEFAULT_DB:-$DB_NAME}"
}
# Verify PostgreSQL container is running
verify_postgres_container() {
local container="$1"
if ! docker ps --format '{{.Names}}' | grep -Fxq "$container"; then
log_error "PostgreSQL container not running: $container"
return 1
fi
# Verify psql is available in the container
if ! docker exec "$container" command -v psql &>/dev/null; then
log_error "psql not found in container: $container"
return 1
fi
return 0
}
# Get list of databases to backup
get_databases() {
local container="$1"
local db_user="${2:-$DB_USER}"
docker exec "$container" psql -U "$db_user" -d postgres -t -c \
"SELECT datname FROM pg_database WHERE datistemplate = false AND datname NOT IN ('postgres');" 2>/dev/null | \
tr -d ' ' | grep -v '^$'
}
# Backup a single PostgreSQL database using pg_dump
backup_database() {
local db_name="$1"
local backup_dir="$2"
local timestamp="$3"
local container="$4"
local db_user="${5:-$DB_USER}"
local backup_file="${backup_dir}/${container}_${db_name}_${timestamp}.sql.gz"
log_info "Backing up database: $db_name"
# Create temporary backup inside container
local container_backup="/tmp/${db_name}_backup.sql"
# Use pg_dump with custom format for better compression
if docker exec "$container" pg_dump -U "$db_user" -Fc -f "$container_backup" "$db_name" 2>/dev/null; then
# Copy from container to host with gzip compression
docker cp "$container:$container_backup" - | gzip > "$backup_file"
docker exec "$container" rm -f "$container_backup"
# Verify backup file is not empty
if [[ -s "$backup_file" ]]; then
# Calculate SHA256 checksum
sha256sum "$backup_file" > "${backup_file}.sha256"
local size
size=$(du -h "$backup_file" | cut -f1)
log_info " -> $backup_file ($size)"
return 0
else
log_error "Backup file is empty: $backup_file"
rm -f "$backup_file"
return 1
fi
else
log_error "pg_dump failed for database: $db_name"
return 1
fi
}
# Generate manifest.json
generate_manifest() {
local backup_dir="$1"
local timestamp="$2"
local container="$3"
local db_user="${4:-$DB_USER}"
local db_name="${5:-$DB_NAME}"
local manifest_file="${backup_dir}/manifest.json"
log_info "Generating manifest.json"
# Get container info
local pg_version
pg_version=$(docker exec "$container" psql -U "$db_user" -d "$db_name" -t -c "SELECT version();" 2>/dev/null | head -1 || echo "unknown")
# Start JSON
cat > "$manifest_file" << EOF
{
"timestamp": "${timestamp}",
"hostname": "$(hostname)",
"container": "${container}",
"postgresql_version": "${pg_version}",
"backup_type": "postgresql_only",
"files": [
EOF
# Add files to manifest
local first=true
for file in "$backup_dir"/*.sql.gz; do
[[ -f "$file" ]] || continue
local filename
filename=$(basename "$file")
local checksum
checksum=$(sha256sum "$file" 2>/dev/null | cut -d' ' -f1 || echo "none")
if [[ "$first" == "true" ]]; then
first=false
else
echo "," >> "$manifest_file"
fi
printf ' {"name": "%s", "sha256": "%s"}' "$filename" "$checksum" >> "$manifest_file"
done
cat >> "$manifest_file" << EOF
],
"backup_info": {
"retention_days": ${RETENTION_DAYS}
}
}
EOF
# Generate checksum for manifest
sha256sum "$manifest_file" > "${manifest_file}.sha256" 2>/dev/null || true
log_info " -> manifest.json created"
}
# Cleanup old backups (keep last RETENTION_DAYS)
cleanup_old_backups() {
log_info "Cleaning up backups older than $RETENTION_DAYS days"
# Find and remove old backup directories
find "$BACKUP_ROOT" -maxdepth 1 -type d -name "????-??-??" -mtime +"$RETENTION_DAYS" -exec rm -rf {} \; 2>/dev/null || true
local remaining
remaining=$(find "$BACKUP_ROOT" -maxdepth 1 -type d -name "????-??-??" 2>/dev/null | wc -l)
log_info "Retention: kept last $RETENTION_DAYS days ($remaining backup directories)"
}
# Verify checksums in a backup directory
verify_backup_checksums() {
local backup_dir="$1"
local errors=0
log_info "Verifying checksums in: $backup_dir"
# Check each checksum file
for checksum_file in "$backup_dir"/*.sha256; do
[[ -f "$checksum_file" ]] || continue
# Skip manifest checksum
[[ "$(basename "$checksum_file")" == "manifest.json.sha256" ]] && continue
local data_file="${checksum_file%.sha256}"
[[ -f "$data_file" ]] || {
log_error "Missing data file for: $checksum_file"
errors=$((errors + 1))
continue
}
if sha256sum --check "$checksum_file" > /dev/null 2>&1; then
log_info " OK: $(basename "$data_file")"
else
log_error " FAILED: $(basename "$data_file")"
errors=$((errors + 1))
fi
done
if [[ $errors -eq 0 ]]; then
log_info "All checksums verified successfully"
return 0
else
log_error "Verification failed: $errors errors"
return 1
fi
}
# List available backups
list_backups() {
log_info "Available backups in $BACKUP_ROOT:"
if [[ ! -d "$BACKUP_ROOT" ]] || [[ -z "$(ls -A "$BACKUP_ROOT" 2>/dev/null)" ]]; then
log_info " No backups found"
return
fi
for backup_dir in "$BACKUP_ROOT"/*/; do
[[ -d "$backup_dir" ]] || continue
local dir_name
dir_name=$(basename "$backup_dir")
# Show date and backup count
local file_count
file_count=$(find "$backup_dir" -maxdepth 1 -type f -name "*.sql.gz" | wc -l)
local total_size
total_size=$(du -sh "$backup_dir" 2>/dev/null | cut -f1)
# Check if verified
local verified="[NO]"
[[ -f "$backup_dir/manifest.json.sha256" ]] && verified="[YES]"
echo " $dir_name ($file_count files, $total_size) [verified: $verified]"
done
}
# =============================================================================
# Main Functions
# =============================================================================
run_backup() {
local container="$1"
local dry_run="${DRY_RUN:-false}"
if [[ "$dry_run" == "true" ]]; then
log_info "=== DRY RUN MODE - No changes will be made ==="
fi
# Load environment
load_environment
# Verify container is running
if ! verify_postgres_container "$container"; then
log_error "PostgreSQL container verification failed"
exit 1
fi
# Create timestamp and backup directory
local timestamp
timestamp=$(date '+%Y-%m-%dT%H%M%S')
local date_str
date_str=$(date '+%Y-%m-%d')
local backup_dir="${BACKUP_ROOT}/${date_str}"
log_info "Starting backup: $timestamp"
log_info "Container: $container"
log_info "Backup directory: $backup_dir"
if [[ "$dry_run" == "false" ]]; then
mkdir -p "$backup_dir"
chmod 755 "$backup_dir"
fi
local backup_count=0
local error_count=0
# Get list of databases
local databases
databases=$(get_databases "$container")
if [[ -z "$databases" ]]; then
log_warn "No databases found to backup (falling back to default: $DB_NAME)"
databases="$DB_NAME"
fi
# Backup each database
for db in $databases; do
[[ -n "$db" ]] || continue
if [[ "$dry_run" == "false" ]]; then
if backup_database "$db" "$backup_dir" "$timestamp" "$container"; then
backup_count=$((backup_count + 1))
else
error_count=$((error_count + 1))
fi
else
log_info "[DRY RUN] Would backup database: $db"
fi
done
# Generate manifest
if [[ "$dry_run" == "false" && $backup_count -gt 0 ]]; then
generate_manifest "$backup_dir" "$timestamp" "$container"
fi
# Cleanup old backups
if [[ "$dry_run" == "false" ]]; then
cleanup_old_backups
fi
log_info "Backup completed: $backup_count database(s) backed up, $error_count errors"
if [[ "$dry_run" == "true" ]]; then
log_info "=== DRY RUN COMPLETE ==="
fi
if [[ $error_count -gt 0 ]]; then
exit 1
fi
}
run_verify() {
local date_str
date_str=$(date '+%Y-%m-%d')
local backup_dir="${BACKUP_ROOT}/${date_str}"
if [[ ! -d "$backup_dir" ]]; then
log_error "No backup found for today: $backup_dir"
exit 1
fi
verify_backup_checksums "$backup_dir"
}
run_list() {
list_backups
}
# =============================================================================
# CLI Parsing
# =============================================================================
usage() {
cat << EOF
Usage: $(basename "$0") <container-name> [OPTIONS]
Comprehensive backup system for PostgreSQL containers
Works with any PostgreSQL container name and setup
OPTIONS:
--dry-run Show what would be done without making changes
--verify-only Verify checksums of existing backup
--list List available backups
-h, --help Show this help message
EXAMPLES:
$(basename "$0") my-postgres # Run full backup
$(basename "$0") pg-primary --dry-run # Preview backup actions
$(basename "$0") postgres --verify-only # Verify today's backup
$(basename "$0") db-prod --list # List all backups
OUTPUT:
Backup directory: /var/backups/postgresql/YYYY-MM-DD/
Files: {container_name}_{dbname}_{timestamp}.sql.gz
EOF
}
main() {
# Container name is required
if [[ $# -lt 1 ]]; then
log_error "Missing required argument: container-name"
usage
exit 1
fi
local container="$1"
shift
# Default values
local mode="backup"
export DRY_RUN="false"
# Parse arguments
while [[ $# -gt 0 ]]; do
case "$1" in
--dry-run)
export DRY_RUN="true"
shift
;;
--verify-only)
mode="verify"
shift
;;
--list)
mode="list"
shift
;;
-h|--help)
usage
exit 0
;;
*)
log_error "Unknown option: $1"
usage
exit 1
;;
esac
done
# Run in subshell to handle errors gracefully
(
case "$mode" in
backup)
run_backup "$container"
;;
verify)
run_verify
;;
list)
run_list
;;
esac
)
exit $?
}
main "$@"
Make this script executable:
chmod +x backup.sh
Usage:
# Run full backup for your container
./backup.sh my-postgres
# Preview what would happen
./backup.sh my-postgres --dry-run
# Verify today's backup
./backup.sh my-postgres --verify-only
# List all available backups
./backup.sh my-postgres --list
Troubleshooting
| Issue | Cause | Solution |
|---|---|---|
| "No containers matching" | Container name is wrong | Run docker ps to find correct name |
| "psql not found" | PostgreSQL not installed in container | Verify you're using official postgres image |
| "Permission denied" | Docker socket permissions | Add user to docker group: sudo usermod -aG docker $USER, then log out and back in |
| "Backup file is empty" | Database dump failed silently | Check database user credentials and permissions |
| "No databases found" | Database discovery query failed | Verify database user has query permissions |
| "Disk full" | Backup directory out of space | Check available space: df -h /var/backups/ |
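For the "Disk full" row, a small guard you can call before run_backup saves you from half-written dumps. This is a sketch of my own (check_free_space is not part of the script); it uses df -P for the stable portable output format plus -m for megabyte units, which GNU and BSD df both accept:

```shell
# Refuse to proceed when the filesystem holding the backups has less than
# min_mb megabytes free. With -P, df prints one header line and one data
# line; column 4 of the data line is the available space.
check_free_space() {
local path="${1:-/var/backups}"
local min_mb="${2:-1024}"
local avail_mb
avail_mb=$(df -Pm "$path" | awk 'NR==2 {print $4}')
if (( avail_mb < min_mb )); then
echo "[ERROR] Only ${avail_mb}MB free under $path (need ${min_mb}MB)"
return 1
fi
echo "[OK] ${avail_mb}MB free under $path"
}

# Usage: check_free_space /var/backups/postgresql 1024 || exit 1
```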
Key Takeaways
- Container-agnostic design - Works with any PostgreSQL container name and setup
- Complete working scripts - Copy-paste ready for immediate use in production
- Integrity verification - SHA256 checksums catch silent data corruption
- Metadata tracking - JSON manifests document version and timing
- Automated retention - Old backups cleaned up automatically
- Production-ready - Comprehensive error handling and logging
Next Steps
- Read Part 2: Restoring Docker PostgreSQL Safely
- Read Part 3: Automating Backups with Cron
- Save the backup script: Store backup.sh in your infrastructure repo
- Test the backup: Run ./backup.sh <your-container> --dry-run first
Your PostgreSQL databases are now protected with production-grade backups. In Part 2, we'll cover restoration procedures and emergency recovery strategies.