
Backup & Restore

PostgreSQL backups use CloudNativePG's native barmanObjectStore integration — continuous WAL archiving + scheduled base backups to S3.

How It Works

PostgreSQL Pod
  → WAL archiving (every 5 min) → S3 bucket
  → Scheduled base backup (cron) → S3 bucket
  • WAL archiving enables point-in-time recovery to any second
  • Base backups provide full snapshots for faster restore
  • Storage: LocalStack (POC) or AWS S3 (production)
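Concretely, this maps to the `backup` section of the CNPG Cluster spec. A minimal sketch, assuming LocalStack as the endpoint and the secret/bucket names used elsewhere in this page (all names illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-db
spec:
  instances: 1
  storage:
    size: 1Gi
  backup:
    retentionPolicy: "30d"              # maps to the `retention` field (days)
    barmanObjectStore:
      destinationPath: s3://postgres-backups/my-db    # base path for WALs and base backups
      endpointURL: http://localstack.localstack:4566  # omit for real AWS S3
      s3Credentials:
        accessKeyId:
          name: backup-s3-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-s3-creds
          key: ACCESS_SECRET_KEY
      wal:
        compression: gzip               # compress archived WAL segments
```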

Enable Backups at Provision Time

Include backup configuration in the provisioning request:

curl -X POST http://localhost:24005/api/provision \
  -H "Content-Type: application/json" \
  -d '{
    "projectName": "my-db",
    "orgId": "myorg",
    "databaseType": "POSTGRESQL",
    "tier": "STANDARD",
    "backup": {
      "enabled": true,
      "schedule": "0 2 * * *",
      "retention": 30
    }
  }'

This creates:

  1. S3 credentials secret in the namespace
  2. CNPG Cluster CRD with barmanObjectStore config
  3. ScheduledBackup CRD with the cron schedule
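The generated ScheduledBackup CRD would look roughly like the sketch below. Note that CNPG's ScheduledBackup uses a six-field cron expression with a leading seconds field, so a five-field schedule like `0 2 * * *` in the API request corresponds to `0 0 2 * * *` in the CRD (names illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: my-db-daily
spec:
  schedule: "0 0 2 * * *"     # 02:00 daily, CNPG six-field cron
  cluster:
    name: my-db
  backupOwnerReference: self  # generated Backup objects are owned by this ScheduledBackup
```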

Manual Backup

curl -X POST http://localhost:24005/api/provision/my-db/backup/trigger

{
  "id": "my-db-backup-20260326-142040",
  "projectId": "my-db",
  "type": "MANUAL",
  "status": "IN_PROGRESS"
}

The service updates the status to COMPLETED or FAILED automatically by polling the Kubernetes Backup CRD's status.
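Under the hood, a manual trigger corresponds to creating a CNPG Backup CRD; its status is filled in by the operator as the backup progresses. A sketch (object name matches the id above):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: my-db-backup-20260326-142040
spec:
  cluster:
    name: my-db
# status is written by the CNPG operator, e.g.:
# status:
#   phase: completed    # or: running, failed
```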

List Backups

curl http://localhost:24005/api/provision/my-db/backup/list

{
  "backups": [
    {
      "id": "my-db-backup-20260326-142040",
      "status": "COMPLETED",
      "type": "MANUAL",
      "timestamp": "2026-03-26T14:20:40Z"
    }
  ],
  "backupEnabled": true,
  "schedule": "0 2 * * *",
  "retentionDays": 30
}

Restore to a New Cluster

Restores always create a new cluster — they never overwrite the source.

curl -X POST http://localhost:24005/api/provision/my-db/backup/restore \
  -H "Content-Type: application/json" \
  -d '{
    "newProjectId": "my-db-restored"
  }'

This creates a new CNPG Cluster with bootstrap.recovery pointing to the source's S3 backup path. WAL replay brings it to the latest available state.
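The restored cluster's manifest follows CNPG's recovery bootstrap pattern, roughly as sketched below: `bootstrap.recovery.source` refers to an `externalClusters` entry that points at the source's object store (all names illustrative):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-db-restored
spec:
  instances: 1
  storage:
    size: 1Gi
  bootstrap:
    recovery:
      source: my-db               # matches the externalClusters entry below
  externalClusters:
    - name: my-db
      barmanObjectStore:
        destinationPath: s3://postgres-backups/my-db
        endpointURL: http://localstack.localstack:4566
        s3Credentials:
          accessKeyId:
            name: backup-s3-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: backup-s3-creds
            key: ACCESS_SECRET_KEY
```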

Point-in-Time Recovery (PITR)

Restore to an exact timestamp:

curl -X POST http://localhost:24005/api/provision/my-db/backup/restore \
  -H "Content-Type: application/json" \
  -d '{
    "newProjectId": "my-db-pitr",
    "targetTime": "2026-03-26T12:00:00Z"
  }'

PITR replays WAL segments and stops at the exact targetTime. The new cluster will contain all data up to that second — no more, no less.
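In CRD terms, PITR is the same recovery bootstrap with a `recoveryTarget` added. A sketch of the relevant fragment of the new cluster's spec:

```yaml
  bootstrap:
    recovery:
      source: my-db                            # externalClusters entry for the source backup
      recoveryTarget:
        targetTime: "2026-03-26T12:00:00Z"     # WAL replay stops at this instant
```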

S3 Storage Setup

LocalStack (local development)

kubectl create namespace localstack
# Deploy LocalStack (see Quick Start)

# Create bucket
kubectl exec -n localstack deployment/localstack -- \
  sh -c "AWS_ACCESS_KEY_ID=test AWS_SECRET_ACCESS_KEY=test \
  aws --endpoint-url=http://localhost:4566 s3 mb s3://postgres-backups"

AWS S3 (production)

Replace the S3 credentials secret with real AWS credentials:

kubectl create secret generic backup-s3-creds \
  -n {namespace} \
  --from-literal=ACCESS_KEY_ID={your-key} \
  --from-literal=ACCESS_SECRET_KEY={your-secret}

Update the endpointURL under barmanObjectStore in the CNPG Cluster CRD to point at your AWS region's S3 endpoint, or remove it entirely to use the AWS default.
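For example, the barmanObjectStore fragment for AWS might look like this (bucket and region are illustrative):

```yaml
  backup:
    barmanObjectStore:
      destinationPath: s3://my-prod-backups/my-db
      # endpointURL omitted: the default AWS S3 endpoint is used.
      # To pin a region explicitly:
      # endpointURL: https://s3.eu-west-1.amazonaws.com
      s3Credentials:
        accessKeyId:
          name: backup-s3-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-s3-creds
          key: ACCESS_SECRET_KEY
```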