TLDR: https://github.com/xarg/pghoard-k8s
This is a small tutorial on how to do incremental backups of your PostgreSQL database using pghoard (I assume you're running everything in Kubernetes). It's intended to help you get started faster and not waste time finding the right dependencies, etc.
pghoard is a PostgreSQL backup daemon that incrementally backs up your database to object storage (S3, Google Cloud Storage, etc.). What we're trying to achieve in this tutorial is to upload our PostgreSQL backups to S3.
First, let's create our Docker image (we're using the alpine:3.4 image because it's small).
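Something along these lines should do it; this is a minimal sketch, not necessarily what the repo uses (the package list and file paths are my assumptions):

```dockerfile
FROM alpine:3.4

# pghoard is a Python daemon that shells out to the PostgreSQL client
# tools (pg_basebackup / pg_receivexlog), so install those plus the
# headers needed to build its python-snappy dependency.
RUN apk add --no-cache bash python3 python3-dev build-base snappy-dev \
        postgresql postgresql-dev \
    && python3 -m ensurepip \
    && pip3 install pghoard

# Default replication credentials for local testing with docker-compose;
# Kubernetes overrides these in production (see below).
ENV REPLICA_USER=replica \
    REPLICA_PASSWORD=replica_password

# Assumed locations for the config and launch script described below.
COPY pghoard.json /etc/pghoard/pghoard.json
COPY launch.sh /launch.sh
RUN chmod +x /launch.sh

CMD ["/launch.sh"]
```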
The REPLICA_USER and REPLICA_PASSWORD env vars will be overridden later in your Kubernetes config with whatever you use in production; I use those default values to test locally with docker-compose.
The config file, pghoard.json, tells pghoard where to get your data from, where to upload it, and how.
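Here is a minimal sketch of such a config, roughly following the options documented in the pghoard README; the host, bucket, and AWS credentials are placeholders, and the REPLICA_USER / REPLICA_PASSWORD strings are substituted by the launch script further down:

```json
{
    "backup_location": "/data/pghoard",
    "backup_sites": {
        "default": {
            "active_backup_mode": "pg_receivexlog",
            "basebackup_count": 2,
            "basebackup_interval_hours": 24,
            "nodes": [
                {
                    "host": "postgres.default.svc.cluster.local",
                    "port": 5432,
                    "user": "REPLICA_USER",
                    "password": "REPLICA_PASSWORD"
                }
            ],
            "object_storage": {
                "storage_type": "s3",
                "region": "us-east-1",
                "bucket_name": "my-postgres-backups",
                "aws_access_key_id": "YOUR_ACCESS_KEY",
                "aws_secret_access_key": "YOUR_SECRET_KEY"
            }
        }
    }
}
```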
Obviously, replace the values above with your own, and read the pghoard docs for a more thorough explanation of the config options.
Note: Make sure you have enough space in your /data directory; use a Google Persistent Volume if your DB is very big.
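For example, a hypothetical claim like this could back /data on GKE (the name and size are placeholders); you'd then mount it in the pod spec shown further down instead of an emptyDir:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pghoard-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # Size this to comfortably hold base backups plus WAL.
      storage: 200Gi
```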
The launch script (sketched below) does two things:
- Replaces the ENV variable placeholders with the right username and password for our replication user (make sure you have enough replication connections available for that user).
- Launches the pghoard daemon.
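A minimal sketch of that script, assuming the config was copied to /etc/pghoard/pghoard.json as in the Dockerfile sketch above:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Substitute the placeholder credentials in the config with the values
# coming from the environment (docker-compose locally, a Kubernetes
# secret in production).
sed -i "s|REPLICA_USER|${REPLICA_USER}|g" /etc/pghoard/pghoard.json
sed -i "s|REPLICA_PASSWORD|${REPLICA_PASSWORD}|g" /etc/pghoard/pghoard.json

# Run pghoard in the foreground so the container stays up.
exec pghoard --config /etc/pghoard/pghoard.json
```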
Once you've built and pushed your image to gcr.io, you'll need a replication controller to start your pghoard daemon pod.
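Here is a minimal sketch of such a replication controller; the image path, secret name, and key names are placeholders:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pghoard
spec:
  replicas: 1
  selector:
    app: pghoard
  template:
    metadata:
      labels:
        app: pghoard
    spec:
      containers:
        - name: pghoard
          image: gcr.io/my-project/pghoard:latest
          env:
            # Production credentials come from a secret instead of the
            # defaults baked into the image.
            - name: REPLICA_USER
              valueFrom:
                secretKeyRef:
                  name: pghoard-secrets
                  key: replica-user
            - name: REPLICA_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: pghoard-secrets
                  key: replica-password
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        # emptyDir is fine for small databases; swap this for the
        # persistent volume claim from the note above if your DB is big.
        - name: data
          emptyDir: {}
```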
The reason I use a replication controller is that I want the pod to restart if it fails; if a bare pod is used, it will just stay dead and you won't have backups.
Future to-dos:
- Monitoring (are your backups actually being made? If not, do you get a notification?)
- Stats collection.
- Encrypting backups locally before they're uploaded to the cloud (this is supported by pghoard; see the sketch below).
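For reference, encryption in pghoard is configured per backup site inside pghoard.json; a rough sketch of what those keys look like (the key id and PEM contents are placeholders, check the pghoard docs for the exact format):

```json
{
    "encryption_key_id": "backup-key",
    "encryption_keys": {
        "backup-key": {
            "public": "-----BEGIN PUBLIC KEY-----\n...\n-----END PUBLIC KEY-----",
            "private": "-----BEGIN PRIVATE KEY-----\n...\n-----END PRIVATE KEY-----"
        }
    }
}
```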
Hope it helps, stay safe and sleep well at night.
Again, repo with the above: https://github.com/xarg/pghoard-k8s