Postgres folks:

I have a large (1-2TB) analytics database that gets irregular, but at least daily, major data updates that generate GBs of new data and GBs of WAL, while saturating the CPU of the main server.

We'd like to create a regular copy (at least once a week, if not a replica) of this DB on a test server in another data center. How would you minimize network transport for this? ...

... options considered:

- compressed pg_basebackup
- rsync w/ start/stop backup
- streaming replication with wal_compression
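
(For reference, a rough sketch of what those three options can look like; the hostnames, users, and paths below are placeholders, and exact flags and backup functions vary by Postgres version:)

    # 1. compressed pg_basebackup: tar format, gzip level 9, WAL streamed alongside
    pg_basebackup -h primary.example.com -U replication \
        -D /backups/weekly -Ft -z -Z 9 -X stream -P

    # 2. rsync inside a start/stop backup window: only changed files cross the wire,
    #    compressed in flight (the functions are pg_backup_start/pg_backup_stop on Postgres 15+)
    psql -h primary.example.com -c "SELECT pg_start_backup('weekly', true);"
    rsync -az --delete primary.example.com:/var/lib/postgresql/data/ /var/lib/postgresql/data/
    psql -h primary.example.com -c "SELECT pg_stop_backup();"

    # 3. streaming replication with compressed full-page writes: postgresql.conf on the primary
    wal_compression = on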

@l_avrot @fuzzychef we did exactly this with pgBackRest. Roughly a 20% changeset per day. Used pgBackRest to rebuild the test servers routinely as part of our regular processes. pgBackRest was really the only tool capable. We did use pg_basebackup for real-time replica rebuilds, but that has different trade-offs.
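
(A minimal sketch of that pgBackRest workflow; the stanza name, hosts, and paths are assumptions for illustration, not the actual setup described above:)

    # /etc/pgbackrest.conf on the repository host
    [global]
    repo1-path=/var/lib/pgbackrest
    compress-type=zst

    [analytics]
    pg1-path=/var/lib/postgresql/data
    pg1-host=primary.example.com

    # one-time stanza setup, then a full backup followed by daily differentials
    pgbackrest --stanza=analytics stanza-create
    pgbackrest --stanza=analytics --type=full backup
    pgbackrest --stanza=analytics --type=diff backup

    # rebuilding the test server: --delta checksums the existing data directory and
    # re-copies only files that changed, so transfer stays close to the daily changeset
    pgbackrest --stanza=analytics --delta restore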

@l_avrot @fuzzychef also, strange question: is putting a bigger NIC in an option? Compression is cool, but it consumes a lot of CPU relative to a bigger pipe.

@intrbiz @l_avrot it's not, and regardless the bandwidth between the two data centers is metered ($$$)


@fuzzychef @l_avrot ahh, in which case pgBackRest with compression, or a dark fibre.

