Qovery / Replibyte

Seed your development database with real data ⚡️
https://www.replibyte.com
GNU General Public License v3.0

Process killed after some time #184

Open naudiyal30 opened 2 years ago

naudiyal30 commented 2 years ago

The process is killed after some time while taking a backup (11 GB in total) into a GCS bucket, but when I try the same thing on a small database (less than 20 MB) for testing, it works like a charm. Can you please look into this?

The command I used to take the backup: `replibyte -c replibyte.yaml dump create`

Also, roughly how long should it take to create a dump of 11 GB of data?
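
For anyone reproducing this, a minimal replibyte.yaml for this kind of setup looks roughly like the following. The values are placeholders, and the datastore block shows the S3-style keys from the Replibyte README; the GCS datastore uses its own keys, so check the docs rather than copying this verbatim.

```yaml
# Sketch of a minimal replibyte.yaml -- placeholder values, not a real config.
source:
  connection_uri: postgres://user:password@host:5432/db   # database to dump
  transformers:                # optional: anonymize columns while dumping
    - database: public
      table: employees
      columns:
        - name: last_name
          transformer_name: random
datastore:
  aws:                         # S3-style keys from the README; GCS has its own block
    bucket: my-replibyte-dumps
    region: us-east-1
    credentials:
      access_key_id: XXX
      secret_access_key: XXX
destination:
  connection_uri: postgres://user:password@dev-host:5432/db  # where dumps are restored
```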

deedubs commented 2 years ago

Replibyte streams the contents of your database to your local machine, runs the defined transforms and then uploads the result to the specified datastore.

Therefore there are many factors that could impact the time it takes to back up your database: how quickly the source can be read, how heavy the transforms are, and how much upload bandwidth you have to the datastore.
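
As a quick, informal way to see which stage dominates, you can time the source read and the upload separately. This assumes pg_dump, pv, and gsutil are installed; DATABASE_URL and BUCKET are placeholders.

```bash
# 1. Raw read speed from the source database (pv prints throughput to stderr).
pg_dump "$DATABASE_URL" | pv > /dev/null

# 2. Raw upload speed to the bucket, using a 1 GiB throwaway file.
dd if=/dev/zero of=sample.bin bs=1M count=1024
gsutil cp sample.bin "gs://$BUCKET/replibyte-bench/sample.bin"
```

Whichever stage is slowest sets a floor on how long the full dump/transform/upload cycle can take.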

evoxmusic commented 2 years ago

Hi @naudiyal30 @deedubs, can you confirm it's all good with the #203 fix?

deedubs commented 2 years ago

My issues were with MongoDB. At a cursory glance, it doesn't look like that PR will have a material effect.

sondrelg commented 2 years ago

I'm trying to create a dump that's larger than 40 GB compressed and am running out of memory. Do I have any options other than streaming the dump over stdin?

EDIT: I'll stream directly from Postgres and subset as part of that process. That seems like it will do it 🤞
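
For anyone else looking at this route, the subsetting option lives in the source config. A rough sketch, with the key names recalled from the docs (so double-check them before use):

```yaml
# Rough sketch only -- verify the exact key names against the Replibyte docs.
source:
  connection_uri: postgres://user:password@host:5432/db
  database_subset:            # Postgres-only subsetting
    database: public
    table: orders             # entry-point table for the subset
    strategy_name: random
    strategy_options:
      percent: 30             # keep roughly 30% of rows while preserving relations
```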

mrmonkington commented 1 year ago

I too have noticed that the dump process is memory-intensive when reading input from stdin.

I have a 10 GB MySQL dump, compressed to ~1 GB.

When transforming the dump with replibyte, the process was OOM-killed after processing ~5 GB (half) of the input, at which point it was consuming roughly 60 GB of memory.

My transform was fairly simple: a few skipped tables and some field transforms (subsetting is not supported for MySQL anyway).

If this is an unavoidable situation, then I think at the very least the documentation should indicate the limitations of using an existing dump file, and that "option 1" is recommended for large DBs.
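
In case it helps anyone reproduce this, running the same stdin pipe under GNU time reports the peak RSS. This assumes GNU time is installed at /usr/bin/time and the dump file name is a placeholder; depending on the replibyte version you may also need an extra flag to declare the dump type, so check `replibyte dump create --help`.

```bash
# Pipe the existing (compressed) MySQL dump through replibyte and record peak memory.
# "Maximum resident set size" in the GNU time report is the number to watch.
zcat dump.sql.gz | /usr/bin/time -v replibyte -c conf.yaml dump create
```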

sondrelg commented 1 year ago

Streaming from Postgres is actually the same, plus a few extra steps. It just starts the process by creating a dump, IIRC.

mrmonkington commented 1 year ago

@sondrelg ah yes, as I have just discovered :-/

lukeasrodgers commented 1 year ago

I am getting this error (the process exits with code 137) using replibyte's built-in dump functionality with Postgres, writing to the local filesystem. The source database I'm testing on is several hundred GB, with ~100 tables and one or two ~100M-row tables. The machine has 32 GB of RAM and an 800 GB disk.

@sondrelg did you get this working? After the above attempt, I tried `cat dumps/backup.sql | replibyte -c conf.yaml dump create` using a dump produced with pg_dump, and it ran for about an hour before again crashing with exit code 137. It wrote 43 .dump files, totaling about 291 MB, before exiting.

dmesg confirms that in both cases it was the OOM killer at work:

```
[Wed Jan 25 17:52:15 2023] Out of memory: Killed process 7217 (replibyte) total-vm:40640984kB, anon-rss:32486260kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:63808kB oom_score_adj:0
[Wed Jan 25 21:58:06 2023] Out of memory: Killed process 4873 (replibyte) total-vm:40418304kB, anon-rss:32355224kB, file-rss:0kB, shmem-rss:0kB, UID:1000 pgtables:63548kB oom_score_adj:0
```
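
For what it's worth, one way to test this without the system OOM killer taking the whole box down is to run replibyte under an explicit memory cap so it fails fast. This assumes systemd with cgroup v2; the 8G limit and file names are placeholders.

```bash
# Cap replibyte at 8 GiB; the pipeline is killed as soon as it exceeds the limit
# instead of pushing the machine into swap and the system-wide OOM killer.
systemd-run --user --scope -p MemoryMax=8G \
  sh -c 'cat dumps/backup.sql | replibyte -c conf.yaml dump create'
```
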
sondrelg commented 1 year ago

No, I never got it working, unfortunately 🙂

evoxmusic commented 1 year ago

Hi 👋🏽, FYI I am working on a couple of improvements to reduce memory usage and improve overall performance. I'll keep you posted. The PR in progress is #257.