AnrDaemon opened 5 years ago
You can try playing with the batch-size options, in particular reducing the prefetch rows setting, which defaults to 100,000 rows (see https://pgloader.readthedocs.io/en/latest/pgloader.html#with).
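A minimal load file that lowers this setting might look like the following (a sketch: the connection strings and the value 10000 are placeholders, not recommendations):

```
LOAD DATABASE
     FROM mysql://user:pass@source-host/dbname
     INTO postgresql://user:pass@target-host/dbname
     WITH prefetch rows = 10000;
```

Smaller values mean pgloader holds fewer source rows in memory at once, at the cost of more round trips.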
But what could have changed, if the same database was loaded just fine previously? Any server changes that may affect it?
Also, how to detect optimal setting? What should I look at?
The problem is with the SBCL Garbage Collector, and I don't have proper answers for how to avoid it in general. I guess at some point we should get back to speaking with the SBCL maintainers and see about improving the GC there. It sounds like too much of a task for me to handle, though, unfortunately.
Sad story. It has settled down at 25k records for now, but given your explanation, I'm compelled to make it configurable.
What heap size was SBCL started with? Does it help if you start SBCL with a bigger heap?
How can I know? I'm using the provided Docker container and not setting any extra options. However, the error text indicates a bug in the GC, to me.
I see.
It is not necessarily a GC bug per se - it is possible that pgloader requires a bigger heap to function properly with this database. The provided Docker container needs to be modified to take this into account.
Normally this is achieved through running SBCL with a --dynamic-space-size flag and specifying the new heap size. I do not know how to pass this flag into the container as I'm not a container person.
@dimitri Is there any kind of flag for the container that allows specifying the SBCL heap size? If not, there should be one.
I'm not sure if it's possible to set that to a saved image at run-time, my understanding was that it needs to happen when cooking the image somehow?
From #sbcl on Freenode:
17:00 < phoe> I have a SAVE-LISP-AND-DIE'd SBCL image that executes an application.
Can I pass a --dynamic-size argument to that binary like I would to a
normal SBCL image?
17:03 < phoe> Or is it the case of :SAVE-RUNTIME-OPTIONS argument passed to S-L-A-D?
17:05 < jackdaniel> save-runtime-options makes the executable inherit dynamic size,
if save-runtime-options is nil, then executable will process
--dynamic-space-size argument and set appropriate values
Via the SBCL manual:
:save-runtime-options If true, values of runtime options --dynamic-space-size and --control-stack-size that were used to start sbcl are stored in the standalone executable, and restored when the executable is run. This also inhibits normal runtime option processing, causing all command line arguments to be passed to the toplevel. Meaningless if :executable is nil.
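The two behaviors described there can be sketched like this (a sketch only: `main` is a hypothetical toplevel function, not pgloader's actual build code):

```lisp
;; Variant 1: heap size is frozen into the binary at build time;
;; all command-line arguments are handed straight to MAIN.
(sb-ext:save-lisp-and-die "app"
                          :executable t
                          :toplevel #'main
                          :save-runtime-options t)

;; Variant 2: the binary still parses SBCL runtime options itself, so
;; users can run `./app --dynamic-space-size 8192 ...` at startup.
(sb-ext:save-lisp-and-die "app"
                          :executable t
                          :toplevel #'main
                          :save-runtime-options nil)
```

With variant 2 the heap stays configurable after the fact, but the application must cope with runtime options appearing ahead of its own argv.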
Nice! Do you want to try to hack ./src/save.lisp so that we can pass a dynamic space size either when building the Docker container, or maybe to the resulting pgloader image? I guess that with the current version of things --dynamic-space-size might work already, it's just not documented, right?
No, currently :SAVE-RUNTIME-OPTIONS is set to T:
https://github.com/dimitri/pgloader/blob/954eca02d0d27f6ce71be48d01ae4987029bca11/src/save.lisp#L83
This means that the heap size cannot be modified from outside, if I understand it correctly.
So far, it seems we have a choice of passing either --dynamic-space-size or the normal pgloader argv, but not both at the same time; one option might be starting SBCL with a predetermined heap size in the build script, so the executable is built with a bigger heap by default. Where in the build script is the sbcl binary invoked?
I'm also exploring other options on #sbcl on Freenode.
Have a look at https://github.com/dimitri/pgloader/blob/master/Makefile#L9 for how to define the dynamic space size in the build process for pgloader. Currently, this only works for the legacy build system that relies on buildapp. I intend to switch sometime to the new build system at src/save.lisp, which is now used in both the Travis and Docker builds to make sure it's reliable.
It should be fairly easy to add --dynamic-space-size $(DYNSIZE) to SBCL builds when using the make save target?
Yes, it doesn't seem hard to do that.
Nonetheless, you cannot use save.lisp for setting the heap size. It is a Lisp file, so it is invoked once SBCL is already started, and by then the heap size is already set in stone.
I just ran into this issue this past weekend. I am using one Linux server with a load file that targets two RDS instances:
LOAD DATABASE FROM mysql://confuser:PASSWORD@ac5enz3hx9sixl.cq1bwppz1j9k.us-east-1.rds.amazonaws.com/confluence INTO pgsql://confuser:PASSWORD@ac16f4chba0zdi6.cq1bwppz1j9k.us-east-1.rds.amazonaws.com/confluence
CAST type bigint when (= 20 precision) to bigint drop typemod
ALTER schema 'confluence' rename to 'public';
Here is my version: pgloader version "3.6.1" compiled with SBCL 1.3.1.debian
I was reading that I could possibly increase --dynamic-space-size, but once I am in the sbcl terminal I am unsure of the syntax for increasing it. Does anyone know?
Try the syntax:
$ sbcl --dynamic-space-size 8192
where 8192 is your preferred heap size, in megabytes.
Looks like 8192 is out of range, but I tried 3500 and that brought me to the terminal. Do you know if there is a way to verify this now?
I'll also try the migration to see if it gets me past this error.
I just tried it and it looks like I'm still running into the issue:
```
Heap exhausted during garbage collection: 32768 bytes available, 115984 requested.
 Gen StaPg UbSta LaSta LUbSt Boxed Unboxed LB LUB !move Alloc Waste Trig WP GCs Mem-age
  0:     0      0 0 0    0      0    0     0   0          0         0 85899345    0 0 0.0000
  1:     0      0 0 0    0      0    0     0   0          0         0 85899345    0 0 0.0000
  2: 42723 248946 0 0 1980 133485    0 47815  55 5150103072 855615968 85899345    0 0 1.5745
  3:  5770  27142 0 0 1342  11646 1900     0 154  484057216   3792768  2000000 2474 0 0.4397
  4:     0      0 0 0    0      0    0     0   0          0         0  2000000    0 0 0.0000
  5:     0      0 0 0    0      0    0     0   0          0         0  2000000    0 0 0.0000
  6:     0      0 0 0 3826   1329    0     0   0  168919040         0  2000000 3622 0 0.0000
 Total bytes allocated    = 7421137312
 Dynamic-space-size bytes = 8589934592
GC control variables:
 *GC-INHIBIT* = true
 *GC-PENDING* = false
 *STOP-FOR-GC-PENDING* = false
fatal error encountered in SBCL pid 1850(tid 140736755595008):
Heap exhausted, game over.
```
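The numbers in that dump are worth a sanity check; a quick sketch using the two totals copied from the dump above:

```shell
# Dynamic-space-size from the dump, converted to megabytes:
echo $((8589934592 / 1024 / 1024))        # prints 8192, so the 8 GB heap took effect
# How full the heap was when the GC gave up:
echo $((7421137312 * 100 / 8589934592))   # prints 86 (percent allocated)
```

So this crash was not the heap flag being ignored: the heap really was 8 GB and the load still exhausted it, which is consistent with the report below that bumping DYNSIZE further got past the error.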
I was able to get past this error. Basically, at first I was specifying DYNSIZE=8192, and this got me the error above. As a test I upped it to DYNSIZE=20000 and that got me past my error.
You can also try to limit the amount of memory used by pgloader by tweaking the batch-size parameters, starting with prefetch rows = 10000 (it defaults to 100,000, which is a lot of rows if you have a table with either lots of columns or very large values). See https://pgloader.readthedocs.io/en/latest/pgloader.html#batch-behaviour-options.
Can you please let me know how that can be done from the command line? I am facing the same issue while migrating my Confluence database to PostgreSQL.
Hi @appy2401, if you are building from source, you can run something like
make DYNSIZE=20000 pgloader
which will build a binary that has the increased heap size.
if you want to tweak the prefetch rows parameter, you can build a command file that looks like this:
load database
from <source data>
into <destination postgres>
with prefetch rows = 10000;
Sorry, I don't understand how this is usable with huge database imports when the garbage collector barfs out. Given that I don't want to deal with the esoteric toolchain, what's the best option? Trying the Python version 2?
In addition,
pgloader --with "prefetch rows = 1000000" ./db.sqlite3 postgresql:///foo
barfs out with an unreadable error:
#<CLOSURE (LAMBDA (COMMAND-LINE-ARGUMENTS::VALUE)
:IN
COMMAND-LINE-ARGUMENTS::MAKE-OPTION-ACTION) {10061971BB}> fell through ETYPECASE expression.
Wanted one of (NULL KEYWORD SYMBOL CONS).
The latter is a type error and a bug in the logic, and it warrants a separate ticket. We would need a backtrace to debug it properly.
@dimitri How does one produce a Lisp backtrace with pgloader?
prefetch rows = 10000 worked for me.
Using prefetch rows, I could load more data, but the problem continues.
@dimitri How does one produce a Lisp backtrace with pgloader?
Using --debug triggers condition handling that uses trivial-backtrace and provides some level of information. The code being multi-threaded (using lparallel) sometimes makes those stack traces not as useful as one would like, though I think I got them into good shape after some tweaking.
The problem persists even when using a smaller prefetch rows = 1000.
Hi, I also faced this failure. It occurred when loading with 4 workers. Reducing workers and using only 2 helped.
Cheers, G.
How did you reduce workers?
I have tried:
pgloader --debug --verbose --logfile=./pgloader.log --with "batch concurrency = 1 " mysql://db:db@192.168.255.40/db postgresql://db:db@localhost
Strangely, I think that removing the --debug flag solved this issue in my case... is that possible?! (Sorry, I have nearly lost my mind tweaking my load file and banging my head against this wall, i.e. the Heap exhausted during garbage collection error being printed and dropping me into the backtrace debugger.)
Reducing the default prefetch rows works for me.
> Strangely I think that removing the --debug flag solved this issue in my case...

If it is affecting the total heap usage, then that's possible.
@mecampbellsoup thank you. I removed the --debug flag and the process started without any problems. Thanks for your comment, I was about to give up.
> You can also try to limit the amount of memory that's used by pgloader by tweaking the batch size parameters, starting with prefetch rows = 10000 (it defaults to 100,000, which is a lot of rows if you have a table with either lots of columns or maybe very large values). See https://pgloader.readthedocs.io/en/latest/pgloader.html#batch-behaviour-options.
This one worked for me. Here is the sample command:
pgloader --with "prefetch rows = 10000" mysql://user_name:password@127.0.0.1/db_name postgresql://user_name:password@127.0.0.1/db_name
I still got "Heap exhausted during garbage collection: 65536 bytes available, 75968 requested." when migrating about 1.5 GB of data, even with prefetch rows = 1000 set. I tried building from source with make DYNSIZE=8192 save, but it still didn't work out. I'm wondering where the value 65536 bytes comes from?
"64k available" is how many bytes were left on the heap at the time the call was made. Simply put, the number comes out of the heap manager itself.
Finally, setting prefetch rows = 100 works in my case. For those who also face this problem, try reducing prefetch rows. Btw, I also got the error debugger invoked on a PGLOADER.CONNECTION:DB-CONNECTION-ERROR in thread, and adding the option workers = 8, concurrency = 1 solved that. Thanks @AnrDaemon for opening this issue, and thanks to everyone contributing to the solution!
I'm currently also experiencing the heap exhausted issue.
I tried building pgloader (master branch) using make DYNSIZE=60000 pgloader, which should allocate 60 GB to SBCL when running pgloader. FYI: I'm running this on a host with 128 GB of RAM, so no system issue here.
I'm running pgloader with 10 workers, concurrency of 1, and 10k prefetch rows. The max memory usage on the system is less than 1 GB before heap exhaustion.
Two different errors are reported, with no pattern as to when one or the other occurs:
First error:
Heap exhausted during garbage collection: 80 bytes available, 96 requested.
Gen Boxed Code Raw LgBox LgCode LgRaw Pin Alloc Waste Trig WP GCs Mem-age
fatal error encountered in SBCL pid 14277 tid 14289:
GC invariant lost, file "gencgc.c", line 488
Second error:
Heap exhausted during garbage collection: 49152 bytes available, 63808 requested.
Gen Boxed Code Raw LgBox LgCode LgRaw Pin Alloc Waste Trig WP GCs Mem-age
2 590 0 12842 0 0 3151 26 249156768 22539104 10737418 0 0 2.5581
3 539 0 6726 1335 0 12490 1286 325262496 20276064 2000000 1590 0 0.0000
4 0 0 0 0 0 0 0 0 0 2000000 0 0 0.0000
5 0 0 0 0 0 0 0 0 0 2000000 0 0 0.0000
6 3317 13 2765 604 0 324 0 113057072 2007760 2000000 6836 0 0.0000
7 228 0 7681 0 0 12687 0 314159808 23285056 2000000 0 0 0.0000
Total bytes allocated = 1001636144
Dynamic-space-size bytes = 1073741824
GC control variables:
*GC-INHIBIT* = true
*GC-PENDING* = false
*STOP-FOR-GC-PENDING* = false
fatal error encountered in SBCL pid 14305 tid 14315:
Heap exhausted, game over.
The second error reports a dynamic-space-size of 1073741824 bytes, which equals 1 GB: very different from the value specified in DYNSIZE.
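A quick arithmetic check on the reported value (a sketch; just shell arithmetic on the numbers quoted above):

```shell
# The reported Dynamic-space-size is exactly 1 GiB:
echo $((1073741824 == 1024 * 1024 * 1024))   # prints 1 (true)
# What DYNSIZE=60000 (interpreted as megabytes) should have yielded:
echo $((60000 * 1024 * 1024))                # prints 62914560000
```

The exact 1 GiB figure suggests the DYNSIZE override never reached the saved image, consistent with the :SAVE-RUNTIME-OPTIONS discussion earlier in the thread.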
FYI: I'm trying to switch to a CCL based build, but fail due to #1479
Quick followup: it seems that I can pass the dynamic-space-size option directly to pgloader. Memory usage output seems to confirm that the virtual memory for the pgloader processes is now 60 GB.
@MichaelAnckaert how did you pass that option to pgloader?
@MichaelAnckaert @philsmy It hurts my heart to be at this same point you all were, trying to set the dynamic-space-size to a larger number.
Please come back and tell us how you did this. 🙏🏾
pgloader --dynamic-space-size 262144 -v migrate.load
did actually increase memory usage and not crash. pgloader eventually used around 34 GB to transfer 11.7 GB successfully.
I am unsure if this is a problem in pgloader or I just have to throw more memory at it :smile:
LOAD DATABASE
FROM mysql://migrate:migrate@192.168.0.100/test
INTO postgresql://postgres:123456@192.168.0.100/test
WITH prefetch rows = 100,
max parallel create index = 1;
This worked for me
Running pgloader --dynamic-space-size ... works for me too.
Another variant that works is passing --dynamic-space-size $(DYNSIZE) to all invocations of sbcl:
diff --git a/pgloader/Makefile b/pgloader/Makefile
index 27ed800..cfd31de 100644
--- a/pgloader/Makefile
+++ b/pgloader/Makefile
@@ -47,7 +47,7 @@ BUILDAPP = $(BUILDAPP_SBCL)
BUILDAPP_OPTS = --require sb-posix \
--require sb-bsd-sockets \
--require sb-rotate-byte
-CL_OPTS = --noinform --no-sysinit --no-userinit
+CL_OPTS = --dynamic-space-size $(DYNSIZE) --noinform --no-sysinit --no-userinit
else
BUILDAPP = $(BUILDAPP_CCL)
CL_OPTS = --no-init
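The mechanism in that diff can be mirrored in plain shell to see how the override flows (variable names follow the Makefile; the 4096 default is an assumption here, check the real Makefile for the actual value):

```shell
# DYNSIZE can be overridden from the environment, just like
# `make DYNSIZE=20000 pgloader`; it ends up inside CL_OPTS, which the
# Makefile passes to every sbcl invocation during the build.
DYNSIZE="${DYNSIZE:-4096}"   # assumed default, in megabytes
CL_OPTS="--dynamic-space-size ${DYNSIZE} --noinform --no-sysinit --no-userinit"
echo "$CL_OPTS"
```

Run with DYNSIZE=20000 in the environment, the same line prints --dynamic-space-size 20000 instead, which is exactly how the `make DYNSIZE=...` invocations in this thread change the built binary's heap.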
Just want to add an update on this issue in a Windows environment: I re-encountered it today after it was working fine a month ago, and this thread helped a lot.
It seems Docker Desktop for Windows updated at some point to take resource allocations from WSL2 instead of from the Docker settings themselves.
I tried all the configuration tips in this thread, but what finally worked was turning swap off in the WSL2 config (found in "C:/Users/YourUser/.wslconfig"; create it if it doesn't exist):
[wsl2]
memory=48GB # Limits VM memory in WSL 2
processors=16 # Limits the number of processors
swap=0 # Disables swap
Just an update in case anyone finds this useful in future.
While trying to run a full new import using fresh pgloader container, I stumbled upon
I can assert that nothing has significantly changed on my end; these are the exact same unaltered files I imported last time. The only change was in the loader script, and it was minimal: