Switch from goroutines that store whole result sets in memory to a channel with a buffer of 1 that holds single rows, so we can dump tables that far exceed system memory. A goroutine now produces rows and blocks on the channel while the consumer drains them into the dump.
The buffer of 1 is what preserves the speed we had before the memory improvements: we can read from MySQL and write to the file in parallel without blocking either side, making this a typical producer-consumer problem. Table rows are dynamically allocated, so memory usage is usually fairly small.
NOTE: MySQL rows have a maximum size of 1 GB, so we can theoretically use up to 3 GB of heap: one row sitting in the channel buffer, one row waiting to be put into the buffer, and one previous row that the io.Writer is still writing out of its memory buffer.
I have gotten nowhere near this limit; I was only able to put rows of about 128 MB into MySQL before I got strange "server unavailable" errors and sometimes entirely corrupted MySQL databases.