GoogleCodeExporter opened this issue 9 years ago
This is something that we realize we need to address, but it is not going to be a
quick fix. If you're dealing with large result sizes, your best option is
generally to run an extract job to CSV.
You might also consider parallelizing the data fetch operations. If you have 2
million rows, you could run 10 parallel threads that each fetch a different
portion of the table (by using row indices). This could cut the time to
read the table by up to an order of magnitude.
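The partitioning scheme above can be sketched as follows. This is a minimal illustration, not BigQuery client code: the `fetch_rows` helper is a hypothetical stand-in for whatever call actually reads a slice of the table by row index (e.g. a `tabledata.list`-style request with a start index and row count).

```python
from concurrent.futures import ThreadPoolExecutor

TOTAL_ROWS = 2_000_000   # total rows in the table
NUM_THREADS = 10         # parallel fetchers

def fetch_rows(start_index, max_results):
    # Hypothetical placeholder: in practice this would wrap the real
    # table-read call, requesting `max_results` rows starting at
    # `start_index`. Here it just returns the row indices it would fetch.
    return list(range(start_index, start_index + max_results))

def parallel_fetch(total_rows, num_threads):
    # Split the row range into evenly sized, non-overlapping chunks.
    chunk = -(-total_rows // num_threads)  # ceiling division
    ranges = [(i * chunk, min(chunk, total_rows - i * chunk))
              for i in range(num_threads) if i * chunk < total_rows]
    # Fetch each chunk on its own thread; map() preserves chunk order.
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        parts = pool.map(lambda r: fetch_rows(*r), ranges)
    rows = []
    for part in parts:
        rows.extend(part)
    return rows

rows = parallel_fetch(TOTAL_ROWS, NUM_THREADS)
```

Because each thread owns a disjoint index range, the results can simply be concatenated in chunk order with no deduplication step.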
Original comment by tig...@google.com on 19 Dec 2014 at 12:22
Original issue reported on code.google.com by nir...@amplitude.com on 16 Dec 2014 at 12:49