Is your feature request related to a problem? Please describe.
Some of the data in EES that people will want to query has millions of rows; we should do some thinking around ways to speed up queries to handle this.
Describe the solution you'd like
Up the default page size to the maximum of 10,000 (feels like an easy win; see the sketch below)
Look at whether we can performance test and optimise the processing needed for each API request when running in a loop
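For illustration, a minimal sketch of what the larger page size plus a basic timing check could look like, assuming query_dataset() accepts a page_size argument (the exact argument name is an assumption here):

```r
library(eesyapi)  # assumed package providing query_dataset()

# Time a full query at the maximum page size of 10,000 rather than the
# default 1,000, cutting the number of API round trips by a factor of ten.
timing <- system.time(
  results <- query_dataset(
    "7c0e9201-c7c0-ff73-bee4-304e731ec0e6",
    indicators = "bqZtT",
    page_size = 10000  # assumed argument name; default assumed to be 1,000
  )
)
print(timing)  # elapsed time gives a baseline for comparing optimisations
```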
Describe alternatives you've considered
There are probably a few options for optimising the code here.
It could also be worth looking at allowing the batch processes to run in parallel, if users have that ability on their end (a rough sketch is below).
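A rough sketch of what parallel page fetching could look like, using base R's parallel package. fetch_page() is a hypothetical per-page helper standing in for whatever the package uses internally to request a single page of results:

```r
library(parallel)

dataset_id <- "7c0e9201-c7c0-ff73-bee4-304e731ec0e6"
page_size <- 10000
n_pages <- 373  # e.g. a 3.73 million row file at 10,000 rows per page

cl <- makeCluster(detectCores() - 1)  # leave one core free for the user
clusterExport(cl, "fetch_page")       # make the hypothetical helper visible to workers

# Fetch each page on a separate worker, then stitch the pages back together.
pages <- parLapply(cl, seq_len(n_pages), function(page, id, size) {
  fetch_page(id, page = page, page_size = size)  # hypothetical helper
}, dataset_id, page_size)
stopCluster(cl)

results <- do.call(rbind, pages)
```

Whether this helps in practice depends on whether the API tolerates concurrent requests and on how many cores users actually have available.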
Additional context
Tried innocently running query_dataset("7c0e9201-c7c0-ff73-bee4-304e731ec0e6", indicators = "bqZtT") and thought it was hanging. After turning on the messages I realised it was actually just taking a very long time to run as it was getting 3,730 pages!
With the default 1,000 results per page it fell over after 48 mins (about halfway through a 3.7 million row file).
Running with 10,000 results per page it ran much faster and completed in 14 mins, though still not particularly speedy.
For comparison, fetching the same file from the CSV endpoint takes 24 seconds.