There seems to be a limit on the number of times we can hit a SPARQL endpoint before it starts declining requests: it apparently rejects requests from our IP for a certain period of time. This makes the process of extracting loadable data manual, cumbersome, and time consuming.

Wrap the process in a method that batches studies in groups of 1000, waiting 5 minutes between each batch. Then we can create all of the files to upload to Wikidata with one command.
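A minimal sketch of the batching wrapper described above. The names `export_in_batches` and the `extract` callback are assumptions, not part of the existing codebase; `extract` stands in for whatever function currently queries the SPARQL endpoint and writes one Wikidata upload file per batch.

```python
import time


def export_in_batches(study_ids, extract, batch_size=1000, pause_seconds=300):
    """Run `extract` over `study_ids` in fixed-size batches, pausing
    between batches so the SPARQL endpoint does not rate-limit our IP.

    `extract` is a hypothetical callable taking one list of study IDs
    and producing the loadable file for that batch.
    """
    batches = [study_ids[i:i + batch_size]
               for i in range(0, len(study_ids), batch_size)]
    results = []
    for n, batch in enumerate(batches, start=1):
        results.append(extract(batch))
        if n < len(batches):  # no need to wait after the final batch
            time.sleep(pause_seconds)
    return results
```

With this in place, one command can walk every study: e.g. `export_in_batches(all_study_ids, extract=write_upload_file)` would produce every upload file, with a 5-minute pause between endpoint bursts.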