The code for the scraper is fairly repetitive, but since I've been told the API isn't very stable, it seemed best to isolate the individual pieces so there weren't too many dependencies between them. The scrape method dumps the relevant JSON to a file, and the process method then pulls it back in, similar to the doilist plugin. Otherwise the overall flow is largely adapted from the Pubmed scraper.
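The scrape/process split might look roughly like the sketch below. All names here (`Scraper`, `cache_path`, the `items`/`title` JSON shape) are hypothetical stand-ins, since the actual classes and API schema aren't shown:

```python
import json
from pathlib import Path
from urllib.request import urlopen


class Scraper:
    """Sketch of the two-phase design: scrape dumps raw JSON to disk,
    process reads it back, so the two steps have no shared dependency
    beyond the file format."""

    def __init__(self, cache_path="records.json"):
        self.cache_path = Path(cache_path)

    def scrape(self, url):
        # Fetch the raw JSON from the (unstable) API and write it to
        # disk untouched; processing never needs a live connection.
        with urlopen(url) as resp:
            data = json.load(resp)
        self.cache_path.write_text(json.dumps(data))

    def process(self):
        # Read the dumped JSON back in and transform it, entirely
        # independently of the API call above.
        data = json.loads(self.cache_path.read_text())
        return [record.get("title") for record in data.get("items", [])]
```

If the API changes, only scrape (and its fixture) needs updating; process keeps working against the dumped file.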
There is some repetition in the tests too, for similar reasons: I wanted to isolate the individual API calls from the processing step. Changes in the API should then produce failures that are relatively easy to pin down.
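One way to test the processing step in isolation, as described above, is to feed it a canned JSON dump instead of a live API response. This is only an illustrative sketch; the `process` function and the fixture shape are hypothetical, not taken from the actual test suite:

```python
import json
import unittest
from pathlib import Path
from tempfile import TemporaryDirectory


def process(path):
    # Stand-in for the real processing step: reads a previously
    # dumped JSON file rather than calling the API directly.
    data = json.loads(Path(path).read_text())
    return [item["title"] for item in data["items"]]


class ProcessTests(unittest.TestCase):
    def test_process_from_canned_dump(self):
        # A canned fixture substitutes for the live API, so this test
        # keeps passing even when the API itself changes; only the
        # scrape-side tests would break in that case.
        with TemporaryDirectory() as tmp:
            fixture = Path(tmp) / "records.json"
            fixture.write_text(json.dumps({"items": [{"title": "A"}]}))
            self.assertEqual(process(fixture), ["A"])
```

With the API calls and the processing tested separately, an upstream schema change fails loudly in exactly one place.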