There may be multiple versions of these NextRequest sites, and there are definitely syntax differences among them.
Looking at a BART example:
Absolutely need to build in pagination; this thing has more than 1,100 files, vs. a pagination limit of 50.
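A minimal pagination loop might look like the sketch below. The endpoint path, the `page_number` parameter, and the shape of the JSON response are all assumptions based on poking at the BART example; the 50-item page size is the observed limit:

```python
import requests

PAGE_SIZE = 50  # observed pagination limit


def fetch_all_documents(base_url, folder_id):
    """Page through a NextRequest documents endpoint until exhausted.

    The endpoint path and query parameters here are assumptions;
    some sites key on folder_id, others on request_id.
    """
    documents = []
    page = 1
    while True:
        response = requests.get(
            f"{base_url}/client/documents",
            params={"folder_id": folder_id, "page_number": page},
        )
        response.raise_for_status()
        batch = response.json().get("documents", [])
        documents.extend(batch)
        if len(batch) < PAGE_SIZE:  # a short page means we've hit the end
            break
        page += 1
    return documents
```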
One site keys its document listings on folder_id and another on request_id.
Checking whether "http" is in a URL isn't a foolproof method, as this one redirects to a protocol-relative // AWS entry. Perhaps there's a test in urllib.parse that's less hacky?
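One less hacky option is the standard library's `urllib.parse.urlparse`, which exposes the scheme and netloc directly; a protocol-relative URL has no scheme but does have a netloc, so a substring check for "http" misses it:

```python
from urllib.parse import urlparse


def is_absolute_url(url):
    """Return True for absolute URLs, including protocol-relative ones.

    A protocol-relative URL like //s3.amazonaws.com/... has an empty
    scheme but a non-empty netloc, so either one signals "absolute."
    """
    parsed = urlparse(url)
    return bool(parsed.scheme) or bool(parsed.netloc)
```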
Minimum implementation might be something like this:
- Given a link to a NextRequest documents folder, return a proposed filename and the JSON contents.
- Given a base NextRequest URL and a NextRequest JSON filename, return a list of Metadata objects ready for parsing.
- A wrapper to handle both of the above: given a base filepath and a URL, download the file if needed to the proper path and parse the contents to generate the Metadata objects. Should include an option to overwrite files if they already exist.
This should reuse functions in cache.py and, if necessary, utils.py for file handling.
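A rough skeleton of those three pieces, with names, signatures, and the filename scheme invented for illustration. The `Metadata` dataclass below is a stand-in for the project's real class, and the direct download/write should really go through the helpers in cache.py:

```python
import json
from dataclasses import dataclass
from pathlib import Path
from urllib.parse import urlparse

import requests


@dataclass
class Metadata:
    """Placeholder for the project's real Metadata class."""

    base_url: str
    record: dict


def proposed_filename(folder_url):
    """Flatten a folder URL into a filesystem-safe JSON filename (an assumed scheme)."""
    parsed = urlparse(folder_url)
    return f"{parsed.netloc}{parsed.path}".strip("/").replace("/", "_") + ".json"


def fetch_folder_json(folder_url):
    """Given a documents-folder link, return a proposed filename and the JSON contents."""
    response = requests.get(folder_url)
    response.raise_for_status()
    return proposed_filename(folder_url), response.json()


def parse_metadata(base_url, json_path):
    """Turn a cached folder-listing JSON file into a list of Metadata objects."""
    with open(json_path) as infile:
        payload = json.load(infile)
    return [Metadata(base_url, doc) for doc in payload.get("documents", [])]


def scrape(base_filepath, url, overwrite=False):
    """Download the folder JSON to the proper path if needed, then parse it."""
    target = Path(base_filepath) / proposed_filename(url)
    if overwrite or not target.exists():
        _, payload = fetch_folder_json(url)
        target.write_text(json.dumps(payload))
    return parse_metadata(url, target)
```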