Closed serenalotreck closed 2 years ago
It is an SVN repo, and it is set up for anonymous checkout (but not commits), so you can do: svn co http://palea.cgrb.oregonstate.edu/svn/associations
Or it should be possible to do it with wget; see https://unix.stackexchange.com/questions/117988/wget-with-wildcards-in-http-downloads. I've only done limited testing, but this appears to work: wget -r -nH --cut-dirs=2 -l2 -np "http://palea.cgrb.oregonstate.edu/svn/associations" -A "*.assoc"
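For reference, here is a sketch of what each flag in that wget invocation does. The URL and the `*.assoc` pattern come from the command above; building the command as an array (and printing it rather than running it) is just an illustration, so you can inspect it before actually hitting the server.

```shell
#!/usr/bin/env bash
# Sketch: the wget command above, broken out flag by flag.
# Swap the final `echo` for direct execution ("${cmd[@]}") to actually download.
URL="http://palea.cgrb.oregonstate.edu/svn/associations"

cmd=(wget
  -r               # recurse through the HTTP directory listings
  -nH              # no host-prefixed directory (drops palea.cgrb.oregonstate.edu/)
  --cut-dirs=2     # strip the leading svn/associations/ path components
  -l2              # limit recursion depth to two levels
  -np              # never ascend to the parent directory
  -A "*.assoc"     # accept only files matching *.assoc; other fetched files are removed
  "$URL")

echo "${cmd[@]}"
```

Note that `-A` still downloads intermediate HTML index pages to follow links, then deletes anything not matching the accept pattern, so only the `.assoc` files remain on disk.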
The wget command worked, thanks so much!
I'm looking to scrape/download in bulk the most recent versions of all the .assoc files from the data repository. Is there a good way to do this that doesn't involve copy-pasting all the file names into a file for a bash/python script to iteratively download each one? Thanks!