Get rid of the need for a separate blueprint for caching. Caching is now built into webcrawler itself: fetching goes through requests, backed by a ZODB-based requests_cache cache (which should probably be released as its own package).
Main reasons for doing this:
- The old caching code was complex and buggy; being able to view the cached HTML files on disk was only partially useful.
- urllib didn't handle cookies; requests is nicer.
- The old cache didn't handle caching headers (should we want to obey them in the future).
Also adds alias URL handling back in.