everypolitician/scraped_page_archive
Create an archive of HTML pages scraped by a Ruby scraper
MIT License
1 star, 0 forks
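Based on the one-line description above, and on the issue titles below that mention open-uri hooks, VCR and git support, here is a minimal, hypothetical usage sketch. The require path and the archiving behaviour are assumptions drawn from those titles, not something this page confirms.

```ruby
# Hypothetical sketch, not the confirmed API: assumes the gem ships an
# open-uri hook at 'scraped_page_archive/open-uri' (suggested by the
# issues about open-uri hooks) and that requiring it causes each page
# fetched through open-uri to be recorded into the archive.
require 'open-uri'
require 'scraped_page_archive/open-uri'

# Fetch a page as usual; the hook is assumed to capture the response
# transparently as a side effect.
html = URI.open('https://example.com/members').read
puts html[0, 200]
```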
Issues
#15 Configure git user.name and user.email (chrismytton, closed 8 years ago, 0 comments)
#14 Only output the no-git-repo warning once (chrismytton, opened 8 years ago, 0 comments)
#13 Allow HTTP connections when no VCR cassette in use (chrismytton, closed 8 years ago, 0 comments)
#12 Dont assume we're operating in a git repo (chrismytton, closed 8 years ago, 0 comments)
#11 Allow http requests when no VCR cassette is in use (chrismytton, closed 8 years ago, 0 comments)
#10 Don't assume we're in a git repo (chrismytton, closed 8 years ago, 0 comments)
#9 Add git support (chrismytton, closed 8 years ago, 0 comments)
#8 Switch from open-uri-cached to VCR (chrismytton, closed 8 years ago, 1 comment)
#7 Make open-uri hooks optional (chrismytton, closed 8 years ago, 0 comments)
#6 Don't always override open-uri (chrismytton, closed 8 years ago, 0 comments)
#5 Configure Travis CI (chrismytton, closed 8 years ago, 0 comments)
#4 What is the best way to test this gem? (chrismytton, opened 8 years ago, 0 comments)
#3 WIP Optionally refresh the cache each time a page is scraped (chrismytton, closed 8 years ago, 1 comment)
#2 Add any existing files open-uri-cached knows about (chrismytton, opened 8 years ago, 1 comment)
#1 Always try and update the archived page (chrismytton, closed 8 years ago, 1 comment)