Since this is basically a scraper project, our fate lies entirely in the hands of the sites we scrape. Things can, and will, break at any time. (batoto recently changed its domain from batoto.net to bato.to - ouch)
What we need right now:
At least full coverage of the scraping logic (everything inside the sites package). Cover the edge cases properly. Don't be lazy!
Two separate suites of tests: "quick" and "slow". Since sending direct requests to the sites is slow, we'll probably save their HTML as-is in our git repo.
The "quick" suite runs its tests against that saved HTML and will be run by developers as they, well, develop.
The "slow" suite will request the actual manga sites to get the latest HTML and run its tests against that. Since it's slow, we'll probably want to run this suite on travis-ci.
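To make the quick/slow split concrete, here is a minimal sketch of the "quick" idea: the scraping logic runs against an HTML snapshot checked into the repo instead of a live request. Everything here is hypothetical - `parse_chapter_titles`, the fixture markup, and the `class="chapter"` selector are illustration only, not the actual sites package.

```python
# Hypothetical sketch: parsing logic exercised against a saved HTML fixture
# (no network), which is exactly what the "quick" suite would do.
from html.parser import HTMLParser

# Stand-in for a page we would save verbatim into the git repo.
SAVED_FIXTURE = """
<html><body>
  <a class="chapter" href="/ch/1">Ch.1: Beginnings</a>
  <a class="chapter" href="/ch/2">Ch.2: The Domain Change</a>
</body></html>
"""


class ChapterTitleParser(HTMLParser):
    """Collects the text of every <a class="chapter"> element."""

    def __init__(self):
        super().__init__()
        self.titles = []
        self._in_chapter = False

    def handle_starttag(self, tag, attrs):
        if tag == "a" and dict(attrs).get("class") == "chapter":
            self._in_chapter = True

    def handle_data(self, data):
        if self._in_chapter:
            self.titles.append(data.strip())

    def handle_endtag(self, tag):
        if tag == "a":
            self._in_chapter = False


def parse_chapter_titles(html: str) -> list:
    """The scraping logic under test; hypothetical name."""
    parser = ChapterTitleParser()
    parser.feed(html)
    return parser.titles


# "Quick" test: assert against the snapshot, no network involved.
titles = parse_chapter_titles(SAVED_FIXTURE)
print(titles)  # -> ['Ch.1: Beginnings', 'Ch.2: The Domain Change']
```

The "slow" suite would call the same `parse_chapter_titles` on HTML fetched live from the site, so a breaking markup change fails on CI while developer runs stay fast.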