When the scraper encounters any error (e.g. a blank HTTP response), the whole script hangs. Add exception handling so the script can skip that one page and continue.
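One minimal sketch of what this could look like, assuming the scraper fetches pages with `requests` (the timeout and `parse_page` are hypothetical stand-ins for whatever the scraper actually uses):

```python
import logging

import requests

logger = logging.getLogger(__name__)

def scrape_pages(urls):
    """Scrape each URL, skipping pages that fail instead of hanging or crashing."""
    results = []
    for url in urls:
        try:
            # A timeout keeps a stalled connection from hanging the whole script.
            response = requests.get(url, timeout=30)
            response.raise_for_status()
            if not response.text:
                raise ValueError("blank HTTP response")
            # parse_page is a hypothetical placeholder for the scraper's parsing step.
            results.append(parse_page(response.text))
        except (requests.RequestException, ValueError) as exc:
            # Log and move on to the next page rather than aborting the run.
            logger.warning("Skipping %s: %s", url, exc)
            continue
    return results
```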
Also, from #223:

> Because the different parts of the scraper mostly run asynchronously, errors are not propagated from the thread where they actually occur. This makes crashes hard to debug: the apparent cause will be a downstream symptom, e.g. one thread failing to return a students list, when the real root cause is something more specific that happened in that thread several thousand log lines earlier.
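One possible way to surface the real exception in the main thread, sketched under the assumption that the workers can be driven through `concurrent.futures` (this is not the scraper's actual structure, and `scrape_page` is a hypothetical worker function). `future.result()` re-raises whatever exception was raised inside the worker, so the original traceback gets logged at the point of failure instead of a downstream "missing students list" symptom:

```python
import logging
from concurrent.futures import ThreadPoolExecutor, as_completed

logger = logging.getLogger(__name__)

def scrape_all(urls):
    """Run scraper workers in threads, surfacing each worker's real exception."""
    results = []
    with ThreadPoolExecutor(max_workers=8) as pool:
        # scrape_page is a hypothetical per-page worker.
        futures = {pool.submit(scrape_page, url): url for url in urls}
        for future in as_completed(futures):
            url = futures[future]
            try:
                # result() re-raises the exception from the worker thread here,
                # preserving its original traceback.
                results.append(future.result())
            except Exception:
                logger.exception("Worker for %s failed", url)
    return results
```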