What feature would you like to see?
When the application is hosted, it often crashes when it tries to scrape a site that does not want to be scraped. This is not ideal, as I think it best not to rely on external libraries to monitor and restart the application in these cases. It would be better to have a try/catch-style implementation that prevents the whole application from coming down when cheerio (or the request that feeds it) receives an error response.
Possible implementation
Some adjustment to how the cheerio library is used so that the error can be checked for, or caught, before it brings the whole application down.
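As a minimal sketch of the idea (not the project's actual code): wrap the download-and-parse step in a try/catch and bail out on a bad response instead of letting the error propagate. The `safeScrape` name, the use of Node's built-in `fetch`, and the example URL are all assumptions for illustration.

```javascript
// Sketch only: skip hostile or failing sites instead of crashing the process.
// Assumes Node >= 18 (built-in fetch) and that cheerio is installed.

async function safeScrape(url) {
  try {
    const res = await fetch(url);
    if (!res.ok) {
      // Site refused the scrape (403, 429, etc.) — log and skip, don't throw.
      console.warn(`Skipping ${url}: HTTP ${res.status}`);
      return null;
    }
    const html = await res.text();
    const { load } = await import('cheerio'); // loaded lazily; assumed installed
    return load(html);
  } catch (err) {
    // DNS failures, connection resets, timeouts, etc. land here
    // instead of bringing the whole application down.
    console.warn(`Skipping ${url}: ${err.message}`);
    return null;
  }
}
```

Callers can then treat `null` as "this site could not be scraped" and move on to the next one.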
Additional information
Some investigation will be needed to pinpoint precisely where the failure occurs, along with further experimentation with possible solutions.
There may be some useful information regarding options here: cheerio-crawler-options
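If the project is (or ends up) using Crawlee's CheerioCrawler, its options already provide a hook for exactly this situation: a `failedRequestHandler` that is called after retries are exhausted, so one bad site does not take the process down. The snippet below is a hypothetical configuration fragment, not the project's code; the handler body and retry count are assumptions.

```javascript
// Hypothetical options object for crawlee's CheerioCrawler.
// To use it: const { CheerioCrawler } = require('crawlee');
// new CheerioCrawler(crawlerOptions);

const crawlerOptions = {
  maxRequestRetries: 2, // assumed value; give up after a couple of attempts
  failedRequestHandler({ request }, error) {
    // Called once retries are exhausted — log and carry on instead of crashing.
    console.warn(`Request to ${request.url} failed: ${error.message}`);
  },
};
```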