Open uofirob opened 5 months ago
Not solved, but maybe helpful. I tried a plain request on the first link, since that is how the `_abstract.py` file pulls the HTML for sites passed to the recipe_scrapers `scrape_me` function. The server understood the request but refused to authorize it (HTTP 403). The second site also returns a 403. I tried different headers, including setting a Referer, and still got 403 responses. For example:
```python
>>> import requests
>>> url = 'https://www.blueapron.com/recipes/romesco-shrimp-couscous-skillet-with-spinach-tomatoes'
>>> r = requests.get(url)
>>> r
<Response [403]>
>>> r.content.decode()
'Please enable JS and disable any ad blocker\n'
```
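For reference, a minimal sketch of the header-spoofing attempt described above. The specific header values here are assumptions for illustration, not the exact set tried; Blue Apron's "enable JS" response suggests a JavaScript/bot check that plain `requests` cannot pass regardless of headers:

```python
import requests

# Browser-like headers (assumed values, mimicking a desktop Chrome request).
BROWSER_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/120.0 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml",
    "Referer": "https://www.blueapron.com/",
}

def fetch_with_browser_headers(url: str) -> requests.Response:
    """Fetch a page while presenting browser-like headers.

    In the tests described above this still returned 403, which points to
    server-side bot detection rather than a simple User-Agent filter.
    """
    return requests.get(url, headers=BROWSER_HEADERS, timeout=10)
```

If the site requires JavaScript execution, no header combination will help; a headless browser (e.g. Playwright or Selenium) would be the usual workaround.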
Pre-filing checks
- The URL of the recipe(s) that are not being scraped correctly
- The results you expect to see: the recipe is successfully scraped
- The results (including any Python error messages) that you are seeing: when Mealie tries to scrape these URLs, the message is "recipe_scrapers was unable to scrape this URL"