jasalt opened 1 month ago
Seeing if I can make some sense of this; I'm running it both locally and online to better determine whether it's just an issue with my setup.
In some cases there is a log error, but the UI shows the scrape completing fine, just with a >100 value:
```
[0] GET /api/keywords?domain=example.com
[0] [ERROR] Scraping Keyword : example com domain keyword . Error: undefined
[0] [ERROR_MESSAGE]: TypeError: Cannot read properties of undefined (reading 'load')
[0]     at extractScrapedResult (/app/.next/server/chunks/941.js:313:63)
[0]     at scrapeKeywordFromGoogle (/app/.next/server/chunks/941.js:278:100)
[0]     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
[0]     at async refreshAndUpdateKeyword (/app/.next/server/chunks/941.js:79:34)
[0]     at async refreshAndUpdateKeywords (/app/.next/server/chunks/941.js:59:37)
[0] [SUCCESS] Updating the Keyword: example com domain keyword
[0] time taken: 18342.214424001053ms
```
I added some debug prints before `const $ = cheerio.load(content);` in `utils/scraper.ts`, and it seems `content` does include HTML. I'm confused about the undefined error, but I'm not too familiar with how the TS works here.
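For reference, the debug prints were roughly along these lines (just a sketch; `content` and `cheerio` are the names used in `utils/scraper.ts`, while the helper name, import style, and log prefixes here are mine):

```ts
// Sketch of the debug checks added before the failing call in utils/scraper.ts.
// `parseWithDebug`, the import style, and the [DEBUG] prefixes are illustrative only.
import * as cheerio from 'cheerio';

const parseWithDebug = (content: unknown) => {
  console.log('[DEBUG] typeof cheerio:', typeof cheerio); // the TypeError reads as if `cheerio` were undefined at call time
  console.log('[DEBUG] typeof content:', typeof content);
  if (typeof content === 'string') {
    console.log('[DEBUG] content length:', content.length);
    console.log('[DEBUG] content snippet:', content.slice(0, 200));
  }
  return cheerio.load(content as string);
};
```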
I can confirm this issue running serpbear in a local Docker environment.
While testing the application, I noticed that some keywords don't return their correct ranking but return >100 instead, even though the right-hand side panel inspector shows the site appearing earlier in the results.
Example screenshot:
This seems to happen quite often, maybe 40% of the time. The domain name characters in the example are a bit garbled because of the ä in the URL, but the problem is the same with all the domains I tested. The result is also the same when using either Scraping Robot or Serply, so I'm thinking there may be a configuration issue or something on my end.
I'm not very familiar with TypeScript, so I'm wondering what the correct way to debug this further would be.
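For example, would something like the following be a reasonable way to capture the raw response for inspection? This is only a rough sketch; the helper name, the file path, and the call site are made up, not anything that exists in the project:

```ts
// Rough idea: dump the raw scrape result to a file right before it gets parsed,
// so the HTML can be inspected whenever a keyword comes back as >100.
// `dumpScrapeResult` and the /tmp path are hypothetical.
import { writeFileSync } from 'fs';

const dumpScrapeResult = (keyword: string, content: unknown): void => {
  const safeName = keyword.replace(/[^a-z0-9]+/gi, '_').toLowerCase();
  const body = typeof content === 'string' ? content : JSON.stringify(content, null, 2);
  writeFileSync(`/tmp/serpbear_scrape_${safeName}.html`, body ?? '');
};

// e.g. dumpScrapeResult('example keyword', content) just before cheerio.load(content)
```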