th-karanveer opened 2 months ago
https://github.com/astelmach01/GPT-crawler-backend-python/blob/813d528b7c08b104011030acab392ae0866db4be/app/web/api/core/crawl.py#L90
I agree, but if 200 links are found on a page, it will block for too long. We need something like waitForAll() (to fetch them in parallel) or a MAXIMUM_LINK_COUNT_PER_PAGE setting. The depth setting alone is not enough.
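A minimal sketch of both ideas in Python, assuming the crawl loop is async; the function names, `fetch` stub, and the `MAXIMUM_LINK_COUNT_PER_PAGE` value here are hypothetical, not taken from the repo:

```python
import asyncio

# Hypothetical cap on links taken from a single page; a real crawler
# would make this configurable.
MAXIMUM_LINK_COUNT_PER_PAGE = 50

async def fetch(url: str) -> str:
    # Stand-in for the crawler's actual page fetch (e.g. aiohttp/httpx).
    await asyncio.sleep(0)
    return f"<html>{url}</html>"

async def crawl_links(links: list[str]) -> list[str]:
    # Cap how many links from one page are followed at all...
    capped = links[:MAXIMUM_LINK_COUNT_PER_PAGE]
    # ...and fetch the remainder concurrently rather than one by one.
    # asyncio.gather is the Python analogue of a waitForAll()/Promise.all.
    return await asyncio.gather(*(fetch(url) for url in capped))

pages = asyncio.run(
    crawl_links([f"https://example.com/{i}" for i in range(200)])
)
print(len(pages))  # 50: capped, fetched in parallel
```

With the cap, a 200-link page costs at most 50 fetches, and `gather` makes those fetches overlap instead of blocking sequentially.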