apify / crawlee

Crawlee—A web scraping and browser automation library for Node.js to build reliable crawlers. In JavaScript and TypeScript. Extract data for AI, LLMs, RAG, or GPTs. Download HTML, PDF, JPG, PNG, and other files from websites. Works with Puppeteer, Playwright, Cheerio, JSDOM, and raw HTTP. Both headful and headless mode. With proxy rotation.
https://crawlee.dev
Apache License 2.0

Bug? PlaywrightCrawler enqueueLinks fails after WWW redirect. #2513

Open obsidience opened 5 months ago

obsidience commented 5 months ago

Which package is this bug report for? If unsure which one to select, leave blank

None

Issue description

Hi all,

Is the following a bug? I'm noticing that context.enqueueLinks seems to fail if the URL being browsed has a WWW redirect. When this occurs, the selector succeeds in extracting URLs, but the "createFilteredRequests" call within enqueue_links.js uses an enqueueStrategyPattern of "{glob: 'http{s,}://domain.com/**'}", and because that glob doesn't have a www prefix, every extracted link fails the filter.

It looks like this may be caused by resolveBaseUrlForEnqueueLinksFiltering() in enqueue_links.js assuming that "same origin" is the sanest default, but wouldn't "same domain" be more sensible for a typical crawler, given how common http->https and www redirects are?
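To illustrate the mismatch, here's a minimal sketch (assuming the derived glob is matched minimatch-style; the URLs and variable names here are made up for illustration, not taken from the crawlee internals):

import { minimatch } from 'minimatch';

// The strategy glob is derived from the URL that was *requested* (no www),
// while the links extracted from the rendered page carry the post-redirect
// hostname (with www), so none of them match.
const strategyGlob = 'http{s,}://reddit.com/**';

console.log(minimatch('https://www.reddit.com/r/legal/comments/abc123', strategyGlob)); // false -> dropped
console.log(minimatch('https://reddit.com/r/legal/comments/abc123', strategyGlob));     // true  -> kept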

Thanks for your help!

Code sample

import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler(context) {
        await context.enqueueLinks({
            selector: 'a[slot="full-post-link"]', // fails: every extracted link is filtered out
            //globs: ['**/comments/**'], // succeeds: an explicit glob bypasses the derived same-origin pattern
        });
    },
    headless: false,
    launchContext: {
        launchOptions: {
            slowMo: 500,
        },
    },
});

await crawler.run(['https://reddit.com/r/legal']); // note: missing "www."; the site redirects to www.reddit.com
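As the commented-out globs line above suggests, supplying explicit globs makes the call succeed. Here's a sketch of a glob that covers both hostnames (untested; '{www.,}' relies on brace expansion):

// Inside the requestHandler above, replacing the enqueueLinks call:
await context.enqueueLinks({
    selector: 'a[slot="full-post-link"]',
    // A user-supplied glob replaces the derived 'http{s,}://reddit.com/**'
    // pattern, so links on both reddit.com and www.reddit.com pass the filter.
    globs: ['http{s,}://{www.,}reddit.com/**'],
});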

Package version

3.10.2

Node.js version

20.13.1

Operating system

Win11

Apify platform

I have tested this on the next release

No response

Other context

No response

toanphan19 commented 5 months ago

Hi, we are having the same issue. According to the docs, the default configuration should not filter out links pointing to the same hostname with a different subdomain (such as www). Hope this gets fixed soon.
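In the meantime, a possible workaround (a sketch based on the documented strategy option, not verified against 3.10.2) is to opt into the broader same-domain strategy so www and non-www links both pass the filter:

import { PlaywrightCrawler } from 'crawlee';

const crawler = new PlaywrightCrawler({
    async requestHandler(context) {
        await context.enqueueLinks({
            selector: 'a[slot="full-post-link"]',
            // 'same-domain' matches any subdomain of the start URL's domain,
            // so the www redirect no longer filters every link out.
            strategy: 'same-domain',
        });
    },
});

await crawler.run(['https://reddit.com/r/legal']);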