blinjrm opened 6 months ago
Hello and thank you for trying out `AdaptivePlaywrightCrawler`! We'll be happy to hear any further feedback you might have in the future.
Regarding manually specifying the rendering type, we were considering adding it but we felt that it'll be better not to offer too many options at first. Furthermore, the general idea is that the crawler should do this automatically. We are open to discussing this though, we might end up with something cool in the end :slightly_smiling_face:
When considering this initially, our idea of the solution was another callback in the crawler options that would return a rendering type hint for a `Request` object. What do you think about that?
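A rough sketch of what that could look like, just to make the idea concrete - the `renderingTypeHint` option name, its signature, and the `'static'` / `'clientOnly'` values are all hypothetical here, not part of the current API:

```ts
import { AdaptivePlaywrightCrawler } from 'crawlee';

const crawler = new AdaptivePlaywrightCrawler({
    // Hypothetical callback: return a rendering type hint for a given Request,
    // or undefined to leave the decision to the automatic detection.
    renderingTypeHint: ({ request }) => {
        if (request.label === 'PRODUCT') return 'static';     // plain HTTP is enough
        if (request.label === 'CATALOG') return 'clientOnly'; // needs a browser
        return undefined;
    },
    requestHandler: async ({ request, pushData }) => {
        // extraction logic goes here
        await pushData({ url: request.url });
    },
});
```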
> - use HTTP crawling by default, but if the request is blocked (for example, finding the word 'captcha' in the loaded url), switch to JS rendering and try to unblock the page
This is, to an extent, done automatically - if an HTTP crawl throws an exception, it is retried in a browser.
> - navigate catalog pages of an e-commerce website with JS, but then extract data from product pages with HTTP only (product pages represent >90% of pages of such a website, hence the motivation to use HTTP only)
Since the data used for rendering type prediction is strictly categorized by request label, this should also work out of the box if you use different labels for product listings and details (see the sketch below this list). It is true that this should be documented before we consider the feature stable.
> - many more
Please elaborate if you like :slightly_smiling_face:
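To make the label-based point above concrete, here is a minimal sketch of a setup that uses separate labels for catalog and product pages. The selectors, label names, and the `querySelector` helper usage are illustrative assumptions, not a verified recipe:

```ts
import { AdaptivePlaywrightCrawler } from 'crawlee';

const crawler = new AdaptivePlaywrightCrawler({
    requestHandler: async ({ request, enqueueLinks, querySelector, pushData }) => {
        if (request.label === 'DETAIL') {
            // Product detail pages: rendering type prediction is tracked per label,
            // so these should quickly settle on plain HTTP crawling.
            const title = await querySelector('h1.product-title');
            await pushData({ url: request.url, title: title.text() });
            return;
        }

        // Catalog / listing pages (start URLs and the CATALOG label): enqueue
        // product detail links and the next listing page under distinct labels.
        await enqueueLinks({ selector: 'a.product-link', label: 'DETAIL' });
        await enqueueLinks({ selector: 'a.next-page', label: 'CATALOG' });
    },
});

await crawler.run(['https://example.com/catalog']);
```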
Thanks for your detailed answer, I think I understand better now:
→ We can crawl using HTTP by default, and if we try selecting an element that is not rendered or run a JS command (like scrolling), an error will be raised and the crawler will retry with JS rendering next, is that correct? This would need tweaking the `detectionRatio` to always run HTTP by default, right?
In terms of API design I imagine combining a default, crawler-level rendering parameter, and then tying the rendering type with the handler that will handle the request could be an easy solution, something like:
```ts
const crawler = new AdaptivePlaywrightCrawler({
    renderJs: true, // renderingTypeDetectionRatio: 0.1,
    requestHandler: router,
});
```
```ts
await enqueueLinks({
    selector: "<selector>",
    label: "<handler name>",
    renderJs: false,
});
```
> When considering this initially, our idea of the solution was another callback in the crawler options that would return a rendering type hint for a `Request` object. What do you think about that?
I don't really get this rendering type hint, I'd rather be in full control, but maybe my use case is too specific.
> → We can crawl using HTTP by default, and if we try selecting an element that is not rendered or run a JS command (like scrolling), an error will be raised and the crawler will retry with JS rendering next, is that correct? This would need tweaking the `detectionRatio` to always run HTTP by default, right?
Well, almost, just a few clarifications regarding the `detectionRatio`.
> In terms of API design I imagine combining a default, crawler-level rendering parameter, and then tying the rendering type with the handler that will handle the request could be an easy solution, something like:
>
> ```ts
> const crawler = new AdaptivePlaywrightCrawler({
>     renderJs: true, // renderingTypeDetectionRatio: 0.1,
>     requestHandler: router,
> });
>
> await enqueueLinks({
>     selector: "<selector>",
>     label: "<handler name>",
>     renderJs: false,
> });
> ```
Well, specifying a crawler-wide rendering type default doesn't seem useful to me. If your results can be extracted with plain HTTP, the crawler will detect that soon enough, and the few initial browser crawls should not present a problem.
We may consider something like the `renderJs` flag in the future, though; that would make sense.
> I don't really get this rendering type hint, I'd rather be in full control, but maybe my use case is too specific.
Yeah, I believe we mean the same thing - by passing the hint, you'd basically enforce the rendering type for a particular request.
For our own internal spidering project we ended up building this functionality (before the 'adaptive' stuff was in place — it was a terrible kludge, but it still worked).
Our use case was a little different — lots of links that were 'mystery meat' and had PDF files, mixed with dynamically rendered JS pages that needed the full Playwright browser. We implemented a pre-page-load stage, where we made a HEAD request, checked the status code, MIME type, etc. of the response, and used that to decide whether we should save a local binary file, log an error, or render the full page. It's not quite the same use case, but it's an example of how making the per-request decision logic a bit more accessible could be quite helpful.
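For illustration, a much-simplified sketch of that pre-page-load check (placeholder names, plain `fetch`, and coarse content-type rules - not the actual implementation):

```ts
// Decide what to do with a URL before committing to a full page load.
type Decision = 'download-binary' | 'render-page' | 'log-error';

async function classifyUrl(url: string): Promise<Decision> {
    // A HEAD request is usually enough to see the status code and content type.
    // (Some servers handle HEAD poorly, so a real implementation needs a fallback.)
    const res = await fetch(url, { method: 'HEAD' });
    if (!res.ok) return 'log-error';

    const contentType = res.headers.get('content-type') ?? '';
    if (contentType.includes('application/pdf') || contentType.includes('octet-stream')) {
        return 'download-binary'; // save the file locally, skip the browser entirely
    }
    return 'render-page'; // let Playwright render it
}
```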
Which package is the feature request for? If unsure which one to select, leave blank
@crawlee/playwright (PlaywrightCrawler)
Feature
Add the possibility to programmatically decide when to render JS → use HTTP crawling by default, but if some condition is met, switch to JS rendering.
Motivation
Use cases:
- use HTTP crawling by default, but if the request is blocked (for example, finding the word 'captcha' in the loaded url), switch to JS rendering and try to unblock the page
- navigate catalog pages of an e-commerce website with JS, but then extract data from product pages with HTTP only (product pages represent >90% of pages of such a website, hence the motivation to use HTTP only)
- many more
Ideal solution or implementation, and any additional constraints
Maybe adding a parameter in `enqueueLinks` to process the URL with JS rendering.

Alternative solutions or implementations
No response
Other context
No response