scrapy-plugins / scrapy-playwright

🎭 Playwright integration for Scrapy
BSD 3-Clause "New" or "Revised" License

awswaf challenge http status 202 #268

Closed: icaca closed this issue 1 month ago

icaca commented 2 months ago

When AWS WAF challenges the browser, it returns the page with HTTP status 202 and replaces the page content with JavaScript. That JavaScript then issues the corresponding challenge requests; if the verification passes, the current page is refreshed and 200 is returned. With scrapy-playwright I receive the page with HTTP status 202, and it does not wait for the JavaScript to run.

There are 4 requests in total, 2 GET and 2 POST:

```
GET  https://936453fdc45b.507bb30a.us-west-2.token.awswaf.com/936453fdc45b/8c8c9a139a90/01574e66d2ee/challenge.js HTTP/2.0
GET  https://936453fdc45b.507bb30a.us-west-2.token.awswaf.com/936453fdc45b/8c8c9a139a90/01574e66d2ee/inputs?client=browser HTTP/2.0
POST https://936453fdc45b.507bb30a.us-west-2.token.awswaf.com/936453fdc45b/8c8c9a139a90/01574e66d2ee/verify HTTP/2.0
POST https://936453fdc45b.507bb30a.us-west-2.token.awswaf.com/936453fdc45b/8c8c9a139a90/01574e66d2ee/telemetry HTTP/2.0
```
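One way to keep the browser on the page until the challenge finishes would be scrapy-playwright's playwright_page_methods meta key. A minimal sketch, assuming the post-challenge page contains an element that the 202 challenge page does not (the "#rankings-table" selector is hypothetical):

```python
import scrapy
from scrapy_playwright.page import PageMethod

class WaitForChallengeSpider(scrapy.Spider):
    name = "wait_for_challenge"

    def start_requests(self):
        yield scrapy.Request(
            "https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020",
            meta={
                "playwright": True,
                "playwright_page_methods": [
                    # block until the challenge JS has replaced the page content;
                    # "#rankings-table" is a placeholder selector for an element
                    # that exists only after the verification succeeds
                    PageMethod("wait_for_selector", "#rankings-table", timeout=30_000),
                ],
            },
        )

    def parse(self, response):
        pass
```

If the challenge never completes, the wait times out and the request fails, which at least distinguishes "never verified" from "verified but not awaited".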

icaca commented 2 months ago

After debugging, I found that Scrapy did not send the Accept header, which caused the body of the 202 response to be empty. After adding it, the JavaScript is returned correctly, but Playwright still does not wait for the page to complete the verification.
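A minimal sketch of sending that header on every request via Scrapy's DEFAULT_REQUEST_HEADERS setting, merged into the spider's existing custom_settings (the value is what Chrome typically sends):

```python
# inside the spider class, alongside the existing PLAYWRIGHT_* settings
custom_settings = {
    "DEFAULT_REQUEST_HEADERS": {
        # without this header the WAF 202 response arrives with an empty body
        "Accept": (
            "text/html,application/xhtml+xml,application/xml;q=0.9,"
            "image/avif,image/webp,image/apng,*/*;q=0.8"
        ),
    },
}
```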

elacuesta commented 2 months ago

This report is not actionable; please include a minimal, reproducible example.

icaca commented 2 months ago

```python
import re
from urllib.parse import urlencode

import scrapy
from scrapy.spiders import Spider

class PlaywrightSpider(Spider):
    name = "test01"
    custom_settings = {
        "PLAYWRIGHT_BROWSER_TYPE": "chromium",
        "PLAYWRIGHT_LAUNCH_OPTIONS": {
            "headless": False,
            "timeout": 20 * 1000,  # 20 seconds
        }
    }
    allowed_domains = [
        "cn.classic.warcraftlogs.com", "classic.warcraftlogs.com"
    ]

    players = [67849152]
    char_url = "https://cn.classic.warcraftlogs.com/character/id/{0}?mode=detailed&zone=1020#metric=dps"
    char_detail_url = "https://cn.classic.warcraftlogs.com/character/rankings-raids/{id}/default/1002/3/5000/5000/Any/rankings/0/0?dpstype=rdps&class=Any&signature={sign}"

    def start_requests(self):
        for player in self.players:
            eas_url = self.char_url.format(player)

            yield scrapy.Request(
                eas_url,
                meta={
                    "playwright": True,
                    "playwright_include_page": False,
                    "playwright_context_kwargs": {
                        "java_script_enabled": True,
                        "ignore_https_errors": True,
                    },
                    'id': player,
                    'referer': eas_url,
                },
                headers={
                    "Referer": eas_url,
                    # header name must be "User-Agent"; the original "agent" key is ignored
                    "User-Agent": (
                        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                        "AppleWebKit/537.36 (KHTML, like Gecko) "
                        "Chrome/96.0.4664.110 Safari/537.36"
                    ),
                    # without Accept, the WAF 202 response body is empty
                    "Accept": (
                        "text/html,application/xhtml+xml,application/xml;q=0.9,"
                        "image/avif,image/webp,image/apng,*/*;q=0.8,"
                        "application/signed-exchange;v=b3;q=0.7"
                    ),
                },
                callback=self.parse_player,
                errback=self.errback_close_page,
            )

    async def parse_player(self, response, **kwargs):
        if response.status == 202:
            print("challenge")
            return

        meta = response.meta

        regex = r"rankings-raids.*signature=' \+ '(.*)'"
        res = re.search(regex, response.text)

        sign = None
        if res:
            sign = res.group(1)
        else:
            self.logger.info("ref=%s res=%s resp:%s", meta["referer"], res,
                             response.text)
            return

        meta["id"] = "67849152"

        self.logger.info(
            "_token=%s",
            response.css("meta[name=csrf-token]::attr(content)").get())

        yield response.follow(
            self.char_detail_url.format(id=meta["id"], sign=sign),
            method="POST",
            body=urlencode({
                "_token": response.css("meta[name=csrf-token]::attr(content)").get(),
            }),
            headers={
                # use the actual page URL; self.char_url is an unformatted template string
                "Referer": response.url,
                "Content-Type": "application/x-www-form-urlencoded; charset=UTF-8",
            },
            meta=response.meta,
            callback=self.parse_detail,
        )

    def parse(self, response, **kwargs):
        # 'response' contains the page as seen by the browser
        return {"url": response.url}

    def parse_detail(self, response):
        # print(response.text)
        pass

    async def errback_close_page(self, failure):
        print(failure)
```
icaca commented 2 months ago

After several days of experiments, my guess is that Playwright cannot pass the human verification. Thank you very much for your help.

icaca commented 2 months ago

I want to inspect the JS requests that happen after the page returns 202, including their headers and bodies, to find out why the verification fails. Since only my initial request reaches parse_player, the intermediate JS requests never enter my code. How can I capture this information? I originally wanted to use mitmproxy to capture the traffic: plain Chrome through the proxy shows the request information, but with scrapy-playwright there are SSL errors and not everything is captured.


```
Client TLS handshake failed. The client does not trust the proxy's certificate for 936453fdc45b.507bb30a.us-west-2.token.awswaf.com (OpenSSL Error([('SSL routines', '', 'ssl/tls alert certificate unknown')]))

[scrapy-playwright] INFO: Browser chromium launched
[scrapy-playwright] DEBUG: Browser context started: 'default' (persistent=False, remote=False)
[scrapy-playwright] DEBUG: [Context=default] New page created, page count is 1 (1 for all contexts)
[scrapy-playwright] DEBUG: [Context=default] Request: <GET https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020> (resource type: document)
[scrapy-playwright] DEBUG: [Context=default] Response: <202 https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020>
[scrapy-playwright] DEBUG: [Context=default] Request: <GET https://936453fdc45b.507bb30a.us-west-2.token.awswaf.com/936453fdc45b/8c8c9a139a90/01574e66d2ee/challenge.js> (resource type: script, referrer: https://cn.classic.warcraftlogs.com/)
[scrapy-playwright] DEBUG: [Context=default] Response: <200 https://936453fdc45b.507bb30a.us-west-2.token.awswaf.com/936453fdc45b/8c8c9a139a90/01574e66d2ee/challenge.js>
[scrapy.core.engine] DEBUG: Crawled (202) <GET https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020#metric=dps> (referer: https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020#metric=dps) ['playwright']
challenge
[scrapy.core.engine] INFO: Closing spider (finished)
```
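The TLS failure above suggests the browser launched by scrapy-playwright does not trust mitmproxy's certificate. A minimal sketch, assuming mitmproxy listens on its default 127.0.0.1:8080, of routing the browser through the proxy; combined with "ignore_https_errors": True in playwright_context_kwargs (already set in the spider above), the proxy's certificate is accepted and every challenge request can be captured:

```python
# inside the spider class, merged with the existing launch options
custom_settings = {
    "PLAYWRIGHT_LAUNCH_OPTIONS": {
        # route all browser traffic through mitmproxy (default port assumed)
        "proxy": {"server": "http://127.0.0.1:8080"},
    },
}
```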
elacuesta commented 2 months ago

If I understand correctly what you're trying to do, you could use playwright_page_event_handlers to handle the Playwright responses with the response event.
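A minimal sketch of that suggestion: attaching a handler for Playwright's "response" event via the playwright_page_event_handlers meta key, so the intermediate challenge requests that never reach a Scrapy callback can be inspected (the handler name and the URL filter are illustrative):

```python
import scrapy

async def log_challenge_response(response) -> None:
    # inspect only the AWS WAF token endpoints
    if "token.awswaf.com" in response.url:
        print(response.status, response.url)
        print(await response.all_headers())

class DebugWafSpider(scrapy.Spider):
    name = "debug_waf"

    def start_requests(self):
        yield scrapy.Request(
            "https://cn.classic.warcraftlogs.com/character/id/67849152?mode=detailed&zone=1020",
            meta={
                "playwright": True,
                "playwright_page_event_handlers": {
                    # called for every response the browser receives,
                    # including the challenge.js / inputs / verify / telemetry requests
                    "response": log_challenge_response,
                },
            },
        )

    def parse(self, response):
        pass
```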