danielacraciun / scrape-rec

Real estate data gathering and notification

Bump scrapy from 1.6.0 to 2.6.1 #52

Closed dependabot[bot] closed 2 years ago

dependabot[bot] commented 2 years ago

Bumps scrapy from 1.6.0 to 2.6.1.

Release notes

Sourced from scrapy's releases.

2.6.1

Fixes a regression introduced in 2.6.0 that would unset the request method when following redirects.

2.6.0

  • Security fixes for cookie handling (see details below)
  • Python 3.10 support
  • asyncio support is no longer considered experimental, and works out-of-the-box on Windows regardless of your Python version
  • Feed exports now support pathlib.Path output paths and per-feed item filtering and post-processing (see the configuration sketch after this list)
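
To illustrate the pathlib.Path output paths and per-feed options mentioned above, here is a minimal settings sketch. The project and item class names are hypothetical placeholders, and the option names (`item_classes`, `postprocessing`) follow the Scrapy 2.6 feed-export documentation.

```python
# settings.py -- a sketch of the 2.6.0 feed-export features, assuming a
# hypothetical "myproject" package with a HouseItem item class.
from pathlib import Path

FEEDS = {
    # Output paths may now be pathlib.Path objects.
    Path("exports/listings.json"): {
        "format": "json",
        # Per-feed item filtering: export only items of these classes.
        "item_classes": ["myproject.items.HouseItem"],
        # Per-feed post-processing: gzip the finished feed file.
        "postprocessing": ["scrapy.extensions.postprocessing.GzipPlugin"],
    },
}
```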

See the full changelog

Security bug fixes

  • When a Request object with cookies defined gets a redirect response causing a new Request object to be scheduled, the cookies defined in the original Request object are no longer copied into the new Request object.

    If you manually set the Cookie header on a Request object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original Request object, your Cookie header is now dropped from the new Request object.

    The old behavior could be exploited by an attacker to gain access to your cookies. Please, see the cjvr-mfj7-j4j8 security advisory for more information.

    Note: It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. example.com and any subdomain) by defining the shared domain suffix (e.g. example.com) as the cookie domain when defining your cookies (see the sketch after this list). See the documentation of the Request class for more information.

  • When the domain of a cookie, either received in the Set-Cookie header of a response or defined in a Request object, is set to a public suffix (https://publicsuffix.org/), the cookie is now ignored unless the cookie domain is the same as the request domain.

    The old behavior could be exploited by an attacker to inject cookies from a controlled domain into your cookiejar that could be sent to other domains not controlled by the attacker. Please, see the mfjm-vh54-3f96 security advisory for more information.
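
A minimal sketch of the shared-domain-suffix workaround described in the note above; the spider name, URL, and cookie values are hypothetical placeholders.

```python
import scrapy


class ListingsSpider(scrapy.Spider):
    name = "listings"

    def start_requests(self):
        # Setting the shared domain suffix as the cookie domain keeps the
        # cookie valid for www.example.com, api.example.com, etc., even after
        # the 2.6.0 fix stops copying cookies across exact-domain mismatches
        # on redirects.
        yield scrapy.Request(
            "https://www.example.com/listings",
            cookies=[{"name": "session", "value": "abc123", "domain": "example.com"}],
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info("Fetched %s", response.url)
```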

2.5.1

Security bug fix:

If you use HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.

To prevent authentication credentials from being exposed to unintended domains, you must now also set a new spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent.

If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.

If you need to send the same HTTP authentication credentials to multiple domains, you can use w3lib.http.basic_auth_header instead to set the value of the Authorization header of your requests.

If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.
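
For illustration, a minimal sketch combining both approaches described above; the spider name, domains, and credentials are hypothetical placeholders.

```python
import scrapy
from w3lib.http import basic_auth_header


class ListingsSpider(scrapy.Spider):
    name = "listings"

    # HttpAuthMiddleware credentials, now restricted to a single domain.
    http_user = "user"
    http_pass = "secret"
    http_auth_domain = "api.example.com"

    def start_requests(self):
        # The credentials above are only attached to requests for api.example.com.
        yield scrapy.Request("https://api.example.com/listings", callback=self.parse)
        # For a second domain, build the Authorization header explicitly instead.
        yield scrapy.Request(
            "https://other.example.org/listings",
            headers={"Authorization": basic_auth_header("user", "secret")},
            callback=self.parse,
        )

    def parse(self, response):
        self.logger.info("Fetched %s", response.url)
```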

Finally, if you are a user of scrapy-splash, note that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier; you will need to upgrade scrapy-splash to a newer version for it to continue to work.

2.5.0

  • Official Python 3.9 support
  • Experimental HTTP/2 support
  • New get_retry_request() function to retry requests from spider callbacks (see the sketch after this list)
  • New headers_received signal that allows stopping downloads early
  • New Response.protocol attribute
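
A minimal sketch of retrying from a spider callback with get_retry_request(); the spider and the "temporarily unavailable" check are hypothetical placeholders.

```python
import scrapy
from scrapy.downloadermiddlewares.retry import get_retry_request


class ListingsSpider(scrapy.Spider):
    name = "listings"
    start_urls = ["https://www.example.com/listings"]

    def parse(self, response):
        if b"temporarily unavailable" in response.body:
            # Build a retry copy of the request; this respects the RETRY_TIMES
            # budget and returns None once it is exhausted.
            retry = get_retry_request(
                response.request, spider=self, reason="listing not ready"
            )
            if retry:
                yield retry
            return
        yield {"url": response.url}
```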

See the full changelog

... (truncated)

Changelog

Sourced from scrapy's changelog.

Scrapy 2.6.1 (2022-03-01)

Fixes a regression introduced in 2.6.0 that would unset the request method when following redirects.

.. _release-2.6.0:

Scrapy 2.6.0 (2022-03-01)

Highlights:

  • :ref:`Security fixes for cookie handling <2.6-security-fixes>`

  • Python 3.10 support

  • :ref:`asyncio support <using-asyncio>` is no longer considered experimental, and works out-of-the-box on Windows regardless of your Python version

  • Feed exports now support :class:`pathlib.Path` output paths and per-feed :ref:`item filtering <item-filter>` and :ref:`post-processing <post-processing>`

.. _2.6-security-fixes:

Security bug fixes


-   When a :class:`~scrapy.http.Request` object with cookies defined gets a
    redirect response causing a new :class:`~scrapy.http.Request` object to be
    scheduled, the cookies defined in the original
    :class:`~scrapy.http.Request` object are no longer copied into the new
    :class:`~scrapy.http.Request` object.

    If you manually set the ``Cookie`` header on a
    :class:`~scrapy.http.Request` object and the domain name of the redirect
    URL is not an exact match for the domain of the URL of the original
    :class:`~scrapy.http.Request` object, your ``Cookie`` header is now dropped
    from the new :class:`~scrapy.http.Request` object.

    The old behavior could be exploited by an attacker to gain access to your
    cookies. Please, see the `cjvr-mfj7-j4j8 security advisory`_ for more
    information.

    .. _cjvr-mfj7-j4j8 security advisory: https://github.com/scrapy/scrapy/security/advisories/GHSA-cjvr-mfj7-j4j8

    .. note:: It is still possible to enable the sharing of cookies between


... (truncated)

Commits
  • 23537a0 Bump version: 2.6.0 → 2.6.1
  • fab3e90 Cover 2.6.1 in the release notes
  • d60636d Fix redirect handling regression
  • 84853c4 bandit: allow-list B324 for the time being
  • 6b63e7c Bump version: 2.5.0 → 2.6.0
  • e865c44 Merge pull request from GHSA-mfjm-vh54-3f96
  • 8ce01b3 Merge pull request from GHSA-cjvr-mfj7-j4j8
  • aa0306a Cover 2.6.0 in the release notes (#5399)
  • 08557e0 Pin old markupsafe when we pin old mitmproxy (#5427)
  • 3b42ccf Add a link to Discord (#5422)
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
  • `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
  • `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
  • `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/danielacraciun/scrape-rec/network/alerts).

dependabot[bot] commented 2 years ago

Superseded by #53.