timothyrenner / nuforc_sightings_data

Data collection and processing for the National UFO Reporting Center (NUFORC) database.
MIT License

Bump scrapy from 1.4.0 to 1.8.1 #13

Closed · dependabot[bot] closed this 2 years ago

dependabot[bot] commented 2 years ago

Bumps scrapy from 1.4.0 to 1.8.1.

Release notes

Sourced from scrapy's releases.

1.8.1

Security bug fix:

If you use HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.

To prevent accidental exposure of authentication credentials to unintended domains, you must now also set a new spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent.

If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.

If you need to send the same HTTP authentication credentials to multiple domains, you can use w3lib.http.basic_auth_header instead to set the value of the Authorization header of your requests.

If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.
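
For illustration, a minimal spider sketch of the patched behavior; the spider name, credentials, and domains below are hypothetical. The first request relies on the new http_auth_domain attribute, while the second attaches its own Authorization header via basic_auth_header for a different domain:

```python
import scrapy
from w3lib.http import basic_auth_header


class ExampleSpider(scrapy.Spider):
    # Hypothetical spider: name, credentials, and domains are made up.
    name = "example"

    # Consumed by HttpAuthMiddleware.
    http_user = "user"
    http_pass = "secret"
    # New in 1.8.1: only requests to this domain carry the credentials.
    # Set it to None explicitly to restore the old send-everywhere behavior.
    http_auth_domain = "api.example.com"

    def start_requests(self):
        # Credentials attached automatically (domain matches http_auth_domain).
        yield scrapy.Request("https://api.example.com/data")
        # A second domain gets its own Authorization header instead.
        yield scrapy.Request(
            "https://files.example.org/data",
            headers={"Authorization": basic_auth_header("user", "secret")},
        )
```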

Finally, if you are a user of scrapy-splash, note that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a later version for it to continue to work.

1.7.4

Revert the fix for #3804 (#3819), which has a few undesired side effects (#3897, #3976).

1.7.3

Enforce lxml 4.3.5 or lower for Python 3.4 (#3912, #3918)

1.7.2

Fix Python 2 support (#3889, #3893, #3896)

1.7.0

Highlights:

  • Improvements for crawls targeting multiple domains
  • A cleaner way to pass arguments to callbacks
  • A new class for JSON requests (both sketched below)
  • Improvements for rule-based spiders
  • New features for feed exports

See the full change log
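
As a rough illustration of those two highlights (the URLs and field names are hypothetical), cb_kwargs delivers keyword arguments straight to a callback, and JsonRequest posts a JSON body:

```python
import json

import scrapy
from scrapy.http import JsonRequest


class Example17Spider(scrapy.Spider):
    # Hypothetical spider and endpoints, for illustration only.
    name = "example17"

    def start_requests(self):
        # cb_kwargs: arguments arrive as plain keyword parameters of the
        # callback, instead of being tunneled through request.meta.
        yield scrapy.Request(
            "https://example.com/items/1",
            callback=self.parse_item,
            cb_kwargs={"item_id": 1},
        )
        # JsonRequest serializes `data` to JSON, sets the Content-Type
        # header, and defaults to POST.
        yield JsonRequest(
            "https://example.com/api/search",
            data={"query": "ufo"},
            callback=self.parse_search,
        )

    def parse_item(self, response, item_id):
        yield {"id": item_id, "title": response.css("title::text").get()}

    def parse_search(self, response):
        yield from json.loads(response.text)["results"]
```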

1.6.0

Highlights:

  • Better Windows support
  • Python 3.7 compatibility
  • Big documentation improvements, including a switch from the .extract_first() + .extract() API to the .get() + .getall() API (example below)
  • Feed exports, FilePipeline and MediaPipeline improvements
  • Better extensibility: item_error and request_reached_downloader signals; from_crawler support for feed exporters, feed storages and dupefilters.
  • scrapy.contracts fixes and new features
  • Telnet console security improvements, first released as a backport in Scrapy 1.5.2 (2019-01-22)
  • Clean-up of the deprecated code
  • Various bug fixes, small new features and usability improvements across the codebase.

Full changelog is in the docs.
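
A quick before-and-after of that selector API rename (the HTML snippet is made up; the old names still work as aliases):

```python
from scrapy.selector import Selector

sel = Selector(text="<ul><li>first</li><li>second</li></ul>")

# Old API:
sel.css("li::text").extract_first()  # 'first'
sel.css("li::text").extract()        # ['first', 'second']

# New API, equivalent results:
sel.css("li::text").get()            # 'first'
sel.css("li::text").getall()         # ['first', 'second']
```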

... (truncated)

Changelog

Sourced from scrapy's changelog.

Scrapy 1.8.1 (2021-10-05)

  • Security bug fix:

    If you use :class:~scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.

    To prevent accidental exposure of authentication credentials to unintended domains, you must now also set a new spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent.

    If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.

    If you need to send the same HTTP authentication credentials to multiple domains, you can use :func:w3lib.http.basic_auth_header instead to set the value of the Authorization header of your requests.

    If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.

    Finally, if you are a user of scrapy-splash_, note that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a later version for it to continue to work.

.. _scrapy-splash: https://github.com/scrapy-plugins/scrapy-splash

.. _release-1.8.0:

Scrapy 1.8.0 (2019-10-28)

Highlights:

  • Dropped Python 3.4 support and updated minimum requirements; made Python 3.8 support official
  • New :meth:Request.from_curl <scrapy.http.Request.from_curl> class method (sketched below)
  • New :setting:ROBOTSTXT_PARSER and :setting:ROBOTSTXT_USER_AGENT settings
  • New :setting:DOWNLOADER_CLIENT_TLS_CIPHERS and :setting:DOWNLOADER_CLIENT_TLS_VERBOSE_LOGGING settings
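
A short sketch of the from_curl highlight (the curl command here is hypothetical); Scrapy parses the URL, method, and headers out of the command string:

```python
from scrapy import Request

# Build a Request from a copied-from-devtools curl command (made up here).
req = Request.from_curl(
    "curl 'https://example.com/api' -H 'Accept: application/json'"
)
print(req.url, req.method, req.headers.get("Accept"))
```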

... (truncated)

Commits
  • 283e90e Bump version: 1.8.0 → 1.8.1
  • 99ac4db Cover 1.8.1 in the release notes
  • 1635134 Small documentation fixes.
  • b01d69a Add http_auth_domain to HttpAuthMiddleware.
  • 4183925 Travis CI → GitHub Actions
  • be2e910 Bump version: 1.7.0 → 1.8.0
  • 94f060f Cover Scrapy 1.8.0 in the release notes (#3952)
  • 18b808b Merge pull request #4092 from further-reading/master
  • 93e3dc1 [test_downloadermiddleware_httpcache.py] Cleaning text
  • b73d217 [test_downloadermiddleware_httpcache.py] Fixing pytest mark behaviour
  • Additional commits viewable in compare view


Dependabot compatibility score

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
  • `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
  • `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
  • `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/timothyrenner/nuforc_sightings_data/network/alerts).