agrc / palletjack

A library for updating AGOL data from various external sources
MIT License

deps: update pyogrio requirement from <0.8,>=0.6 to >=0.6,<0.9 in the safe-dependencies group #93

Closed · dependabot[bot] closed this PR 3 weeks ago

dependabot[bot] commented 1 month ago

Updates the requirements on pyogrio to permit the latest version, updating pyogrio to 0.8.0.
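
In requirement-specifier terms, the allowed range widens from `>=0.6,<0.8` to `>=0.6,<0.9`. As a minimal sketch, the change might look like the following in a setuptools-style dependency list; the file name, package name, and surrounding entries are assumptions, not palletjack's actual declaration:

```python
# Hypothetical setup.py excerpt; palletjack's real dependency declaration
# may live in a different file and contain other pins.
from setuptools import setup

setup(
    name="palletjack",  # placeholder package name
    install_requires=[
        "pyogrio>=0.6,<0.9",  # previously "pyogrio>=0.6,<0.8"
    ],
)
```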

Release notes

Sourced from pyogrio's releases.

Version v0.8.0 (2024-05-06)

Improvements

  • Support for writing data with Arrow as the transfer mechanism from Python to GDAL (requires GDAL >= 3.8). This is provided through the new pyogrio.raw.write_arrow function, or by using the use_arrow=True option in pyogrio.write_dataframe (#314, #346); a sketch exercising this and several related additions follows this list.
  • Add support for fids filter to read_arrow and open_arrow, and to read_dataframe with use_arrow=True (#304).
  • Add some missing properties to read_info, including layer name, geometry name and FID column name (#365).
  • read_arrow and open_arrow now provide GeoArrow-compliant extension metadata, including the CRS, when using GDAL 3.8 or higher (#366).
  • The open_arrow function can now be used without a pyarrow dependency. By default, it now returns a stream object implementing the Arrow PyCapsule Protocol (i.e. having an __arrow_c_stream__ method). This object can then be consumed by any Arrow implementation that supports the protocol. To keep the previous behaviour of returning a pyarrow.RecordBatchReader, specify use_pyarrow=True (#349).
  • Warn when reading from a multilayer file without specifying a layer (#362).
  • Allow writing to a new in-memory datasource using an io.BytesIO object (#397).
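
A minimal sketch exercising several of the additions above, assuming pyogrio 0.8 with GDAL >= 3.8, geopandas, and a recent pyarrow (>= 14, for `RecordBatchReader.from_stream`); the file names and FID values are placeholders:

```python
# Sketch of the 0.8.0 Arrow-related additions; paths and FIDs are placeholders.
import io

import geopandas
import pyarrow as pa
import pyogrio
from pyogrio.raw import open_arrow

gdf = geopandas.read_file("data.gpkg")

# Write through Arrow as the transfer mechanism (requires GDAL >= 3.8).
pyogrio.write_dataframe(gdf, "out.gpkg", use_arrow=True)

# The fids filter now also works together with use_arrow=True.
subset = pyogrio.read_dataframe("data.gpkg", fids=[1, 2, 3], use_arrow=True)

# Write to a brand-new in-memory datasource; the driver must be named
# explicitly because there is no file extension to infer it from.
buffer = io.BytesIO()
pyogrio.write_dataframe(gdf, buffer, driver="GPKG")

# open_arrow now yields a PyCapsule-protocol stream by default (pyogrio itself
# no longer needs pyarrow); here we happen to consume it with pyarrow.
with open_arrow("data.gpkg") as (meta, stream):
    table = pa.RecordBatchReader.from_stream(stream).read_all()
```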

Bug fixes

  • Fix error in write_dataframe if input has a date column and non-consecutive index values (#325).
  • Fix encoding issues on Windows for some formats (e.g. ".csv") and always write ESRI Shapefiles using UTF-8 by default on all platforms (#361).
  • Raise an exception in read_arrow or read_dataframe(..., use_arrow=True) if a boolean column is detected, due to an error in GDAL reading boolean values for the FlatGeobuf / GPKG drivers (#335, #387); this has been fixed in GDAL >= 3.8.3.
  • Properly ignore fields not listed in the columns parameter when reading from a data source without using the Arrow API (#391).
  • Properly handle decoding of ESRI Shapefiles with a user-provided encoding option for read, read_dataframe, and open_arrow, and correctly encode Shapefile field names and text values to the user-provided encoding for write and write_dataframe (#384); see the sketch after this list.
  • Fixed bug preventing reading from bytes or file-like in read_arrow / open_arrow (#407).
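
A small illustration of the encoding fixes above (#361, #384); the shapefile name and the cp1252 codepage are made-up examples:

```python
# Hedged sketch of user-supplied encoding handling; "legacy.shp" and
# cp1252 are assumptions for illustration.
import pyogrio

# Decode attribute values with an explicit, non-UTF-8 codepage on read.
df = pyogrio.read_dataframe("legacy.shp", encoding="cp1252")

# Field names and text values are encoded back to that codepage on write.
pyogrio.write_dataframe(df, "roundtrip.shp", encoding="cp1252")

# Without an explicit encoding, shapefiles are now written as UTF-8
# on every platform.
pyogrio.write_dataframe(df, "utf8.shp")
```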

Packaging

  • The GDAL library included in the wheels is updated from GDAL 3.7.2 to 3.8.5.

Potentially breaking changes

  • Using a where expression combined with a list of columns that does not include

... (truncated)

Commits
  • 46c35a7 RLS: v0.8.0
  • f80bc8f TST/CLN: Replace all tmpdir / os.path operations in tests with pathlib.Path (...
  • 893f955 ENH: refactor handling of reading from in-memory dataset (#407)
  • 16b62b3 Expose the Arrow read and write function top-level (#409)
  • 456e6ea ENH: allow writing without geometry using Arrow (#408)
  • 6b3d3dc ENH: enable support for writing to memory (#397)
  • 246ca84 TST: fix sdist tests skipping of arrow writing (#404)
  • ed97aaa ENH: allow using Arrow writing in pyogrio.write_dataframe (use_arrow=True opt...
  • ddaccd1 RLS/BLD: Ensure VCPKG brings in iconv library (#399)
  • f5fc7ce Refactor cleanup of GDAL objects / close of dataset on write (#396)
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot show <dependency name> ignore conditions` will show all of the ignore conditions of the specified dependency
  • `@dependabot ignore <dependency name> major version` will close this group update PR and stop Dependabot creating any more for the specific dependency's major version (unless you unignore this specific dependency's major version or upgrade to it yourself)
  • `@dependabot ignore <dependency name> minor version` will close this group update PR and stop Dependabot creating any more for the specific dependency's minor version (unless you unignore this specific dependency's minor version or upgrade to it yourself)
  • `@dependabot ignore <dependency name>` will close this group update PR and stop Dependabot creating any more for the specific dependency (unless you unignore this specific dependency or upgrade to it yourself)
  • `@dependabot unignore <dependency name>` will remove all of the ignore conditions of the specified dependency
  • `@dependabot unignore <dependency name> <ignore condition>` will remove the ignore condition of the specified dependency