astrojuanlu opened 6 months ago
Interesting to learn about why Rust chose object store instead of a general filesystem interface. One thing that comes to mind is how to make Kedro Versioning work for this, as we use `fs.glob` almost everywhere. We may need a new mechanism for handling object stores.
To be fair, local + object store is the majority case. We do handle some niche filesystems like HDFS (common 10 years ago, almost non-existent now), SFTP, etc.
The good thing is that, if you drop fsspec and leverage the underlying mechanism of the target library, writing a new custom dataset is trivial:
https://github.com/astrojuanlu/workshop-kedro-huggingface/blob/7666a33/delta_polars_dataset.py
Today I showed this fsspec-free dataset to a user and they were happy to see how easy it is to write:
```python
import typing as t

import polars as pl
from kedro.io import AbstractDataset


class DeltaPolarsDataset(AbstractDataset[pl.DataFrame, pl.DataFrame]):
    """``DeltaPolarsDataset`` loads/saves data from/to a Delta table using an
    underlying filesystem (e.g. local, S3, GCS). It returns a Polars dataframe.
    """

    DEFAULT_LOAD_ARGS: dict[str, t.Any] = {}
    DEFAULT_SAVE_ARGS: dict[str, t.Any] = {}

    def __init__(
        self,
        filepath: str,
        load_args: dict[str, t.Any] | None = None,
        save_args: dict[str, t.Any] | None = None,
        credentials: dict[str, t.Any] | None = None,
        storage_options: dict[str, t.Any] | None = None,
        metadata: dict[str, t.Any] | None = None,
    ):
        self._filepath = filepath
        self._load_args = {**self.DEFAULT_LOAD_ARGS, **(load_args or {})}
        self._save_args = {**self.DEFAULT_SAVE_ARGS, **(save_args or {})}
        self._credentials = credentials or {}
        self._storage_options = storage_options or {}
        self._storage_options.update(self._credentials)
        self._metadata = metadata or {}

    def _load(self) -> pl.DataFrame:
        return pl.read_delta(
            self._filepath, storage_options=self._storage_options, **self._load_args
        )

    def _save(self, data: pl.DataFrame) -> None:
        data.write_delta(
            self._filepath, storage_options=self._storage_options, **self._save_args
        )

    def _describe(self) -> dict[str, t.Any]:
        return dict(
            filepath=self._filepath,
            load_args=self._load_args,
            save_args=self._save_args,
            storage_options=self._storage_options,
            metadata=self._metadata,
        )
```
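For illustration, a hypothetical usage sketch: the bucket, table path, and `storage_options` values below are made up, and the options are passed straight through to Polars/delta-rs.

```python
import polars as pl

# Hypothetical example: bucket name, table path and region are placeholders.
dataset = DeltaPolarsDataset(
    filepath="s3://my-bucket/my-delta-table",
    storage_options={"AWS_REGION": "us-east-1"},
)

dataset.save(pl.DataFrame({"a": [1, 2, 3]}))  # AbstractDataset exposes save()/load()
df = dataset.load()
```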
It was noted today in backlog grooming by @noklam that `fsspec` is linked to our versioning, so ditching it might be more complicated than expected and not worth the effort.
For the particular case of the Delta format, let's go with `DeltaPolarsDataset`, as I wrote in https://github.com/kedro-org/kedro-plugins/issues/625#issuecomment-2042634179.

From backlog grooming:
`object_storage` is a bigger change because we need a new mechanism to figure out what the latest version is.

> Interesting to learn about why Rust chose object store instead of a general filesystem interface.
@noklam: It is explained in the docs.
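To make the versioning concern concrete, here is a minimal sketch, assuming S3, boto3, and Kedro's `<filepath>/<timestamp>/<filename>` versioned layout, of how the latest version could be resolved by listing key prefixes instead of `fs.glob`. This is an illustration of the idea, not Kedro's actual implementation.

```python
from __future__ import annotations

import boto3  # assumption: S3 + boto3; other stores would need their own client


def latest_version(bucket: str, prefix: str) -> str | None:
    """Return the newest version directory under ``prefix``, or None.

    Assumes Kedro's versioned layout ``<prefix>/<timestamp>/<filename>``,
    where ISO-8601 timestamps sort lexicographically.
    """
    s3 = boto3.client("s3")
    paginator = s3.get_paginator("list_objects_v2")
    versions: set[str] = set()
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix + "/", Delimiter="/"):
        for common in page.get("CommonPrefixes", []):
            # CommonPrefixes entries look like "<prefix>/<timestamp>/"
            versions.add(common["Prefix"].rstrip("/").rsplit("/", 1)[-1])
    return max(versions) if versions else None
```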
I did some experiments to see where we are in using plain vanilla `read_*`, `scan_*` and `write_*` operations on object stores (for a local filesystem, they work as expected).

`read_parquet` and `scan_parquet` work as expected. It is even possible to read Parquet files generated by Spark (e.g. a directory containing Parquet files) using a glob pattern. If, however, your dataset has Hive partitioning, e.g. `.../year=2024/month=6/day=26/*`, you still have to use pyarrow. `write_parquet` doesn't work directly unless you set `use_pyarrow=True`; otherwise, you have to use e.g. `s3fs` to write to S3 (basically how we currently have it implemented).

`read_csv` is the only CSV operation that currently supports reading from S3; both `scan_csv` and `write_csv` fail. So to write CSVs, you'll have to fall back to our current implementation using `s3fs`.

So if we make any change, we could split up the logic based on the `file_format`: if Parquet, use native methods (unless you want to read data with Hive partitioning) and use the current implementation otherwise. A sketch of these calls follows below.

Any thoughts?
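For reference, a sketch of the calls described above, assuming a hypothetical bucket and AWS credentials available in the environment. This mirrors the findings above rather than guaranteeing them; behaviour depends on the Polars version.

```python
import polars as pl

# Hypothetical bucket/paths; credentials are assumed to come from the
# environment or be passed explicitly via storage_options.
storage_options = {"aws_region": "us-east-1"}

# Reading Parquet natively works, including glob patterns over
# Spark-style directories of Parquet files:
df = pl.read_parquet("s3://my-bucket/data/*.parquet", storage_options=storage_options)
lf = pl.scan_parquet("s3://my-bucket/data/*.parquet", storage_options=storage_options)

# Writing Parquet only works when delegating to pyarrow:
df.write_parquet("s3://my-bucket/out/data.parquet", use_pyarrow=True)

# CSV: only reading currently works against S3; scan_csv/write_csv fail,
# so writing still needs s3fs (the current implementation).
df2 = pl.read_csv("s3://my-bucket/data/file.csv")
```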
> So if we make any change, we could split up the logic based on the `file_format`; if parquet, use native methods (unless you want to read data with Hive partitioning) and use the current implementation otherwise. Any thoughts?
At this point we should start by documenting what the "social contract" of Kedro datasets is. In other words, how far we go in filling gaps like the ones you describe.
Somewhat related: https://github.com/kedro-org/kedro/issues/1936
Description
Rewrite the polars datasets (https://github.com/kedro-org/kedro-plugins/tree/main/kedro-datasets/kedro_datasets/polars) to not rely on `fsspec`, because they don't need it.

Context

Polars can read from remote systems just fine thanks to https://docs.rs/object_store/latest/object_store/, but the kedro-datasets version asks me for fsspec dependencies anyway:

Related: #590
Your Environment
Include as many relevant details about the environment in which you experienced the bug:
- Kedro version used (`pip show kedro` or `kedro -V`):
- Kedro plugin and kedro plugin version used (`pip show kedro-airflow`):
- Python version used (`python -V`):