huggingface / datasets

πŸ€— The largest hub of ready-to-use datasets for ML models with fast, easy-to-use and efficient data manipulation tools
https://huggingface.co/docs/datasets
Apache License 2.0

Align the Dataset and IterableDataset processing API #3444

Open lhoestq opened 2 years ago

lhoestq commented 2 years ago

Intro

Items marked DONE βœ… are done already :)

Currently the two classes have two distinct APIs for processing:

- The .map() method — both have these parameters in common: function, batched, batch_size
- The .shuffle() method
- The .with_format() method
- Other methods

Questions

I think it would be nice to be able to switch between streaming and regular dataset easily, without changing the processing code significantly.
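For instance, the goal is that the same processing code works in both modes, with only the loading call changing. A minimal sketch (the dataset name and the added column are purely illustrative):

```python
from datasets import load_dataset

# the same processing function, written once
def add_n_chars(example):
    return {"n_chars": len(example["text"])}

# regular (eager) mode
ds = load_dataset("imdb", split="train").map(add_n_chars)

# streaming (lazy) mode: only the load_dataset call changes
ids = load_dataset("imdb", split="train", streaming=True).map(add_n_chars)
```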

1. What should be aligned and what shouldn't between those two APIs?

IMO the minimum is to align the main processing methods.

It would mean breaking the current IterableDataset.map to have the same behavior as Dataset.map (add columns with dict.update), and adding multiprocessing as well as the missing parameters. DONE βœ…

It would also mean implementing the missing methods: cast, cast_column, filter, rename_column, rename_columns, class_encode_column, flatten, prepare_for_task, train_test_split, shard. WIP 🟠

2. What are the breaking changes for IterableDataset?

The main breaking change would be the change of behavior of IterableDataset.map, because currently it discards all the previous columns instead of keeping them. DONE βœ…
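A minimal sketch of the aligned behavior (the dataset and column names are only illustrative):

```python
from datasets import load_dataset

ids = load_dataset("rotten_tomatoes", split="train", streaming=True)

# with the aligned behavior, the returned dict is merged into the example
# (dict.update semantics) instead of replacing it entirely
ids = ids.map(lambda x: {"text_length": len(x["text"])})

# the original 'text' and 'label' columns are still there, plus 'text_length'
print(next(iter(ids)).keys())
```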

3. Shall we also do some changes for regular datasets?

I agree the simplest would be to have the exact same methods for both Dataset and IterableDataset. However, this is probably not a good idea because it would prevent users from getting the full benefits of each. That's why we can keep some aspects of regular datasets as they are:

We could have a completely aligned map method if both methods were lazy by default, but this is a very big breaking change so I'm not sure we can consider doing that.

For information, TFDS does lazy map by default, and has an additional .cache() method.
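For comparison, a rough sketch of that TFDS/tf.data pattern (mnist and the normalization step are only used as an example here):

```python
import tensorflow as tf
import tensorflow_datasets as tfds

ds = tfds.load("mnist", split="train")

# map is lazy: it only registers the transformation in the tf.data pipeline
ds = ds.map(lambda ex: {**ex, "image": tf.cast(ex["image"], tf.float32) / 255.0})

# cache materializes the mapped examples the first time the dataset is iterated
ds = ds.cache()
```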

Opinions?

I'd love to gather some opinions about this here. If the two APIs are more aligned it would be awesome for the examples in transformers, and it would create a satisfactory experience for users that want to switch from one mode to the other.

cc @mariosasko @albertvillanova @thomwolf @patrickvonplaten @sgugger

thomwolf commented 2 years ago

Yes I agree, these should be as aligned as possible. Maybe we can also check the feedback in the survey at http://hf.co/oss-survey and see if people mentioned related things on the API (in particular if we go the breaking change way, it would be good to be sure we are taking the right direction for the community).

mariosasko commented 2 years ago

I like this proposal.

> There is also an important difference in terms of behavior: Dataset.map adds new columns (with dict.update) BUT IterableDataset discards previous columns (it overwrites the dict). IMO the two methods should have the same behavior. This would be an important breaking change though.

> The main breaking change would be the change of behavior of IterableDataset.map, because currently it discards all the previous columns instead of keeping them.

Yes, this behavior of IterableDataset.map was surprising to me the first time I used it because I was expecting the same behavior as Dataset.map, so I'm OK with the breaking change here.

> IterableDataset only supports "torch" (it misses tf, jax, pandas, arrow) and is missing the parameters: columns, output_all_columns and format_kwargs

+ it's also missing the actual formatting code (we return unformatted tensors)
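For reference, here is roughly what those parameters do on a regular Dataset today (the column names are just an example):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# format only the 'label' column as torch tensors, while output_all_columns
# keeps returning the other columns as plain Python objects
ds = ds.with_format("torch", columns=["label"], output_all_columns=True)

ds[0]  # 'label' comes back as a torch tensor, 'text' stays a Python string
```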

> We could have a completely aligned map method if both methods were lazy by default, but this is a very big breaking change so I'm not sure we can consider doing that.
>
> For information, TFDS does lazy map by default, and has an additional .cache() method.

If I understand this part correctly, the idea would be for Dataset.map to behave similarly to Dataset.with_transform (lazy processing) and to have an option to cache processed data (with .cache()). This idea is really nice because it can also be applied to IterableDataset to fix https://github.com/huggingface/datasets/issues/3142 (again we get the aligned APIs). However, this change would break a lot of things, so I'm still not sure if this is a step in the right direction (maybe it's OK for Datasets 2.0?)
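To make the comparison concrete, a small sketch of the existing lazy option on regular datasets, with_transform (again with illustrative column names):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

def add_n_chars_batched(batch):
    # applied on-the-fly, on batches, each time rows are accessed;
    # nothing is written to the cache
    batch["n_chars"] = [len(t) for t in batch["text"]]
    return batch

lazy_ds = ds.with_transform(add_n_chars_batched)
lazy_ds[:2]  # the transform only runs here, on the two requested rows
```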

> If the two APIs are more aligned it would be awesome for the examples in transformers, and it would create a satisfactory experience for users that want to switch from one mode to the other.

Yes, it would be amazing to have an option to easily switch between these two modes.

I agree with the rest.

lhoestq commented 2 years ago

> If I understand this part correctly, the idea would be for Dataset.map to behave similarly to Dataset.with_transform (lazy processing) and to have an option to cache processed data (with .cache()). This idea is really nice because it can also be applied to IterableDataset to fix #3142 (again we get the aligned APIs). However, this change would break a lot of things, so I'm still not sure if this is a step in the right direction (maybe it's OK for Datasets 2.0?)

Yea this is too big of a change in my opinion. Anyway it's fine as it is right now with streaming=lazy and regular=eager.

tshu-w commented 2 years ago

Hi, IterableDataset is also missing set_format.

lhoestq commented 2 years ago

Yes indeed, thanks. I added it to the list of methods to align in the first post.

gcaillaut commented 2 years ago

I just encountered the problem of the missing fn_kwargs parameter in the map method. I am commenting to give a workaround in case someone has the same problem and does not find a solution. You can wrap your function call inside a class that contains the other parameters needed by the function called by map, like this:

```python
def my_func(x, y, z):
    # do things with the example x and the extra arguments y and z,
    # then return the (possibly updated) example
    return x

class MyFuncWrapper:
    def __init__(self, y, z):
        self.y = y
        self.z = z

    def __call__(self, x):
        return my_func(x, self.y, self.z)
```

Then, pass an instance of MyFuncWrapper to the map function.
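For example, assuming y and z are the extra arguments my_func needs (the dataset and values are purely illustrative):

```python
from datasets import load_dataset

ds = load_dataset("imdb", split="train")

# the wrapper instance carries the extra arguments that fn_kwargs would normally pass
ds = ds.map(MyFuncWrapper(y=10, z="some value"))
```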

npuichigo commented 1 year ago

Any update on this? It's almost 2024πŸ˜‚ @lhoestq

lhoestq commented 1 year ago

The main differences have been addressed (map, formatting) but there are still a few things to implement like Dataset.take, Dataset.skip, IterableDataset.set_format, IterableDataset.formatted_as, IterableDataset.reset_format.

The rest cannot be implemented for the general case. E.g. train_test_split and select can only work on an iterable dataset if the underlying dataset format allows it (we need to know the number of rows and have some sort of random access).
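For what it's worth, a rough sketch of what is already possible on the iterable side with take and skip (the dataset name and split size are purely illustrative):

```python
from datasets import load_dataset

ids = load_dataset("imdb", split="train", streaming=True)

# IterableDataset already has take/skip, so a fixed-size held-out set can be
# carved out without knowing the total number of rows or having random access
val_ids = ids.take(1000)
train_ids = ids.skip(1000)
```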