Open lessw2020 opened 6 months ago
@tianyu-l @lessw2020 FYI, I am using this trick.
import time

hf_ds = HuggingFaceDataset(
    dataset_name, dataset_path, tokenizer, seq_len, world_size, rank, infinite
)
if shuffle:
    # per-rank, time-dependent seed: every rank, every run shuffles differently
    hf_ds._data = hf_ds._data.shuffle(seed=int(rank * 10007 + int(time.time())))
@XinDongol Why would you shuffle the dataset with that seed? Once Stateful DataLoaders merge soon, you won't be able to resume training properly after a crash, because you won't know how the dataset was shuffled.
Random seeds are meant to make results reproducible; in this case they achieve the exact opposite.
@XinDongol For a map-style dataset, this works as expected. For an IterableDataset, however, shuffling is applied within a fixed-size buffer. The issue won't be fixed if the buffer is not, or cannot be, large enough to cover the different amalgamated datasets.
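For reference, the streaming shuffle boils down to the following buffered scheme (a pure-Python sketch of the idea, not the actual `datasets` implementation; the `buffer_size` value is illustrative):

```python
import random

def buffer_shuffle(stream, buffer_size, seed=0):
    """Sketch of a buffered shuffle: keep a fixed-size buffer,
    yield a random element from it, and refill from the stream."""
    rng = random.Random(seed)
    buf = []
    for item in stream:
        if len(buf) < buffer_size:
            buf.append(item)
            continue
        idx = rng.randrange(buffer_size)
        yield buf[idx]
        buf[idx] = item
    rng.shuffle(buf)  # drain the remaining buffer in random order
    yield from buf

# An element can surface at most ~buffer_size positions earlier than its
# source position, so a dataset concatenated far downstream can never
# appear early in the stream -- hence the mixing limit described above.
out = list(buffer_shuffle(range(100), buffer_size=10))
```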
@TJ-Solergibert Checkpointing the random seeds used to shuffle the dataset would solve the problem. FYI, it is on our roadmap.
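A hypothetical sketch of what seed checkpointing could look like (`ShuffleState` and its methods are illustrative names, not torchtitan API): derive the seed once at startup, save it in the checkpoint, and restore it on resume instead of re-deriving it from wall-clock time.

```python
import time

class ShuffleState:
    """Illustrative holder for the dataset shuffle seed, so a resumed run
    can reproduce the exact same data order after a crash."""
    def __init__(self, rank, seed=None):
        # Derive the seed once; never re-derive from time.time() at resume.
        self.seed = seed if seed is not None else rank * 10007 + int(time.time())

    def state_dict(self):
        return {"shuffle_seed": self.seed}

    def load_state_dict(self, state):
        self.seed = state["shuffle_seed"]

# On resume, re-create the dataset with the checkpointed seed, e.g.
#   hf_ds._data = hf_ds._data.shuffle(seed=state.seed)
state = ShuffleState(rank=0, seed=12345)
restored = ShuffleState(rank=1)
restored.load_state_dict(state.state_dict())
```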
Thanks for your answer @tianyu-l, it makes sense 😅
I was wondering, is there any way to avoid using .skip()
when resuming training? In my setup (and Colab), skipping 10,000,000 samples took approximately 90 s:
from datasets import load_dataset

ds = load_dataset("allenai/c4", name="en", split="train", streaming=True)
ds = ds.skip(10_000_000)  # streams through and discards 10M examples
ds = iter(ds)
next(ds)
@TJ-Solergibert

> I was wondering, is there any way to avoid using `.skip()` when resuming training? In my setup (and Colab), skipping 10,000,000 samples took approximately 90 s.

We are aware of the overhead of `.skip()` when resuming training; in fact, it has been put into #279. The `en` section has more than 300M entries, which, according to your example, means over 45 min of skipping if we stop somewhere towards the end of the dataset. Ideally, even for a `streaming=True` IterableDataset, `skip` should be able to seek directly to the file position. As far as we know, this is something HF is working on.
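To illustrate the difference (a self-contained sketch with an in-memory file, not HF internals): iterating skips must decode and discard every example, while a seekable layout with recorded byte offsets can jump straight to example n.

```python
import io

def skip_by_iteration(f, n):
    for _ in range(n):
        f.readline()          # decode-and-discard: O(n) work

def skip_by_seek(f, offsets, n):
    f.seek(offsets[n])        # O(1): jump straight to example n

# Build a toy "dataset" of 1000 one-line examples plus its offset index.
data = "".join(f"example {i}\n" for i in range(1000))
offsets, pos = [], 0
for line in data.splitlines(keepends=True):
    offsets.append(pos)
    pos += len(line)

f1 = io.StringIO(data)
skip_by_iteration(f1, 500)     # walks 500 lines one by one
f2 = io.StringIO(data)
skip_by_seek(f2, offsets, 500) # a single seek
# both now read "example 500\n" next
```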
As part of e2e training, we encountered wild spikes in the loss curve:
After additional hyperparameter tuning and further investigation, the root cause turned out to be that we read the dataset sequentially: the model sees data type A, learns and improves, then hits data type B, is surprised (loss spikes), then learns and improves again, and so on.
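One standard remedy is to randomly interleave the sources rather than reading them back to back (HF's `datasets.interleave_datasets` provides this for real datasets); the core idea reduces to something like the following sketch, with illustrative names:

```python
import random

def interleave(sources, seed=0):
    """Randomly draw the next example from one of several sources, so the
    model never sees a single data type for a long sequential stretch."""
    rng = random.Random(seed)
    iters = [iter(s) for s in sources]
    while iters:
        i = rng.randrange(len(iters))
        try:
            yield next(iters[i])
        except StopIteration:
            iters.pop(i)  # source exhausted; keep drawing from the rest

mixed = list(interleave([["A"] * 5, ["B"] * 5]))
```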
By training on a 'single data source' dataset, in this case openwebtext, we see a very smooth loss curve in e2e training, confirming that the issue is the lack of shuffling: