pytorch / torchtitan

A native PyTorch Library for large model training
BSD 3-Clause "New" or "Revised" License

Loss curve spikes on amalgamated datasets - need full scale shuffler in dataloader #128

Open · lessw2020 opened this issue 6 months ago

lessw2020 commented 6 months ago

As part of e2e training, encountered wild loss curve spikes:

[Screenshot, 2024-03-07: loss curve with large spikes]

After additional hyperparameter tuning and further investigation, the root cause turned out to be that we read the dataset sequentially. From the model's perspective, it sees data type A, learns and improves, then hits data type B, is surprised (loss spikes) but then learns and improves, and so on.

By training with a single-source dataset, in this case openwebtext, we see a very smooth loss curve on e2e training, confirming that the issue is the lack of shuffling:

[Screenshot, 2024-03-12: smooth loss curve on openwebtext]
XinDongol commented 4 months ago

@tianyu-l @lessw2020 FYI, I am using this trick.

  import time

  hf_ds = HuggingFaceDataset(
      dataset_name, dataset_path, tokenizer, seq_len, world_size, rank, infinite
  )
  if shuffle:
      # per-rank, time-dependent shuffle seed
      hf_ds._data = hf_ds._data.shuffle(seed=int(rank * 10007 + int(time.time())))
TJ-Solergibert commented 4 months ago

@XinDongol Why would you shuffle the dataset with that seed? Once Stateful DataLoaders merge, which should happen soon, you won't be able to resume training properly after a crash, because you won't know how the dataset was shuffled.

Random seeds are used to ensure that results are reproducible; in this case it's exactly the opposite.
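One reproducible alternative is to derive the seed from a fixed base seed plus the rank, so the same shuffle order can be replayed on resume. A minimal sketch (`BASE_SEED` and `shuffle_seed` are hypothetical names, not torchtitan API):

```python
# Sketch: derive a deterministic per-rank seed instead of mixing in
# time.time(), so the shuffle order can be reconstructed after a crash.
BASE_SEED = 42  # hypothetical config value; checkpoint it with the run


def shuffle_seed(base_seed: int, rank: int) -> int:
    # Pure function of (base_seed, rank): same inputs give the same seed,
    # so the same shuffle order, while ranks still differ from each other.
    return base_seed * 10007 + rank


assert shuffle_seed(BASE_SEED, 0) == shuffle_seed(BASE_SEED, 0)
assert shuffle_seed(BASE_SEED, 0) != shuffle_seed(BASE_SEED, 1)
```

The seed would then be passed to `.shuffle(seed=...)` in place of the time-based value.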

tianyu-l commented 3 months ago
  hf_ds = HuggingFaceDataset(
      dataset_name, dataset_path, tokenizer, seq_len, world_size, rank, infinite
  )
  if shuffle:
      hf_ds._data = hf_ds._data.shuffle(seed=int(rank*10007+int(time.time())))

@XinDongol For a map-style dataset, this works as expected. However, for an IterableDataset a shuffle buffer is used to apply randomness. The issue won't be fixed if the buffer size is not / cannot be made large enough to span the different amalgamated datasets.
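The buffer limitation can be seen with a toy simulation of how a streaming shuffle buffer works (this is an illustration of the general technique, not the actual `datasets` implementation):

```python
import random


def buffer_shuffle(iterable, buffer_size, seed):
    # Toy shuffle buffer: only items currently inside the buffer can be
    # reordered relative to each other; items far apart in the stream
    # (e.g. from different concatenated datasets) never mix.
    rng = random.Random(seed)
    buffer = []
    for item in iterable:
        if len(buffer) < buffer_size:
            buffer.append(item)
        else:
            i = rng.randrange(buffer_size)
            yield buffer[i]
            buffer[i] = item
    rng.shuffle(buffer)  # flush whatever is left
    yield from buffer


# Two concatenated sub-datasets, A then B, shuffled with a tiny buffer.
data = ["A"] * 1000 + ["B"] * 1000
out = list(buffer_shuffle(data, buffer_size=10, seed=0))
print(out[:500].count("B"))  # -> 0: early samples are still all "A"
```

With `buffer_size=10`, no "B" can appear until nearly all of "A" has been yielded, which is exactly the sequential-exposure pattern that caused the loss spikes above.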

@TJ-Solergibert Checkpointing the random seeds used to shuffle the dataset would solve the problem. FYI it is on our roadmap.
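The resume-after-crash story from checkpointed seeds can be sketched as follows: save the shuffle seed plus the number of samples already consumed, then rebuild the identical order on restart (a toy sketch; the checkpoint keys are hypothetical):

```python
import random


def make_epoch_order(num_samples, seed):
    # Rebuild the exact shuffle order for an epoch from a saved seed.
    order = list(range(num_samples))
    random.Random(seed).shuffle(order)
    return order


# Hypothetical checkpoint payload: the seed plus progress within the epoch.
checkpoint = {"shuffle_seed": 1234, "samples_seen": 6}

order_before_crash = make_epoch_order(10, checkpoint["shuffle_seed"])
order_after_resume = make_epoch_order(10, checkpoint["shuffle_seed"])
assert order_before_crash == order_after_resume  # same seed -> same order

# Training resumes at the first sample that was not yet consumed.
remaining = order_after_resume[checkpoint["samples_seen"]:]
assert len(remaining) == 4
```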

TJ-Solergibert commented 3 months ago

Thanks for your answer @tianyu-l, it makes sense 😅

I was wondering: any idea how to avoid using .skip() when resuming training? In my setup (and on Colab), skipping 10,000,000 samples took approximately 90 seconds.

from datasets import load_dataset
ds = load_dataset("allenai/c4", name="en", split="train", streaming=True)
ds = ds.skip(10000000)
ds = iter(ds)
next(ds)
tianyu-l commented 3 months ago

I was wondering: any idea how to avoid using .skip() when resuming training? In my setup (and on Colab), skipping 10,000,000 samples took approximately 90 seconds.

@TJ-Solergibert

  1. We should use .skip() when resuming training. In fact, it has been put into #279.
  2. That doesn't mean it is the ideal solution. E.g., the C4 en split has more than 300M entries, which, extrapolating from your example, means over 45 minutes of skipping if we stopped somewhere toward the end of the dataset. Ideally, even for a streaming=True IterableDataset, skip should be able to seek directly to the file position. As far as we know, this is something HF is working on.
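The difference between sequential skipping and direct seeking can be sketched with a toy length-prefixed record stream (purely illustrative; this is not how C4 shards are actually stored):

```python
import io

# Toy illustration: without an index, .skip() must decode every record it
# passes over (O(n)); with a byte-offset index, resuming is one seek (O(1)).
records = [f"sample-{i}".encode() for i in range(1000)]

# Serialize as length-prefixed records, remembering each record's offset.
buf = io.BytesIO()
offsets = []
for rec in records:
    offsets.append(buf.tell())
    buf.write(len(rec).to_bytes(4, "big"))
    buf.write(rec)


def read_at(stream, offset):
    # With an offset index, jumping to record k is a single seek + read.
    stream.seek(offset)
    n = int.from_bytes(stream.read(4), "big")
    return stream.read(n)


assert read_at(buf, offsets[750]) == b"sample-750"
```

This is presumably the kind of index-assisted seek that would make resuming a streaming dataset cheap regardless of where training stopped.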