huggingface / nanotron

Minimalistic large language model 3D-parallelism training
Apache License 2.0

Adding Nanoset dataset #155

Closed TJ-Solergibert closed 4 months ago

TJ-Solergibert commented 5 months ago

Hi!

After all your comments in the previous PR, I decided to give one last (big) overhaul to the Nanosets, and I think now we can indeed call them Nanosets. In short, I completely got rid of the NanosetBuilder, NanosetConfig, BlendedNanoset, and MMapIndexedDataset. Also, I got rid of the different Nanosets for each train, valid, and test split (I've seen code snippets where you mention that you intend to support this, so when the time comes, I'll include it). Now there are ONLY the Nanosets, for which we ONLY have to specify the paths to the preprocessed datasets and, if we want, a weight for each one.

Basically, for each dataset_path specified in the configuration, we will read the total number of tokens and divide it by the sequence length to get the number of samples we can build. If we have more than one dataset and do not specify a weight for each of them, we will consume all of them entirely in each epoch. Otherwise, we will build the Nanoset respecting these weights, consuming samples from the datasets again if necessary (e.g., if we need 1000 training samples and the weight of dataset 1 is 0.9, we will need 900 samples from it; if dataset 1 only has 100 samples, we will consume it 9 times per epoch).
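
To make the arithmetic concrete, here is a tiny worked example (the numbers for dataset 1 mirror the paragraph above; dataset 2's length is made up for illustration):

# Worked example of the weighting arithmetic described above.
total_train_samples = 1000
dataset_weights = {"dataset_1": 0.9, "dataset_2": 0.1}
dataset_lengths = {"dataset_1": 100, "dataset_2": 500}  # samples available per dataset
for name, weight in dataset_weights.items():
    needed = int(total_train_samples * weight)    # dataset_1 -> 900 samples
    passes = needed / dataset_lengths[name]       # dataset_1 -> 9 passes per epoch
    print(f"{name}: {needed} samples needed, {passes:.1f} passes over the dataset")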

To determine from which dataset within the Nanoset and which sample to extract, we will build two indices: dataset_index, which selects the dataset from which to extract the sample, and dataset_sample_index, which selects the sample within that dataset. These two indices will be constructed by a helper function (build_nanoset_index_helper) compiled with Numba. This function is similar to the helpers included in Megatron, but those are compiled from C++, and a pure Python version took an enormous amount of time to run for very large datasets; with Numba we solve this problem while keeping the entire project in Python. With these two indices, we will access the specific dataset and extract the sample of sequence length + 1 tokens.
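
Roughly, the index-building logic goes along these lines; this is a simplified sketch of the idea, not the exact build_nanoset_index_helper from the PR (function and variable names here are my own):

import numpy as np
from numba import jit

@jit(nopython=True, cache=True)
def build_index_sketch(n_samples, weights, dataset_lengths):
    # dataset_index[i]        -> which dataset sample i is drawn from
    # dataset_sample_index[i] -> which sample inside that dataset
    dataset_index = np.empty(n_samples, dtype=np.int64)
    dataset_sample_index = np.empty(n_samples, dtype=np.int64)
    consumed = np.zeros(len(weights), dtype=np.int64)
    for i in range(n_samples):
        # Greedily pick the dataset whose consumed share lags its weight the most
        chosen = np.argmax(weights * (i + 1) - consumed)
        dataset_index[i] = chosen
        # Wrap around so small datasets are consumed several times per epoch
        dataset_sample_index[i] = consumed[chosen] % dataset_lengths[chosen]
        consumed[chosen] += 1
    return dataset_index, dataset_sample_index

With the earlier numbers, build_index_sketch(1000, np.array([0.9, 0.1]), np.array([100, 500])) assigns roughly 900 positions to dataset 0, cycling through its 100 samples about 9 times.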

I have updated the documentation with a detailed explanation of how they work, and left many comments in nanoset.py. I have also updated the tests, which check the following:

  1. That we build the same Nanoset in ALL processes.
  2. That dataset_index has been constructed according to the dataset_weights (see the sketch after this list).
  3. That dataset_sample_index does not attempt to extract samples > len(dataset).
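
As a rough sketch (not the exact test code; the function name is mine), check 2 boils down to comparing the empirical share of each dataset in dataset_index with its weight:

import numpy as np

def assert_index_matches_weights(dataset_index, dataset_weights, tol=0.05):
    # Each dataset's share of positions in dataset_index should be close to its weight
    counts = np.bincount(dataset_index, minlength=len(dataset_weights))
    fractions = counts / counts.sum()
    assert np.allclose(fractions, dataset_weights, atol=tol)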

To install the necessary dependencies, I have created a new flavor of Nanotron, so you will need to install it with pip install -e '.[nanosets]'. You can test the Nanosets as follows with the config I have added:

python3 tools/preprocess_data.py \
       --input yelp_review_full \
       --split train \
       --output-prefix datasets/yelp_review_full \
       --tokenizer-name-or-path gpt2 \
       --num-workers 16
python3 tools/preprocess_data.py \
       --input HuggingFaceH4/testing_alpaca_small \
       --split train \
       --column completion \
       --output-prefix datasets/testing_alpaca_small \
       --tokenizer-name-or-path gpt2 \
       --num-workers 16

We launch the job with:

torchrun --nproc-per-node 4 run_train.py --config examples/config_nanoset.yaml

I have tested it on a setup with 4 GPUs.

As always, I look forward to your comments!

Toni.

TJ-Solergibert commented 4 months ago

Refactored the preprocess_data.py script to use torchrun instead of multiprocessing. Now you can run:

torchrun --nproc-per-node 16 tools/preprocess_data.py \
       --input yelp_review_full \
       --split train \
       --output-prefix datasets/yelp_review_full \
       --tokenizer-name-or-path gpt2
torchrun --nproc-per-node 16 tools/preprocess_data.py \
       --input HuggingFaceH4/testing_alpaca_small \
       --split train \
       --column completion \
       --output-prefix datasets/testing_alpaca_small \
       --tokenizer-name-or-path gpt2
xrsrke commented 4 months ago

Hello. Thanks for the PR, looking into it now.