fastnlp / CPT

CPT: A Pre-Trained Unbalanced Transformer for Both Chinese Language Understanding and Generation

What are the data formats of the `dataset` and `vocab` folders? #81

Open shivanraptor opened 1 month ago

shivanraptor commented 1 month ago

The pre-training README mentions that the `dataset`, `vocab`, and `roberta_zh` folders have to be prepared before training.

Is there any example of the files in the `dataset` and `vocab` folders?

Also, what do you mean by "Place the checkpoint of Chinese RoBERTa"? I would like to train Chinese BART.

Lastly, if I wish to replace the Jieba tokenizer with my custom tokenizer, how can I do so? Thanks.

choosewhatulike commented 3 weeks ago

> Is there any example of the files in the `dataset` and `vocab` folders? Also, what do you mean by "Place the checkpoint of Chinese RoBERTa"? I would like to train Chinese BART.
>
> Lastly, if I wish to replace the Jieba tokenizer with my custom tokenizer, how can I do so? Thanks.

Jieba is used only for constructing the whole-word masking, so it does not affect the model's tokenizer. If you want to replace it, you can either extend Jieba's dictionary by following this link, which helps Jieba recognize new words in your training data, or use another tokenizer entirely by changing the dataloader in the pre-training codebase.
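For the first option, a minimal sketch of extending Jieba's dictionary (the file path and example words below are placeholders for illustration, not part of the CPT codebase):

```python
import jieba

# Load a user dictionary before building the masking data.
# Each line of the file is "word [freq] [pos_tag]", e.g. "全词掩码 3 n".
# "my_userdict.txt" is a placeholder path.
jieba.load_userdict("my_userdict.txt")

# Words can also be registered programmatically at startup.
jieba.add_word("词表")

# Whole-word masking only needs the word boundaries that
# segmentation produces; the model's own tokenizer is untouched.
print(jieba.lcut("全词掩码需要词边界"))
```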

shivanraptor commented 2 weeks ago

After preparing the input data, I decided to pre-train in the BART format. While running `run_pretrain_bart.sh`, it shows:

```
[rank0]: IndexError: Caught IndexError in DataLoader worker process 0.
[rank0]: Original Traceback (most recent call last):
[rank0]:   File "/home/jupyter-raptor/.local/lib/python3.10/site-packages/torch/utils/data/_utils/worker.py", line 308, in _worker_loop
[rank0]:     data = fetcher.fetch(index)  # type: ignore[possibly-undefined]
[rank0]:   File "/home/jupyter-raptor/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in fetch
[rank0]:     data = [self.dataset[idx] for idx in possibly_batched_index]
[rank0]:   File "/home/jupyter-raptor/.local/lib/python3.10/site-packages/torch/utils/data/_utils/fetch.py", line 51, in <listcomp>
[rank0]:     data = [self.dataset[idx] for idx in possibly_batched_index]
[rank0]:   File "/home/jupyter-raptor/pretrain_tokenizer/megatron/data/blendable_dataset.py", line 83, in __getitem__
[rank0]:     return self.datasets[dataset_idx][sample_idx]
[rank0]:   File "/home/jupyter-raptor/pretrain_tokenizer/megatron/data/bart_dataset.py", line 106, in __getitem__
[rank0]:     return self.build_training_sample(sample, self.max_seq_length, np_rng)
[rank0]:   File "/home/jupyter-raptor/pretrain_tokenizer/megatron/data/bart_dataset.py", line 148, in build_training_sample
[rank0]:     source = self.add_whole_word_mask(source, mask_ratio, replace_length)
[rank0]:   File "/home/jupyter-raptor/pretrain_tokenizer/megatron/data/bart_dataset.py", line 360, in add_whole_word_mask
[rank0]:     source[indices[mask_random]] = torch.randint(
[rank0]: IndexError: The shape of the mask [2] at index 0 does not match the shape of the indexed tensor [1] at index 0
```
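The failing line indexes `indices` with the boolean mask `mask_random`, and PyTorch raises this error whenever a boolean mask and the tensor it indexes have different lengths. A minimal reproduction of the same error outside the codebase (the sizes here are made up for illustration):

```python
import torch

indices = torch.tensor([7])                # 1 maskable position
mask_random = torch.tensor([True, False])  # boolean mask of length 2

# Boolean-mask indexing requires the mask's shape to match the
# indexed tensor's shape, so this raises:
# IndexError: The shape of the mask [2] at index 0 does not match
# the shape of the indexed tensor [1] at index 0
indices[mask_random]
```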

I have no clue how to solve it. Can you help?