meta-llama / llama-recipes

Scripts for fine-tuning Meta Llama3 with composable FSDP & PEFT methods to cover single/multi-node GPUs. Supports default & custom datasets for applications such as summarization and Q&A. Supports a number of candidate inference solutions, such as HF TGI and vLLM, for local or cloud deployment. Demo apps showcase Meta Llama3 for WhatsApp & Messenger.

EOS and BOS token settings when continuing pretraining of Llama3.1 #648

Open ShomyLiu opened 2 weeks ago

ShomyLiu commented 2 weeks ago

Hello, Thank you for providing these valuable recipes. I appreciate your work. I'm interested in further pre-training the Llama3.1-8B-base model rather than using the instruct version. To ensure I prepare my data correctly, I'd like some clarification on the tokenization process:

Could you please provide information about how the data should be tokenized? Specifically, I'm wondering whether the tokenized sequences should include the EOS and BOS tokens.

Thank you in advance for your assistance.

init27 commented 2 weeks ago

Hey @ShomyLiu!

Thanks for the question, you can find details on the right tokens here in the model card.

I would recommend upgrading your transformers version to the latest since that will also take care of the right token settings for you.

Please let me know if you run into any issues!
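If it helps, a quick sanity check like the sketch below (assuming you have access to the `meta-llama/Llama-3.1-8B` checkpoint and a recent `transformers` release) will show exactly which special tokens the tokenizer adds by default:

```python
# Quick sanity check of the Llama 3.1 tokenizer defaults.
# Assumes access to the meta-llama/Llama-3.1-8B checkpoint on the HF Hub.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")

print(tok.bos_token, tok.bos_token_id)  # <|begin_of_text|>
print(tok.eos_token, tok.eos_token_id)  # <|end_of_text|>

# On recent versions the tokenizer prepends BOS by default but does not
# append EOS, so printing the tokens shows what ends up in your sequences.
ids = tok("Hello Llama").input_ids
print(tok.convert_ids_to_tokens(ids))   # e.g. ['<|begin_of_text|>', 'Hello', ...]
```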

ShomyLiu commented 2 weeks ago

@init27 Thank you for your response. I've reviewed the information provided about the special tokens:

I understand that the EOS token is used during pretraining of the base model. However, I'm unclear about the BOS token's usage, particularly in the pretraining phase. Since it's defined as "the start of the prompt," I'm wondering whether the BOS token is used during pretraining, or whether it is primarily for fine-tuning and inference.

So which format should I use when preparing my pretraining data?
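To make the question concrete, this is roughly how I'm packing the data at the moment (a sketch only; whether the per-document BOS/EOS placement is correct is exactly what I'm asking about):

```python
# Sketch of my current packing scheme (assumption: wrap each document as
# BOS + text + EOS, concatenate everything, then split into fixed-size blocks).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B")
BLOCK_SIZE = 4096

def pack(docs):
    ids = []
    for text in docs:
        # add_special_tokens=True (the default) prepends <|begin_of_text|>;
        # <|end_of_text|> is appended manually after each document.
        ids += tok(text).input_ids + [tok.eos_token_id]
    # Drop the trailing remainder that doesn't fill a whole block.
    return [ids[i:i + BLOCK_SIZE] for i in range(0, len(ids) - BLOCK_SIZE + 1, BLOCK_SIZE)]

blocks = pack(["first document ...", "second document ..."])
```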

Thank you again for your time.