jzhang38 / TinyLlama

The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens.
Apache License 2.0

For pretraining, does it include Block Causal Attention and a Block Diagonal Mask? #192

Open Leo-T-Zang opened 4 days ago

Leo-T-Zang commented 4 days ago

Hi,

Thanks for this amazing codebase!

I wonder whether, during pretraining, this codebase supports block causal attention with a block-diagonal mask, so that attention does not cross the boundaries between packed samples, as LLaMA 3 does. If so, could you kindly point me to where it is implemented? To make sure I am describing the same thing, I've included a rough sketch of the masking I mean below.
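For reference, here is a minimal sketch (not from the TinyLlama codebase; the function and variable names are just illustrative) of how such a mask can be built from per-token document IDs, combining the usual causal mask with a "same document" constraint:

```python
import torch

def block_causal_mask(doc_ids: torch.Tensor) -> torch.Tensor:
    """Build a boolean attention mask that is both causal and block-diagonal.

    doc_ids: (seq_len,) integer tensor assigning each packed token to its
    source document, e.g. tensor([0, 0, 0, 1, 1, 2, 2]).
    Returns a (seq_len, seq_len) bool mask where True means "may attend".
    """
    seq_len = doc_ids.shape[0]
    # Standard causal (lower-triangular) mask.
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    # Only allow attention between tokens from the same packed document.
    same_doc = doc_ids.unsqueeze(0) == doc_ids.unsqueeze(1)
    return causal & same_doc

# Example: three packed documents of lengths 3, 2, 2.
doc_ids = torch.tensor([0, 0, 0, 1, 1, 2, 2])
mask = block_causal_mask(doc_ids)
# The first token of document 1 (index 3) cannot attend to document 0's tokens:
assert not mask[3, :3].any()
```

In practice I assume this would not be materialized as a dense mask but expressed through FlashAttention's variable-length (cu_seqlens) interface, which is why I'm asking whether the pretraining code already handles it.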

Thanks a lot!