princeton-nlp / LLM-Shearing

[ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning
https://arxiv.org/abs/2310.06694
MIT License

Why the rope params are ignored while converting hf checkpoint to composer checkpoint? #66

Open ZhiYuanZeng opened 8 months ago

ZhiYuanZeng commented 8 months ago

I found that the RoPE params are ignored in composer_to_hf.py, and that the RoPE base in composer_llama.py is fixed at a constant 10000. However, it is common to tune the base for better long-context performance. Shouldn't we therefore set the RoPE params (inv_freq) in composer_to_hf.py?
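For context, the `inv_freq` buffer is derived directly from the RoPE base, so a hard-coded base silently changes the rotary frequencies whenever a model was trained with a different one. A minimal sketch (the function and variable names here are illustrative, not from the repo):

```python
import torch

def rope_inv_freq(base: float, dim: int) -> torch.Tensor:
    # Standard RoPE inverse frequencies: base^(-2i/dim) for i in [0, dim/2)
    return 1.0 / (base ** (torch.arange(0, dim, 2).float() / dim))

# A model tuned for long context may use a larger base, e.g. 1e6.
# Converting with a fixed base of 10000 would produce different frequencies:
freqs_10k = rope_inv_freq(10000.0, 128)
freqs_1m = rope_inv_freq(1_000_000.0, 128)
print(torch.allclose(freqs_10k, freqs_1m))  # → False
```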

zhangzhenyu13 commented 8 months ago

RoPE is in fact not trained; its tensors are fixed registered buffers. It is fine to apply the default RoPE settings without any modification.

ZhiYuanZeng commented 8 months ago

Yes, RoPE is parameter-free, but its base is often tuned to support long-context extrapolation. The base in ComposerMosaicLlama is fixed at 10000. This configuration works for the standard LLaMA model, but it may be incorrect for LLaMA variants.

ZhiYuanZeng commented 8 months ago

It would be better, though, to set the RoPE base from the config file rather than loading it from the checkpoint.
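Reading the base from the config could look roughly like this. A hedged sketch, not the repo's actual conversion code; it assumes the HF-style config key `rope_theta`, which recent HF LLaMA configs use for the RoPE base:

```python
# Sketch: take the RoPE base from the model config instead of hard-coding
# 10000 or trying to load inv_freq from the checkpoint.
def get_rope_base(hf_config: dict) -> float:
    # `rope_theta` is the HF LLaMA config key for the RoPE base;
    # fall back to the standard default of 10000.0 if absent.
    return float(hf_config.get("rope_theta", 10000.0))

print(get_rope_base({"rope_theta": 1000000.0}))  # → 1000000.0
print(get_rope_base({}))                         # → 10000.0
```

Since `inv_freq` is fully determined by the base and head dimension, carrying the base through the config is sufficient; there is no need to serialize the buffer itself.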
