FranxYao / Long-Context-Data-Engineering

Implementation of paper Data Engineering for Scaling Language Models to 128K Context

When did you perform dynamic-NTK? #10

Open Liu-yuliang opened 5 months ago

Liu-yuliang commented 5 months ago

Hi, I see you used dynamic NTK in llama-7b-80k. I'm curious about when you applied it: before or after the training phase? Thank you for your reply.

FranxYao commented 5 months ago

Before training. Also note that this approach is ultimately equivalent to modifying the base of RoPE. My take is that as long as you enlarge the RoPE base enough to cover contexts beyond 128K and then continue training the model, you are good to go (i.e., it does not matter whether you use linear / NTK / whatever RoPE modification).
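
For concreteness, here is a minimal sketch (not the repository's actual training code) of what "modify the RoPE base, then continue training" can look like with the HuggingFace `transformers` LlamaConfig. The `rope_theta` value below is an illustrative assumption, not necessarily the exact base used for llama-7b-80k.

```python
# Minimal sketch: enlarge the RoPE base before continued pretraining.
# Values are illustrative assumptions, not the repo's exact settings.
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("meta-llama/Llama-2-7b-hf")
config.rope_theta = 5_000_000.0          # enlarged RoPE base (assumed value)
config.max_position_embeddings = 131072  # target 128K context window

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    config=config,
)
# ...then continue training this model on long-context data.
```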