CStanKonrad / long_llama

LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method.
Apache License 2.0
1.45k stars · 85 forks

Finetuning code? #1

Open StrangeTcy opened 1 year ago

StrangeTcy commented 1 year ago

That sounds massively interesting, and while we try to run inference and read the paper, should we expect the release of the finetuning code?

syzymon commented 1 year ago

Hi, thanks for your interest in our work! That's right, we currently support only inference. We are considering releasing examples for finetuning our models through the PyTorch/Hugging Face API.
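
For context, inference already runs through the standard Hugging Face interface. A minimal sketch (the hub id and dtype below are assumptions on my part, check the repo README for the exact snippet):

```python
# Minimal inference sketch, not official code: load a LongLLaMA checkpoint
# via the Hugging Face API. Hub id and dtype are assumptions.
import torch
from transformers import AutoModelForCausalLM, LlamaTokenizer

model_id = "syzymon/long_llama_3b"  # assumed hub id
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float32,
    trust_remote_code=True,  # LongLLaMA ships custom modeling code on the hub
)

prompt = "My name is Julien and I like to"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```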

memray commented 1 year ago

@syzymon Is there any plan to release the training pipeline (is it based on the EasyLM library)? Thank you!

SUSTechBruce commented 1 year ago

Hoping to see your finetuning code ASAP, since your work is very interesting!

syzymon commented 1 year ago

The continued pretraining pipeline (used to train the long_llama_3b base model) is based on EasyLM.

We are planning to release instruction tuning code in PyTorch, along with checkpoints and examples, early next week. Stay tuned!
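
In the meantime, here is a rough sketch of what instruction tuning through the PyTorch/Hugging Face API could look like. This is not the released pipeline; the hub id, dataset, and hyperparameters are placeholders:

```python
# Hedged sketch of instruction finetuning a LongLLaMA checkpoint with the
# Hugging Face Trainer. Hub id, dataset, and hyperparameters are placeholders.
import torch
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    DataCollatorForLanguageModeling,
    LlamaTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "syzymon/long_llama_3b"  # assumed hub id
tokenizer = LlamaTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, trust_remote_code=True
)

# Placeholder instruction dataset with an "instruction"/"output" schema.
dataset = load_dataset("tatsu-lab/alpaca", split="train[:1%]")

def to_features(example):
    # Concatenate prompt and response into a single causal-LM training example.
    text = f"{example['instruction']}\n{example['output']}{tokenizer.eos_token}"
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(to_features, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="longllama-instruct",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-5,
        num_train_epochs=1,
        bf16=True,
        logging_steps=10,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```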

puddleglum56 commented 1 year ago

Will you also be releasing the pretraining code? Since the contrastive training seems to be a key element of your great results, it would be nice if we could try to recreate it.

syzymon commented 1 year ago

We are working on LongLLaMA v2, which will be a bigger release. After that, we will release the pretraining code, which is written in JAX and based on the EasyLM codebase (the same one used for OpenLLaMA pretraining). You can expect the instruction finetuning code in PyTorch to be out very soon (basically next week). There are no plans on our side to implement FoT pretraining in PyTorch, as our compute is TPU-based. Stay tuned for LongLLaMA v2, which will definitely be out in August!
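
For anyone who wants to experiment before the JAX code is out, below is a toy PyTorch illustration of the cross-batch attention idea from the FoT paper: queries in a memory layer also attend to keys/values from other documents in the batch, which act as negatives. This is a sketch of the concept only, not the actual training pipeline:

```python
# Toy sketch of the FoT cross-batch idea: each query attends over keys/values
# from its own context plus keys/values from other documents in the batch
# (the cross-batch negatives). Illustration only, not the authors' code.
import torch
import torch.nn.functional as F

def cross_batch_attention(q, k, v):
    """q, k, v: (batch, seq, dim); each batch element is a different document."""
    b, s, d = k.shape
    # Pool keys/values across the whole batch so every query also sees
    # keys from unrelated documents.
    k_pool = k.reshape(1, b * s, d).expand(b, -1, -1)
    v_pool = v.reshape(1, b * s, d).expand(b, -1, -1)
    scores = torch.einsum("bqd,bkd->bqk", q, k_pool) / d**0.5
    return torch.einsum("bqk,bkd->bqd", F.softmax(scores, dim=-1), v_pool)

q = torch.randn(4, 128, 64)  # 4 documents, 128 tokens each, head dim 64
k = torch.randn(4, 128, 64)
v = torch.randn(4, 128, 64)
print(cross_batch_attention(q, k, v).shape)  # torch.Size([4, 128, 64])
```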

syzymon commented 1 year ago

In case you haven't seen it, the instruction tuning code is already there! See https://twitter.com/s_tworkowski/status/1687620785379360768 and the READMEs in this repo for more details.