CStanKonrad / long_llama

LongLLaMA is a large language model capable of handling long contexts. It is based on OpenLLaMA and fine-tuned with the Focused Transformer (FoT) method.
Apache License 2.0

Code for zero-shot arxiv evaluation #10

Open bronyayang opened 1 year ago

bronyayang commented 1 year ago

Hi,

Can you provide the code for, or more detail on, how you zero-shot evaluate on the arXiv dataset? I cannot get good results when trying arXiv summarization. I guess this is because I don't know the right prompt, or because the model is not 7B?

syzymon commented 1 year ago

Hi,

Thanks for your interest in our work! In our paper, the only results we report on arXiv are language-modeling perplexity numbers for small models; we do not evaluate LongLLaMA on the downstream task of arXiv summarization. Note that our model is not instruction-tuned, which means it cannot really do zero-shot summarization. You could try few-shot summarization (though I am not sure a 3B model can really do that), or prompt engineering to match the format of your target document; see the sketch below. Also, please stay tuned for the upcoming instruction-tuned models, which will definitely be able to do some summarization!
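
For reference, here is a rough sketch of what few-shot summarization could look like with the 3B checkpoint from the Hugging Face Hub. Model loading follows this repo's README; the prompt format, placeholder texts, and generation settings are illustrative assumptions, not an evaluated setup:

```python
import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

# Load as in the README; trust_remote_code is needed because LongLLaMA
# ships custom modeling code for the FoT memory layers.
tokenizer = LlamaTokenizer.from_pretrained("syzymon/long_llama_3b")
model = AutoModelForCausalLM.from_pretrained(
    "syzymon/long_llama_3b",
    torch_dtype=torch.float32,
    trust_remote_code=True,
)

# Illustrative few-shot prompt: one (article, summary) demonstration
# followed by the target article. The exact format is an assumption;
# you will likely need to experiment with it.
demo_article = "..."    # a short example article
demo_summary = "..."    # its reference summary
target_article = "..."  # the arXiv paper you want summarized

prompt = (
    f"Article: {demo_article}\n"
    f"Summary: {demo_summary}\n\n"
    f"Article: {target_article}\n"
    f"Summary:"
)

input_ids = tokenizer(prompt, return_tensors="pt").input_ids
output = model.generate(
    input_ids=input_ids,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
# Decode only the newly generated tokens (the summary continuation).
print(tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True))
```

No guarantees on quality at 3B; adding more demonstrations (as context length allows) and tuning the sampling parameters may help.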