abacusai / Long-Context

This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. Also included are evaluation scripts and benchmark tasks that evaluate a model’s information retrieval capabilities with context expansion. We also include key experimental results and instructions for reproducing and building on them.
Apache License 2.0

About "train = 16, eval = 12 (Non IFT)" #9

Open zhongmz opened 6 months ago

zhongmz commented 6 months ago

[attached image: plot whose legend includes a yellow line labeled "train = 16, eval = 12 (Non IFT)"]

Could you please clarify whether the yellow line labeled "train = 16, eval = 12 (Non IFT)" should instead read "train = 16, eval = 12 (IFT)"? As I understand it, the training at 16 must have undergone Instruction Finetuning (IFT). Could you help clear this up?