This repository contains code and tooling for the Abacus.AI LLM Context Expansion project. It also includes evaluation scripts and benchmark tasks that measure a model's information-retrieval capabilities under context expansion, along with key experimental results and instructions for reproducing and building on them.
Could you please clarify whether the yellow line "train = 16, eval = 12 (Non IFT)" should be corrected to "train = 16, eval = 12 (IFT)"? My understanding is that the training run with 16 samples must undergo Instruction Finetuning (IFT). Could you please help resolve this?