Hi, I'm curious about the experiment TogetherAI reproduced from the paper "Lost in the Middle". It seems that Llama-2 and Llama-2-32K do not show the U-shaped performance curve in long-context QA. Did TogetherAI use the same settings as in "Lost in the Middle"?