Open Cooperx521 opened 2 days ago
Congratulations on the insightful paper!
```python
text_inputs = get_input_ctx_multi(tokenizer=tokenizer, ctx_len=test_max_length,
                                  question=question, needles=needles)
inputs = tokenizer(text_inputs, return_tensors="pt",
                   return_token_type_ids=False).to(model.device)
prompt_length = inputs.input_ids.size()[-1]
sample = model.generate(**inputs, do_sample=False, max_new_tokens=max_new_tokens)
```
It seems that in the file test_niah_llama.py, no system message is added to the input before model.generate. Was this omission intentional, or is it something that needs to be added manually?
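For reference, if one did want to add a system message manually, the usual route with transformers would be `tokenizer.apply_chat_template` on a `messages` list. A minimal string-level sketch is below; the chat-template tokens shown are Llama-3-style and the system prompt text is my own assumption, not something taken from test_niah_llama.py:

```python
# Hypothetical sketch: prepend a system turn before tokenizing.
# The special tokens below assume a Llama-3-style chat template;
# the system prompt text is an assumption, not from the repo.

def build_prompt_with_system(user_text: str,
                             system_text: str = "You are a helpful assistant.") -> str:
    """Wrap the NIAH context in a chat layout that includes a system turn."""
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_text}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_text}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_prompt_with_system("Context with needles... What is the needle?")
# prompt would then be passed to tokenizer(...) in place of text_inputs
```

If the omission was intentional, I would be happy to know the reasoning, since the template used can affect retrieval scores.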
Moreover, if I want to evaluate vision-language models on NIAH, do I need to add a system message?
Best regards, Long