HKUNLP / STRING

Data and code for our paper "Why Does the Effective Context Length of LLMs Fall Short?"
MIT License

Questions about system prompt in NIAH #5

Open Cooperx521 opened 6 days ago

Cooperx521 commented 6 days ago

Congrats on the insightful paper!

    # Build the needle-in-a-haystack context at the target length
    text_inputs = get_input_ctx_multi(tokenizer=tokenizer, ctx_len=test_max_length, question=question, needles=needles)
    # Tokenize and move to the model's device
    inputs = tokenizer(text_inputs, return_tensors="pt", return_token_type_ids=False).to(model.device)
    prompt_length = inputs.input_ids.size()[-1]
    # Greedy decoding for the answer
    sample = model.generate(**inputs, do_sample=False, max_new_tokens=max_new_tokens)

It seems that in test_niah_llama.py no system message is added to the input before model.generate. Was this omission intentional, or does the system message need to be added manually?
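For instance, would it be prepended to the context string before tokenization, along these lines? (`SYSTEM_PROMPT` is just a placeholder, not something from the repo.)

    SYSTEM_PROMPT = "You are a helpful assistant."  # placeholder text
    text_inputs = SYSTEM_PROMPT + "\n\n" + text_inputs
    inputs = tokenizer(text_inputs, return_tensors="pt", return_token_type_ids=False).to(model.device)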

Cooperx521 commented 4 days ago

Moreover, if I want to evaluate vision-language models on NIAH, is it necessary to add a system message?

Best regards, Long

ChenxinAn-fdu commented 1 hour ago

Hi! In fact, we have the system prompt in this line.

In this paper, we test only base models, which have not gone through the SFT stage. To test Instruct models, it is necessary to include a system prompt formatted according to their chat templates. For example, for Llama3.1-Instruct:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Cutting Knowledge Date: December 2023
Today Date: 23 July 2024

[YOUR SYSTEM PROMPT]<|eot_id|><|start_header_id|>user<|end_header_id|>
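With Hugging Face tokenizers, this template does not have to be written by hand; here is a minimal sketch using `apply_chat_template`, assuming the checkpoint ships a chat template (the model name and system prompt text below are placeholders, not the ones used in the paper):

    from transformers import AutoTokenizer

    # Placeholder checkpoint; substitute the Instruct model under test.
    tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},  # placeholder system prompt
        {"role": "user", "content": text_inputs},  # NIAH context + question
    ]

    # Renders the <|begin_of_text|>...<|eot_id|> structure shown above and
    # appends the assistant header so generation starts at the model's answer.
    prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

    # add_special_tokens=False avoids inserting a second <|begin_of_text|>.
    inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)

If the checkpoint's template matches the one above, the rendered string will also include the knowledge-cutoff and date header automatically.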