AI-secure / DecodingTrust

A Comprehensive Assessment of Trustworthiness in GPT Models
https://decodingtrust.github.io/
Creative Commons Attribution Share Alike 4.0 International

When should we set `example_prefix` to True? And what is the difference between putting ICL examples into the system prompt versus a multi-turn user-assistant chat? #49

Open peter-peng-w opened 5 months ago

peter-peng-w commented 5 months ago

I understand that `example_prefix` is used during ICL so that we can put examples into the system prompt, especially when using GPT-4. However, I have several questions about this feature:

  1. What is the difference between putting ICL examples into the system prompt versus putting them into a multi-turn conversation between user and assistant after the system prompt?
  2. Is it recommended only for GPT-4, or for both GPT-4 and GPT-3.5? I found some discussion of the difference between GPT-4 and GPT-3.5 when they take system prompts with different `name` attributes; what do the authors suggest regarding this issue?
  3. If my understanding is correct, when using open-source models such as Llama-2 we shouldn't set `example_prefix` to true, as this causes issues with the prompt. To be more concrete: when `example_prefix` is used, the HFChat class in `src/dt/chat.py` constructs multiple system prompts (two per ICL example, with the names example_user and example_assistant). However, at line 395 the implementation seems to overwrite the previous system prompt with the latest one (i.e., `conv.system`) rather than concatenating the system prompts together, which will cause issues when loading multiple system prompts.
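
To make question 1 concrete, here is a minimal sketch of the two layouts using the OpenAI chat message format (the `name` field on system messages is what the issue describes for `example_prefix=True`; the example content is illustrative, not from the DecodingTrust codebase):

```python
# One ICL example ("What is 2+2?" -> "4") supplied in two different layouts
# before the real query.
icl_examples = [("What is 2+2?", "4")]
system_text = "You are a helpful assistant."
query = "What is 3+3?"

# Layout 1 (example_prefix=True): ICL examples as extra system messages,
# distinguished by the `name` field (example_user / example_assistant).
prefix_messages = [{"role": "system", "content": system_text}]
for q, a in icl_examples:
    prefix_messages.append({"role": "system", "name": "example_user", "content": q})
    prefix_messages.append({"role": "system", "name": "example_assistant", "content": a})
prefix_messages.append({"role": "user", "content": query})

# Layout 2 (example_prefix=False): ICL examples as prior user/assistant
# turns after the system prompt.
chat_messages = [{"role": "system", "content": system_text}]
for q, a in icl_examples:
    chat_messages.append({"role": "user", "content": q})
    chat_messages.append({"role": "assistant", "content": a})
chat_messages.append({"role": "user", "content": query})
```

Both lists end with the same user query; they differ only in how the demonstration turns are encoded.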

Please help me verify my understanding here. Thanks!
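
The overwrite behavior described in point 3 can be sketched with a toy conversation object (the class and method names here are hypothetical stand-ins, not the actual HFChat/fastchat code):

```python
class Conv:
    """Toy stand-in for a chat template holding a single system slot."""
    def __init__(self):
        self.system = ""

    def set_system(self, text):
        # Pattern described in the issue: each new system message
        # overwrites conv.system, so only the last ICL message survives.
        self.system = text

    def append_system(self, text):
        # Concatenating instead would preserve every system message.
        self.system = (self.system + "\n" + text).strip()

messages = ["example_user: What is 2+2?", "example_assistant: 4"]

conv = Conv()
for msg in messages:
    conv.set_system(msg)      # only "example_assistant: 4" remains

conv2 = Conv()
for msg in messages:
    conv2.append_system(msg)  # both messages preserved
```

With two system messages per ICL example, the overwriting variant silently drops all but the final one, which is why `example_prefix` is problematic for open-source models.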

danielz02 commented 4 months ago

Thanks for reaching out! This understanding is largely correct. We will move away from using `example_prefix` when evaluating open-source models.