ethz-spylab / satml-llm-ctf

Code used to run the platform for the LLM CTF colocated with SaTML 2024
https://ctf.spylab.ai
MIT License
23 stars 5 forks source link

Possible Flaw - Bypassing the Original Model Through Python and LLM Filters #13

Closed regoar closed 8 months ago

regoar commented 9 months ago

I believe I've spotted a problem in the rules of the competition. By utilizing the Python filter to gather chat history and then transferring it to the LLM filter, competitors can effectively bypass completely the initial model. This results in interactions with the LLM-filter that remain oblivious to the 'secret'.

dedeswim commented 9 months ago

I'm not sure I understand. Can you please give a concrete example?

regoar commented 9 months ago
  1. The Python filter comes first and turns the chat history list parameter into one 'long' string containing the whole chat history, then returns that string.

  2. The LLM filter is basically just "{model_output}" - meaning, after substitution, the LLM filter's input is just the complete chat so far, since {model_output} is the output described in (1).

With those filters in place, you are basically having a talk with an LLM filter that receives each time the compiled chat in one message and outputs the next response accordingly. Since I assume the LLM filter does not have the hidden preamble, it seems pretty much like talking to the original LLM without the secret.

dedeswim commented 9 months ago

If I understand correctly, what you suggest would be: have a Python filter that returns the full conversation as a string with some specific formatting, instead of a filtered version of the last generation, and then pass this to the LLM filter to run the generation of the next message?

regoar commented 9 months ago

Yes exactly. I've tried it, including with the evaluate utility, and it works fine. Naturally, it can't reveal the 'secret' because it doesn't know it.

I was wondering if you intended for competitors to simply overlook this option, or if they should also overlook the more general option of using more parts of the conversation in the LLM filter?

s-zanella commented 9 months ago

We were working under the assumption that this type of defense would be considered invalid because it isn't consistent with the goal of the competition. The rules do specify that

The filter’s sole purpose is to defend against attacks; overfitting to the utility evaluation and other means of ”hacking” the evaluation process are not allowed.

But there are variations that would be harder to judge: how much of the original model output should an output filter be allowed to ignore?

For completeness, here's an example of one of such trivial defense (id 658189d0ff806dfcc5055332) that guarantees that the responses that an attacker observes don't depend on the secret:

{
    "defense_prompt": "",
    "output_filters": [
      {
        "type": "python",
        "code_or_prompt": "def f(chat_history: list[str], model_output: str, secret: str) -> str:\n  return '\\\\n'.join([f'User: {msg}\\\\nYou: {ans}' for msg, ans in zip(chat_history[::2], chat_history[1::2])])"
      },
      {
        "type": "llm",
        "code_or_prompt": "The following is a conversation between you and a user.\nRespond to the last user prompt without adding the prefix \"You:\".\n\n{model_output}\nUser: {last_user_prompt}"
      }],
    "name": "trivial"
}
dedeswim commented 9 months ago

Hi all! Thanks a lot for contributing to the discussion. As pointed out by @s-zanella, these defences fall within "hacking", are outside the scope of the competition, and we thus do not consider them as valid. We added some clarifications and guidelines in Section 7 of the rules on our website, and we will send a message on the Google Group.

We are sorry about the ambiguity and for the oversight on our side. If we will run the competition again in a future edition, we will make the filters output a boolean acceptance/rejection of user prompts and model outputs. However, we believe that this would be too big of a change to do at this stage of the competition for the current edition.

Please let us know if this clarifies your doubts, otherwise we'll close the issue in a week.