aaronik / GPTModels.nvim

GPTModels - a multi-model, window-based LLM AI plugin for Neovim, with an emphasis on stability and clean code
MIT License

LLM prompt exposed when working with llama3 #3

Closed · pillzu closed this issue 1 month ago

pillzu commented 2 months ago

Not sure if this is an issue with the plugin or with the LLM generation itself, but I'm adding it here anyway. Feel free to close it if it's out of scope for this project.

I was using the code-window when I noticed this:

On Deck:

(Down)

Prompt

Strip the above output off the brackets using python in the smallest code possible

Output by llama3:latest

```python
s = "(system\r\n    You are a code generator.\r\n    You only respond with code.\r\n    Do not include any explanations.\r\n    Do not use backticks. Do not include ``` at all.)"
print(s.replace("(", "").replace(")", ""))
```

Would we want to add safeguards to handle this scenario in any way?

aaronik commented 2 months ago

Hah, yeah, I've noticed this every once in a while. In certain applications I think it might be very important to harden the prompt: for lots of organizations, the prompt is the real secret sauce. I've actually seen companies that host their prompt on their own servers check responses for fragments of that prompt and strip them programmatically before the response reaches the user.
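A minimal sketch of that kind of programmatic check, just to illustrate the idea. The prompt text and the `redact_prompt_leak` helper below are hypothetical, not anything this plugin actually implements:

```python
# Hypothetical post-processing filter: strip verbatim fragments of the
# system prompt out of a model response before showing it to the user.
SYSTEM_PROMPT = (
    "You are a code generator.\n"
    "You only respond with code.\n"
    "Do not include any explanations.\n"
)

def redact_prompt_leak(response: str, prompt: str = SYSTEM_PROMPT) -> str:
    """Replace any prompt line that appears verbatim in the response."""
    for line in prompt.splitlines():
        line = line.strip()
        if line:
            response = response.replace(line, "[redacted]")
    return response

leaked = 'print("You are a code generator.")'
print(redact_prompt_leak(leaked))  # print("[redacted]")
```

A real check would probably also catch paraphrases or partial matches (e.g. via fuzzy matching), since exact substring replacement is easy for a model to slip past.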

For this, though, I don't think it really matters. If someone wants to see the prompt, it's right there on their own filesystem. And I haven't put much effort into the prompt, so if it leaks a little, I really don't mind.