Closed pillzu closed 1 month ago
Hah yeah I've noticed this every once in a while. In certain applications I think it might be very important to harden the prompt - for lots of organizations, the prompt is the real secret sauce. I've actually seen some companies who host their prompt on their own servers do a check for elements of that prompt being returned, and remove them programmatically before the response is returned to the user.
For this, though, I don't think it really matters. If someone wants to see the prompt, well it's right there on their own filesystem. And I haven't put much effort into the prompt, so if it leaks a little, I really don't mind
Not sure if this is an issue with the plugin or the LLM generation, so adding it here anyways. Feel free to close it if that's out of the scope of this project
I was using the code-window when I noticed this:
On Deck:
Prompt
Output by
llama3:latest
Would we want to add safe-guards to handle this scenario in any way?