One way is to run the code in a sandboxed environment so it does not interact with the host system. Another approach is to check the LLM output with another model before executing it, to see whether any malicious code is present.
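For the sandboxing idea, here is a minimal sketch, assuming process-level isolation is enough as a first step (a real sandbox would still need a container or a low-privilege user); the `run_sandboxed` name and the timeout value are just illustrative:

```python
import subprocess
import sys

def run_sandboxed(code: str, timeout: float = 5.0) -> subprocess.CompletedProcess:
    """Run LLM-generated code in a separate Python process instead of exec().

    Note: this only isolates the interpreter state; the child process still
    has the same filesystem and network access as the host user, so a
    container or dedicated restricted account is needed for real safety.
    """
    return subprocess.run(
        [sys.executable, "-I", "-c", code],  # -I = isolated mode: ignore env vars and user site-packages
        capture_output=True,  # collect stdout/stderr instead of printing into the host process
        text=True,
        timeout=timeout,      # kill code that hangs or loops forever
    )
```

The caller would then inspect `result.returncode` and `result.stdout` instead of letting the generated code run inside the main process.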
I think checking the LLM output with another model could work, but it goes against the principle of "never fully trust the LLM", so it shouldn't be the only solution.
Open to other approaches as well!
@GGyll, as stated in the video comments, we could also use a regex to detect whether the generated code imports any Python libraries that can run system commands. Can I work on this issue?
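A rough sketch of the regex idea (the module list and the `looks_dangerous` name are placeholders, not an exhaustive blocklist):

```python
import re

# Illustrative (not exhaustive) list of modules that can run system
# commands, touch the filesystem, or open network connections.
DANGEROUS_MODULES = ("os", "subprocess", "shutil", "socket", "ctypes", "sys")

_IMPORT_RE = re.compile(
    r"^\s*(?:import|from)\s+(?:" + "|".join(DANGEROUS_MODULES) + r")\b",
    re.MULTILINE,
)
_BUILTIN_RE = re.compile(r"\b(?:__import__|eval|exec|open)\s*\(")

def looks_dangerous(code: str) -> bool:
    """Return True if the generated code imports a risky module or calls a risky builtin."""
    return bool(_IMPORT_RE.search(code) or _BUILTIN_RE.search(code))
```

A check like this is easy to bypass (for example `importlib.import_module("o" + "s")`), so it should complement sandboxing rather than replace it.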
Yes, go ahead, this sounds good
Cool. Can you then assign this issue to me so that I can get started?
Hey @GGyll, I have opened a PR to solve this issue. Can you review it?
will review later this week
In main.py we are executing any Python code returned by the LLM with exec(code), but if the LLM returns malicious code it can damage our system! See if there are any safeguards we can put in place to prevent that.
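For illustration only, a hedged sketch of what a guard in front of the exec(code) call might look like; the pattern list is purely illustrative and this is not the actual main.py code:

```python
import re

# Hypothetical guard around the existing exec(code) call; the blocklist is
# illustrative and should not be treated as a complete defence on its own.
_BLOCKED = re.compile(r"\b(?:os|subprocess|shutil|socket)\b|__import__|\beval\s*\(")

def run_llm_code(code: str) -> None:
    """Execute LLM output only if it does not reference known-dangerous names."""
    if _BLOCKED.search(code):
        raise ValueError("Refusing to execute: generated code matched a blocked pattern")
    exec(code)  # still risky by itself; best combined with the sandboxing approach above
```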