camel-ai / camel

🐫 CAMEL: Finding the Scaling Law of Agents. A multi-agent framework. https://www.camel-ai.org

[Feature Request] Unsafe code execution support #331

Closed dandansamax closed 6 months ago

dandansamax commented 8 months ago

Required prerequisites

Motivation

The current implementation only allows executing safe Python code. To give users more flexibility, we should add an unsafe code execution mode.

Solution

Use the Python subprocess package to run commands.
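Roughly along these lines (a minimal sketch; the `run_code` helper and the interpreter table are illustrative, not an actual API):

```python
# Minimal sketch: run generated code in a separate process and capture output.
# run_code and the interpreter mapping are illustrative names, not a real API.
import subprocess

def run_code(code: str, language: str = "python", timeout: int = 30) -> str:
    """Execute a code snippet via subprocess and return its combined output."""
    interpreters = {"python": ["python3", "-c"], "bash": ["bash", "-c"]}
    cmd = interpreters[language] + [code]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    return result.stdout + result.stderr
```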

Alternatives

No response

Additional context

No response

47h4rv4-b commented 8 months ago

@dandansamax what does safe Python mode mean?

I wanted to execute LLM-generated code and save the produced results (P.S. does this mean it's unsafe?), so any idea how to do this instead of just copy-pasting?

dandansamax commented 8 months ago

Hi @atharvaBaste. Thank you so much for helping us improve this project.

I wanted to execute LLM-generated code and save the produced results (P.S. does this mean it's unsafe?)

Yes, it's definitely unsafe if we don't apply any restrictions. We know that current LLMs hallucinate and may generate incorrect code. Running code and encountering errors is not a big problem. However, if the agent generates malicious code and we execute it without asking the user, it can be extremely harmful. It could leak user privacy, delete files, or even crash the system.

What does safe Python mode mean?

In the current safe execution mode, the interpreter parses the generated Python code with the ast module into an abstract syntax tree, then executes the tree statement by statement in the running Python environment (the same environment as the main agent). This design has several advantages. However, it is too complicated for both usage and maintenance and may not be that useful.
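As a rough illustration of the idea (not the actual interpreter implementation; the whitelisting logic is omitted), statement-by-statement execution through the ast module could look like this:

```python
# Rough sketch of AST-based, statement-by-statement execution.
# Not the project's real interpreter; safety checks are omitted.
import ast

def execute_stepwise(code: str, env: dict) -> None:
    tree = ast.parse(code)
    for node in tree.body:
        # Each top-level statement could be inspected here (e.g., rejecting
        # imports or calls outside a whitelist) before it is executed.
        module = ast.Module(body=[node], type_ignores=[])
        exec(compile(module, filename="<agent>", mode="exec"), env)

env: dict = {}
execute_stepwise("x = 1 + 2\nprint(x)", env)  # prints 3; env now contains x
```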

so any idea how to do this instead of just copy-pasting?

Yes, this issue is trying to solve exactly that. I'm working on executing the generated code through the subprocess module, so that the Embodied Agent is able to execute code in any programming language (Python, shell script, C, ...). For safety, the executor will ask the user to review the code before executing it. But the executor itself does not ensure security, so we call it "unsafe mode".
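The confirmation flow could look roughly like this (a sketch with hypothetical names, not the final implementation):

```python
# Sketch of an "unsafe mode" executor that asks the user before running code.
# execute_unsafe is a hypothetical name, not the actual API.
import subprocess

def execute_unsafe(code: str, command: list, timeout: int = 60) -> str:
    print("The agent wants to execute the following code:\n")
    print(code)
    if input("\nRun it? [y/N] ").strip().lower() != "y":
        return "Execution cancelled by the user."
    result = subprocess.run(command + [code], capture_output=True,
                            text=True, timeout=timeout)
    return result.stdout + result.stderr

# Example: run a Python snippet after user confirmation.
print(execute_unsafe("print('hello from the agent')", ["python3", "-c"]))
```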

Please feel free to ask if you have any further questions or suggestions.

47h4rv4-b commented 8 months ago

Hi @dandansamax,

Thanks for helping me get to know the framework better.

So subprocess is like parallel programming, right? We try to keep the main process safe while only the new process is at risk. We could also try sandboxing.

I had a question though: in AutoGen I didn't encounter any such restriction on execution; it generated and executed code for my prompts automatically. Any idea how AutoGen prevents malicious code from being executed, or has it just accepted this risk, as far as you know?

dandansamax commented 8 months ago

The sandboxing idea is good; we should try to implement it.
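One possible direction, purely as a sketch (assuming Docker is available; the image and resource limits are arbitrary choices, not a committed design):

```python
# Sketch: sandbox generated Python code in a throwaway Docker container.
# Assumes Docker is installed; image and flags are illustrative, not decided.
import subprocess

def run_sandboxed(code: str, timeout: int = 60) -> str:
    cmd = [
        "docker", "run", "--rm",
        "--network", "none",   # no network access from inside the sandbox
        "--memory", "256m",    # cap memory usage
        "python:3.11-slim",
        "python3", "-c", code,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout)
    return result.stdout + result.stderr
```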