code-yeongyu / AiShell

AiShell: A Natural Language Shell like GitHub Copilot X, Powered by ChatGPT

Implement Safety Measures for Dangerous Commands in `AiShell` #2

Closed code-yeongyu closed 1 year ago

code-yeongyu commented 1 year ago

It would be great if the system could implement some safety measures for commands that could potentially cause harm, such as destructive file or disk operations (e.g. `rm -rf`).

The idea is that, before executing any dangerous command, AiShell could ask the user for confirmation. This way, accidental execution of harmful commands can be prevented.
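
A minimal sketch of what such a guard might look like, assuming a hand-maintained pattern list (the patterns and function names below are hypothetical illustrations, not AiShell's actual implementation):

```python
import re
import subprocess

# Hypothetical list of dangerous patterns; a real list would need careful tuning.
DANGEROUS_PATTERNS = [
    r"\brm\s+-\w*[rf]",      # recursive/forced rm
    r"\bdd\b.*\bof=/dev/",   # writing raw data to a device
    r"\bmkfs(\.\w+)?\b",     # formatting a filesystem
    r":\(\)\s*\{.*\};\s*:",  # classic fork bomb
]


def is_dangerous(command: str) -> bool:
    """Return True if the command matches a known dangerous pattern."""
    return any(re.search(pattern, command) for pattern in DANGEROUS_PATTERNS)


def run_with_guard(command: str) -> None:
    """Ask the user for confirmation before running a dangerous command."""
    if is_dangerous(command):
        answer = input(f"'{command}' looks dangerous. Run it anyway? [y/N] ")
        if answer.strip().lower() != "y":
            print("Aborted.")
            return
    subprocess.run(command, shell=True)
```

The weakness of this approach is the pattern list itself: it can never be exhaustive, and a command that slips past it runs without any confirmation.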

code-yeongyu commented 1 year ago

Now considering just asking a Y/N confirmation prompt every time before executing a command, regardless of whether it looks dangerous, since that's a much simpler approach to implement.
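
That simpler approach might look like this (again a hypothetical sketch, not the project's actual code): no pattern matching at all, just an explicit approval step for every generated command.

```python
import subprocess


def run_with_confirmation(command: str) -> None:
    """Show the generated command and run it only on explicit approval."""
    answer = input(f"Execute '{command}'? [y/N] ")
    if answer.strip().lower() == "y":
        subprocess.run(command, shell=True)
    else:
        print("Skipped.")
```

The trade-off is extra friction on harmless commands in exchange for never having to decide what counts as "dangerous".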