cmdh (short for Command Helper) is a tool that invokes LLM models provided by ollama or OpenAI to convert a command request into a desired command. Use it to look up commands and flags that you don't know offhand, or to generate complex commands with chaining.
## Installation

```shell
git clone https://github.com/pgibler/cmdh.git && cd cmdh && ./install.sh
```
## Usage

```shell
cmdh 'Output the number of lines of code committed to git last month'
```
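For context, the kind of command cmdh might generate for the request above could look like the following sketch. This is illustrative only; the actual command depends entirely on the model's response.

```shell
# Illustrative sketch of a command cmdh might generate for the request
# above: sums the lines added to the current git repo in the last month.
git log --since='1 month ago' --numstat --pretty='' \
  | awk '{added += $1} END {print added + 0}'
```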
NOTE: You will have to reload your `.bashrc` / `.zshrc` / etc. or open a new terminal to make the `cmdh` command available in the shell. On Debian / Ubuntu, this is done by running `source ~/.bashrc`.
## Configuration

Before running cmdh, you will need to configure an LLM host and set the configuration options required by that host.
Run:

```shell
cmdh configure
```

to start the configuration wizard. You will be asked to select an LLM host and input the settings required by that host.

Run:

```shell
cmdh configure show
```

to display your current configuration.

### OpenAI

Run:

```shell
cmdh configure
```

and select the OpenAI option.

### ollama

Install ollama:

```shell
curl https://ollama.ai/install.sh | sh
```
Pull the codellama model:

```shell
ollama pull codellama
```
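Before pointing cmdh at ollama, you can sanity-check that the ollama server is up. This sketch assumes ollama's default API port, 11434; adjust if your installation differs.

```shell
# Assumes ollama's default API port (11434); adjust if you changed it.
if curl -fsS http://localhost:11434/api/tags >/dev/null 2>&1; then
  echo "ollama server is reachable"
else
  echo "ollama server is not reachable on localhost:11434"
fi
```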
Run:

```shell
cmdh configure
```

then select the ollama option and set `codellama` as the model.

### text-generation-webui

Clone the repository:

```shell
git clone https://github.com/oobabooga/text-generation-webui
```
Start the server with the API enabled:

```shell
./start_linux.sh --api --listen
```
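Once the server is running, you can check that its API is reachable. This sketch assumes text-generation-webui's default API port of 5000 and its OpenAI-compatible `/v1/models` endpoint; both are assumptions, so adjust for your setup.

```shell
# Assumes the default API port (5000) and the OpenAI-compatible
# /v1/models endpoint; adjust if your setup differs.
if curl -fsS http://localhost:5000/v1/models >/dev/null 2>&1; then
  echo "text-generation-webui API is reachable"
else
  echo "text-generation-webui API is not reachable on localhost:5000"
fi
```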
In the web UI, enter `Trelis/Llama-2-7b-chat-hf-function-calling-v2` in the model download field and press "Download". Select the `llama-2-7b-function-calling.Q3_K_M.gguf` file, then click "Load" to load the model.

Finally, run:

```shell
cmdh configure
```

and choose the 'text-generation-webui' option. cmdh will automatically send prompts to whichever model is loaded by text-generation-webui.
HuggingFace model URL: https://huggingface.co/Trelis/Llama-2-7b-chat-hf-function-calling-v2
## Contributing

The issue tracker mostly contains feature requests I have filed so I don't forget them. If you have any bug reports or good ideas, please add them to the tracker.

If you run into any issues installing or running cmdh, please open a ticket in the project tracker. Include a detailed bug report with stack traces, inputs, and the mode of operation (OpenAI or ollama).