TheR1D / shell_gpt

A command-line productivity tool powered by AI large language models (LLMs) like GPT-4 that helps you accomplish your tasks faster and more efficiently.
MIT License

Ollama integration and other backends #463

Closed TheR1D closed 4 months ago

TheR1D commented 5 months ago

This PR integrates multiple locally hosted LLMs using LiteLLM.
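LiteLLM selects a backend from the prefix of the model string (e.g. `ollama/mistral:7b-instruct`, `azure/<deployment>`). A minimal sketch of that routing idea, illustrative only and not LiteLLM's actual implementation:

```python
def parse_model(model: str) -> tuple[str, str]:
    """Split a LiteLLM-style model string into (provider, model name).

    Strings without a provider prefix fall back to "openai",
    mirroring how "gpt-4" needs no prefix.
    """
    if "/" in model:
        provider, name = model.split("/", 1)
        return provider, name
    return "openai", model

print(parse_model("ollama/mistral:7b-instruct"))  # ('ollama', 'mistral:7b-instruct')
```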

Test It

To test ShellGPT with ollama, follow these steps:

# Clone repository.
git clone https://github.com/TheR1D/shell_gpt.git
cd shell_gpt
# Change branch
git checkout ollama
# Create virtual environment
python -m venv venv
# Activate venv.
source venv/bin/activate
# Install dependencies
pip install -e .

Ollama

[!NOTE] ShellGPT is not optimized for local models and may not work as expected.

Installation

MacOS

Download and launch Ollama app.

Linux & WSL2

curl https://ollama.ai/install.sh | sh

Setup

We can have multiple large language models installed in Ollama, such as Llama 2, Mistral, and others. For the best results, mistral:7b-instruct is recommended. To install the model, run the following command:

ollama pull mistral:7b-instruct

Downloading and installing the model will take some time. Once the model is installed, start the API server:

ollama serve

ShellGPT configuration

Now that the Ollama backend is running, we need to configure ShellGPT to use it. Check that the backend is running and accessible:

sgpt --model ollama/mistral:7b-instruct  "Who are you?"
# -> I'm ShellGPT, your OS and shell assistant...

If you are running ShellGPT for the first time, you will be prompted for an OpenAI API key. Just press Enter to skip this step.

Now we need to change a few settings in ~/.config/shell_gpt/.sgptrc. Open the file in your editor and change DEFAULT_MODEL to ollama/mistral:7b-instruct. Also make sure that OPENAI_USE_FUNCTIONS is set to false. And that's it! You can now use ShellGPT with the Ollama backend.
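After the edit, the relevant lines in ~/.config/shell_gpt/.sgptrc should look roughly like this (other keys omitted; the exact set of keys in your file may differ):

```
DEFAULT_MODEL=ollama/mistral:7b-instruct
OPENAI_USE_FUNCTIONS=false
```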

sgpt "Hello Ollama"

Azure

git clone https://github.com/TheR1D/shell_gpt.git
cd shell_gpt
git checkout ollama
python -m venv venv
source venv/bin/activate
pip install -e .

export AZURE_API_KEY=YOUR_KEY
export AZURE_API_BASE=YOUR_API_BASE
export AZURE_API_VERSION=YOUR_API_VERSION
export AZURE_AD_TOKEN=YOUR_TOKEN # Optional
export AZURE_API_TYPE=YOUR_API_TYPE # Optional

sgpt --model azure/<your_deployment_name> --no-functions "Hi Azure"
# or
python -m sgpt --model azure/<your_deployment_name> --no-functions "Hi Azure"
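The first three variables above are required before the azure/ backend can authenticate. A hypothetical sanity-check sketch (the helper name and behavior are illustrative, not part of ShellGPT or LiteLLM):

```python
import os

def azure_config(env=os.environ):
    """Collect the required Azure settings from the environment,
    failing early with a clear message if any are missing."""
    required = ["AZURE_API_KEY", "AZURE_API_BASE", "AZURE_API_VERSION"]
    missing = [k for k in required if k not in env]
    if missing:
        raise RuntimeError(f"missing Azure settings: {missing}")
    return {k.lower(): env[k] for k in required}

# Example with a fake environment:
cfg = azure_config({
    "AZURE_API_KEY": "sk-test",
    "AZURE_API_BASE": "https://example.openai.azure.com",
    "AZURE_API_VERSION": "2023-07-01-preview",
})
```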
tijszwinkels commented 5 months ago

Hi! - Let me begin by thanking you for this awesome project!

I'll test this more soon (also on WSL), but it seems to work on OS X!

% sgpt --no-functions --model ollama/mixtral:latest  "Hi, Who are you?"
Hello! I'm ShellGPT, your programming and system administration assistant. I'm here to help you with any questions or tasks related to the Darwin/MacOS 14.2 operating system and the zsh shell. I aim to provide short and concise responses in about 100 words, using Markdown formatting when appropriate. If needed, I can store data from our conversation for future reference. How can I assist you today?

For now, two little remarks:

Running it straight from source also works:

python app.py --model ollama/mistral:7b-instruct  "Who are you?"

Or just install it outside of the venv:

# Clone repository.
git clone https://github.com/TheR1D/shell_gpt.git
cd shell_gpt
# Change branch
git checkout ollama
# install
pip install --upgrade .
mbeds commented 5 months ago

It's asking for an API key even when I press Enter while I'm trying to set up Ollama.

florzanetta commented 5 months ago

I tested this PR with the MistralAI API, and it fixes the issue I was having before where it complained about functions. It seems that Mistral doesn't support them. Thanks for your work! This is a great project.