TheR1D / shell_gpt

A command-line productivity tool powered by AI large language models like GPT-4 that helps you accomplish your tasks faster and more efficiently.
MIT License

Error Messages with local Ollama Models #611

Open zuli12-dev opened 3 months ago

zuli12-dev commented 3 months ago

Hi,

first of all, MANY thanks for building this great app for us! I was able to get a local Ollama installation working with sgpt; commands and everything are working fine. However, there are some issues I cannot find among the existing issues here on GitHub and cannot solve/get rid of:

When using a model like DEFAULT_MODEL=ollama/llama3.1:latest or gemma2:2b, I get the following errors:

❯ sgpt -s "create a folder named test, place a file in that new folder calle hello.log"
mkdir test && touch test/hello.log
[2024-08-13T11:12:56Z ERROR cached_path::cache] ETAG fetch for https://huggingface.co/gemma2:2b/resolve/main/tokenizer.json failed with fatal error
13:12:56 - LiteLLM:WARNING: litellm_logging.py:1302 - Model=gemma2:2b not found in completion cost map. Setting 'response_cost' to None

I also tried with ollama/llama3.1 and got the same error messages. Just to be clear, everything seems to work fine; this is just an annoying message :) However, I cannot find any reference to huggingface or these messages in the src code, so I wanted to share this here.
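For reference, a minimal Ollama setup in ~/.config/shell_gpt/.sgptrc looks roughly like this (key names per the shell_gpt README's LiteLLM/Ollama instructions; the exact values are assumptions for a stock local install, so adjust to your setup):

DEFAULT_MODEL=ollama/llama3.1:latest
OPENAI_USE_FUNCTIONS=false
USE_LITELLM=true
API_BASE_URL=http://localhost:11434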

Some examples:

sgpt -c "python function to calculate pi"

def calculate_pi(iterations):
  """Calculates an approximation of Pi using the Leibniz formula.

  Args:
    iterations: The number of iterations to perform.

  Returns:
    An approximation of Pi.
  """
  pi = 0
  for i in range(iterations):
    pi += (-1) ** i / (2 * i + 1)  # alternating series: 1 - 1/3 + 1/5 - ...
  return pi * 4

[2024-08-13T11:22:24Z ERROR cached_path::cache] ETAG fetch for https://huggingface.co/gemma2:2b/resolve/main/tokenizer.json failed with fatal error
13:22:24 - LiteLLM:WARNING: litellm_logging.py:1302 - Model=gemma2:2b not found in completion cost map. Setting 'response_cost' to None
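A quick sanity check of the generated function, for anyone copying it (note the sign must alternate, or the Leibniz series diverges):

print(calculate_pi(1_000_000))  # ≈ 3.1415917, converging slowly toward pi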

nerder commented 3 months ago

Seems to be an issue related to litellm, now solved.

I also had a weird error message about SSL, and I fixed it using this SO answer: https://stackoverflow.com/a/76187415
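(If it is the common macOS error where urllib3 v2 refuses to load against LibreSSL, the usual fix from that answer is pinning urllib3 below 2.0 — hedging here, since the exact message isn't quoted:)

pip install "urllib3<2"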

zuli12-dev commented 3 months ago

Thanks @nerder! I already have litellm installed, at a newer version:

❯ pip list | grep litellm
litellm                       1.43.9
❯ pip list | grep shell_gpt
shell_gpt                     1.4.4

But I have now installed the exact same version you pinned in #616.
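The pin itself is just a plain pip install (assuming pip; adjust if you use another installer):

pip install litellm==1.43.19

which gives: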

Installing collected packages: litellm
  Attempting uninstall: litellm
    Found existing installation: litellm 1.43.9
    Uninstalling litellm-1.43.9:
      Successfully uninstalled litellm-1.43.9
Successfully installed litellm-1.43.19

❯ pip list | grep litellm
litellm                       1.43.19
❯ sgpt -c "solve fizz buzz problem in python"
for i in range(1, 101):
  if i % 3 == 0 and i % 5 == 0:
    print("FizzBuzz")
  elif i % 3 == 0:
    print("Fizz")
  elif i % 5 == 0:
    print("Buzz")
  else:
    print(i)

❯ sgpt -s "command to get system statistics"
zsh systemstat
[2024-08-21T10:22:05Z ERROR cached_path::cache] ETAG fetch for https://huggingface.co/gemma2:2b/resolve/main/tokenizer.json failed with fatal error

[E]xecute, [D]escribe, [A]bort:
12:22:05 - LiteLLM:WARNING: litellm_logging.py:1319 - Model=gemma2:2b not found in completion cost map. Setting 'response_cost' to None
A

I am not sure I want to replace the OpenSSL lib on my system though, as I am running this natively and not in any Docker container...

❯ pip list | grep urllib3
urllib3                       1.26.16
❯ pip show urllib3
Name: urllib3
Version: 1.26.16
Summary: HTTP library with thread-safe connection pooling, file post, and more.
Home-page: https://urllib3.readthedocs.io/
Author: Andrey Petrov
Author-email: andrey.petrov@shazow.net
License: MIT
Location: /opt/homebrew/anaconda3/lib/python3.11/site-packages
Requires:
Required-by: anaconda-client, botocore, distributed, requests, responses

❯ openssl version -a
OpenSSL 3.2.0 23 Nov 2023 (Library: OpenSSL 3.2.0 23 Nov 2023)
built on: Thu Nov 23 13:20:19 2023 UTC
platform: darwin64-arm64-cc
options: bn(64,64)
compiler: clang -fPIC -arch arm64 -O3 -Wall -DL_ENDIAN -DOPENSSL_PIC -D_REENTRANT -DOPENSSL_BUILDING_OPENSSL -DNDEBUG
OPENSSLDIR: "/opt/homebrew/etc/openssl@3"
ENGINESDIR: "/opt/homebrew/Cellar/openssl@3/3.2.0_1/lib/engines-3"
MODULESDIR: "/opt/homebrew/Cellar/openssl@3/3.2.0_1/lib/ossl-modules"
Seeding source: os-specific
CPUINFO: OPENSSL_armcap=0x987d

❯ otool -L /opt/homebrew/bin/openssl
/opt/homebrew/bin/openssl:
    /opt/homebrew/Cellar/openssl@3/3.2.0_1/lib/libssl.3.dylib (compatibility version 3.0.0, current version 3.0.0)
    /opt/homebrew/Cellar/openssl@3/3.2.0_1/lib/libcrypto.3.dylib (compatibility version 3.0.0, current version 3.0.0)
    /usr/lib/libSystem.B.dylib (compatibility version 1.0.0, current version 1336.0.0)

zuli12-dev commented 3 months ago

Okay, so the issue with the log message:

LiteLLM:WARNING: litellm_logging.py:1302 - Model=gemma2:2b not found in completion cost map. Setting 'response_cost' to None

comes from litellm not having these exact model names in its cost map here: https://github.com/BerriAI/litellm/blob/main/model_prices_and_context_window.json
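If the warning bothers you, litellm exposes register_model for adding custom entries to that cost map; a sketch (untested here, and the token limit below is a placeholder assumption):

import litellm

# Register a zero-cost entry for the local model so litellm's cost
# lookup stops warning about a missing cost-map entry.
litellm.register_model({
    "ollama/gemma2:2b": {
        "max_tokens": 8192,            # placeholder context size
        "input_cost_per_token": 0.0,   # local model: no API cost
        "output_cost_per_token": 0.0,
        "litellm_provider": "ollama",
        "mode": "chat",
    }
})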

I now have set:

DEFAULT_MODEL=ollama/llama3.1

and I only get the fatal URL message, since the link does not exist:

[2024-08-21T11:06:26Z ERROR cached_path::cache] ETAG fetch for https://huggingface.co/llama3.1/resolve/main/tokenizer.json failed with fatal error

Not sure, but I guess this comes from the Rust crate cached-path: https://crates.io/crates/cached-path

But I am not sure where this might be used in this project at all...
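My best guess at the chain (an assumption, not verified against the litellm source): litellm's token counting can fall back to fetching a HuggingFace tokenizer by model name via the tokenizers package, whose Rust core uses the cached-path crate for downloads. Since "llama3.1" is not a valid HuggingFace repo id, the fetch fails. Something like this should reproduce the log line in isolation:

from tokenizers import Tokenizer

# "llama3.1" is not a real HuggingFace repo id, so the download
# (handled by the Rust cached-path crate under the hood) fails with
# the same "ETAG fetch ... failed with fatal error" seen above.
tok = Tokenizer.from_pretrained("llama3.1")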