robusta-dev / holmesgpt

On-Call/DevOps Assistant - Get a head start on fixing alerts with AI investigation
MIT License

Bad environment variable in README #100

Open AIUser2324 opened 1 month ago

AIUser2324 commented 1 month ago

The instructions for using a self-hosted LLM in the README file say that you need to set the OPENAI_API_BASE variable. This should be OPENAI_BASE_URL to work properly.
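For example, with a local Ollama endpoint (the URL below is just my local Ollama OpenAI-compatible endpoint, standing in for the README's placeholder):

```bash
# What the README says to set:
export OPENAI_API_BASE=http://localhost:11434/v1   # not picked up in my setup

# What actually needs to be set for it to work:
export OPENAI_BASE_URL=http://localhost:11434/v1
```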

aantn commented 1 month ago

Hey, which holmes version are you running and with which LLM model?

The latest holmes version uses LiteLLM under the hood, which reads OPENAI_API_BASE according to the docs.

AIUser2324 commented 1 month ago

I used the latest brew installation. holmes version gives me: HEAD -> master-fd086e5. I used the Llama3.1 model from Ollama.

aantn commented 1 month ago

Thanks, you're definitely on the latest version using LiteLLM.

What was the exact --model flag that you passed to holmes?

AIUser2324 commented 1 month ago

When I set the environment variable with export OPENAI_API_BASE=<url-here>

and then call holmes ask --model=llama3.1:8b-instruct-q8_0 "what pods are unhealthy and why?"

I get the following error: NotFoundError: Error code: 404 - {'error': {'message': 'The model `llama3.1:8b-instruct-q8_0` does not exist or you do not have access to it.', 'type': 'invalid_request_error', 'param': None, 'code': 'model_not_found'}}

When I go with --model=openai/llama3.1:8b-instruct-q8_0 I get: BadRequestError: Error code: 400 - {'error': {'message': 'invalid model ID', 'type': 'invalid_request_error', 'param': None, 'code': None}}
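Putting both attempts together (with http://localhost:11434/v1 standing in for the <url-here> placeholder above; substitute your own endpoint):

```bash
# OPENAI_API_BASE set as the README suggests
export OPENAI_API_BASE=http://localhost:11434/v1

# Attempt 1: bare model name -> 404 model_not_found
holmes ask --model=llama3.1:8b-instruct-q8_0 "what pods are unhealthy and why?"

# Attempt 2: openai/ provider prefix -> 400 invalid model ID
holmes ask --model=openai/llama3.1:8b-instruct-q8_0 "what pods are unhealthy and why?"
```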

aantn commented 1 month ago

Got it, thanks. And to clarify, this works if you go with OPENAI_BASE_URL?

AIUser2324 commented 1 month ago

Yes, correct. If I use OPENAI_BASE_URL it works.
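So the only change on my side is the variable name, roughly:

```bash
# Identical setup, only the environment variable name differs
# (same --model flag as in my earlier commands).
export OPENAI_BASE_URL=<url-here>   # instead of OPENAI_API_BASE
holmes ask --model=... "what pods are unhealthy and why?"
```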

aantn commented 2 days ago

@AIUser2324, do either of the updated instructions for Ollama here work for you? https://github.com/robusta-dev/holmesgpt/pull/133/files#diff-b335630551682c19a781afebcf4d07bf978fb1f8ac04c6bf87428ed5106870f5

On my side, Holmes is able to connect in both cases, but I'm not getting good results. Perhaps that is because I'm not using the instruct model?

In any event, are you able to get decent results with either:

  1. The steps you mentioned in your original post
  2. The new instructions (in the PR linked above)