h2oai / h2ogpt

Private chat with local GPT with document, images, video, etc. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
http://h2o.ai
Apache License 2.0

AutoGPT issue running on Local LLM #1669

Closed rohitnanda1443 closed 3 months ago

rohitnanda1443 commented 3 months ago

Hi, I am trying to use the AutoGPT agent. The configuration is as follows:

  1. h2oGPT running locally.
  2. The LLM is served locally via a vLLM inference server.
  3. The correct parameters are being passed to h2oGPT:

python generate.py --guest_name='' --base_model=mistralai/Mistral-7B-Instruct-v0.2 --max_seq_len=8094 --enable_tts=False --enable_stt=False --enable_transcriptions=False --use_gpu_id=False --inference_server="vllm:0.0.0.0:5002" &
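For context, the `--inference_server="vllm:0.0.0.0:5002"` flag assumes a vLLM server is already listening on port 5002. A minimal launch sketch, assuming vLLM's OpenAI-compatible entrypoint (adjust the module path to your installed vLLM version):

```shell
# Serve Mistral-7B-Instruct via vLLM's OpenAI-compatible API on port 5002,
# matching the host:port in h2oGPT's --inference_server flag above.
python -m vllm.entrypoints.openai.api_server \
    --model mistralai/Mistral-7B-Instruct-v0.2 \
    --port 5002
```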

Issue: AutoGPT is unable to complete tasks because it goes into an endless loop. The reason is that the response it gets from the local LLM is not in the JSON format it expects, so it raises an error and restarts the process.

How does one resolve this issue (i.e., get the response in the correct JSON format)?
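One common workaround for this failure mode is to extract the JSON object defensively from the model's reply before handing it to the agent, since local models often wrap JSON in prose or markdown fences. This is a generic sketch, not part of h2oGPT or AutoGPT; `extract_json` is a hypothetical helper:

```python
import json
import re


def extract_json(reply: str):
    """Pull the first JSON object out of a model reply that may be
    wrapped in prose or markdown code fences. Returns None on failure."""
    # Prefer an explicit ```json ... ``` (or bare ```) fenced block.
    fenced = re.search(r"```(?:json)?\s*(\{.*\})\s*```", reply, re.DOTALL)
    candidate = fenced.group(1) if fenced else None
    if candidate is None:
        # Fall back to the outermost brace pair in the raw text.
        start, end = reply.find("{"), reply.rfind("}")
        if start == -1 or end <= start:
            return None
        candidate = reply[start:end + 1]
    try:
        return json.loads(candidate)
    except json.JSONDecodeError:
        return None


# Example: a reply that mixes prose with a fenced JSON block.
reply = 'Sure! Here is the plan:\n```json\n{"command": {"name": "browse", "args": {}}}\n```'
print(extract_json(reply))
```

A retry loop that re-prompts the model with "respond with valid JSON only" when `extract_json` returns None would address the restart behavior described above.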

pseudotensor commented 3 months ago

I agree, it used to work. Langchain must have changed some things to break it.

For me it just fails with handling the first step.

rohitnanda1443 commented 3 months ago

Yes, that is correct. It fails in the first step for me as well, because the response is not received in the expected JSON (OpenAI) format.

pseudotensor commented 3 months ago

It probably works now when using vLLM, but otherwise only with OpenAI.

d24272f9121eb4cfb5b0c89e3e1cdade49667796