Closed rohitnanda1443 closed 3 months ago
I agree, it used to work. LangChain must have changed something that broke it.
For me it just fails when handling the first step.
Yes, that is correct. The reason it fails at the first step for me as well is that the response is not received in the expected JSON format (the OpenAI format).
It probably works now if using vLLM; otherwise it only works with OpenAI.
d24272f9121eb4cfb5b0c89e3e1cdade49667796
Hi, trying to use the AutoGPT agent. The config is as under:
python generate.py --guest_name='' --base_model=mistralai/Mistral-7B-Instruct-v0.2 --max_seq_len=8094 --enable_tts=False --enable_stt=False --enable_transcriptions=False --use_gpu_id=False --inference_server="vllm:0.0.0.0:5002" &
Issue: AutoGPT is unable to complete tasks because it goes into an endless loop. The reason is that the response it gets from the local LLM is not in the JSON format it expects, so it raises an error and restarts the process.
How does one resolve this issue (i.e., get the response in the correct JSON format)?
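One workaround while the agent's parser expects strict OpenAI-style JSON: local models like Mistral often wrap their JSON in prose or markdown fences, so a tolerant extraction step before parsing can break the loop. This is a minimal sketch (the function name `extract_json` is my own, not part of AutoGPT or h2oGPT), using only the standard library:

```python
import json


def extract_json(text: str):
    """Try to recover the first JSON object from a raw LLM reply.

    Local models often surround their JSON with explanation text or
    ```json fences, which makes a strict json.loads() on the whole
    reply fail. Scanning for a balanced {...} block recovers many
    such replies.
    """
    # Fast path: the whole reply is already valid JSON.
    try:
        return json.loads(text)
    except json.JSONDecodeError:
        pass

    # Fallback: find the first balanced top-level {...} block that parses.
    start = text.find("{")
    while start != -1:
        depth = 0
        for i in range(start, len(text)):
            if text[i] == "{":
                depth += 1
            elif text[i] == "}":
                depth -= 1
                if depth == 0:
                    try:
                        return json.loads(text[start:i + 1])
                    except json.JSONDecodeError:
                        break  # this candidate is malformed; try the next '{'
        start = text.find("{", start + 1)
    return None  # caller can re-prompt the model instead of looping forever
```

If `extract_json` returns `None`, re-prompting the model with an explicit "respond with JSON only, no extra text" instruction (or adding a format example to the system prompt) tends to work better than restarting the whole task.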