Open ksingh7 opened 1 year ago
@mudler helping hand
Hey @ksingh7 :wave:
what do you see in the console? I'd also suggest setting threads via the .env
file instead of replacing the command in the docker-compose file.
Same problem when running locally. Chatbot UI fails to display models
LocalAI:
[127.0.0.1]:43394 200 - GET /v1/models
[127.0.0.1]:43400 200 - GET /v1/models
[127.0.0.1]:53952 200 - GET /v1/models
Chatbot UI: a spinner where the model selection dropdown should be
Same issue with the endless spinner, can't find models
The spinner will go away if one of the models is named gpt-3.5-turbo. However, it's not possible to load more than one model.
I mentioned this at https://github.com/mckaywrigley/chatbot-ui/issues/770
I'm also unsuccessful with Chatbot-UI. I added all the .tmpl
files to make sure the UI detects gpt4all as gpt-3.5-turbo, and it shows up when creating a new chat. I can also see that the UI's call to https://{{ chat }}/api/models
is successful.
But trying to talk to the bot returns nothing.
The API works.
curl https://{{ api }}/v1/chat/completions -H "Content-Type: application/json" -d '{
"model": "gpt-3.5-turbo",
"messages": [{"role": "user", "content": "How are you?"}],
"temperature": 0.9
}'
{"object":"chat.completion","model":"gpt-3.5-turbo","choices":[{"message":{"role":"assistant","content":"I am doing well. How about you?"}}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
EDIT: I used the new docker-compose provided by @mudler yesterday (thanks! ♥️) and it now works! I think the issue was with the files provided in the manual installation or an error on my side when I copied them to my directory.
+1
These files need to be available to make the gpt-3.5-turbo model appear in the model list:
completion.tmpl
gpt-3.5-turbo.yaml
ggml-gpt4all-j
gpt4all.tmpl
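For reference, a minimal sketch of what such a gpt-3.5-turbo.yaml could contain, assuming LocalAI's model-config format (the template entries reference the .tmpl files without their extension; exact field names may differ between LocalAI versions, so check the docs for your release):

```yaml
name: gpt-3.5-turbo          # name the API will expose in /v1/models
parameters:
  model: ggml-gpt4all-j      # model file in the models folder
  temperature: 0.9
template:
  completion: completion     # -> completion.tmpl
  chat: gpt4all              # -> gpt4all.tmpl
```

With this in the models folder, requests for "gpt-3.5-turbo" get routed to the local gpt4all model.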
Btw, do you know how to switch to another model in the same models folder? I have rwkv working with curl.
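Since LocalAI follows the OpenAI API convention, switching models should just be a matter of changing the "model" field in the request body; it is assumed here that the name matches a model file or a YAML config name in the models folder:

```shell
# Sketch: "rwkv" is assumed to match a model file or config name
# in the models folder; only the "model" value changes per request.
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "rwkv",
  "messages": [{"role": "user", "content": "How are you?"}]
}'
```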
Hey guys, love this project and I'm willing to contribute to it. To learn more about it, I need some help getting the Chatbot UI to work.
Following the example, here is my docker-compose.yaml:
version: '3.6'
services:
  api:
    image: quay.io/go-skynet/local-ai:latest
    restart: always
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models:cached
    command: ["/usr/bin/local-ai --threads 8"]
  web-ui:
    image: ghcr.io/mckaywrigley/chatbot-ui:main
    restart: always
    ports:
      - 3000:3000
    environment:
      - 'OPENAI_API_KEY='
      - 'OPENAI_API_HOST=http://api:8080'
- The chatbot UI keeps on loading and throws the message "unable to find model"
- I am exposing Chatbot UI over the internet
Can you please guide me on what the values of
OPENAI_API_KEY and OPENAI_API_HOST
should be in this case? I am sure something is wrong in my config.
OPENAI_API_KEY and OPENAI_API_HOST don't need to change if you use a local model for inference. OPENAI_API_KEY can be anything for a local model.
That is because chatbot-ui doesn't define GGML_GPT4ALL_J in the enum OpenAIModelID.
I added this code to chatbot-ui and it worked.
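A rough sketch of what that addition could look like, assuming chatbot-ui's model definitions live in a file like types/openai.ts (the file layout, the existing entries, and the maxLength/tokenLimit values here are assumptions and may differ between versions):

```typescript
// Hypothetical sketch of extending chatbot-ui's OpenAIModelID enum
// so the UI recognizes the locally served gpt4all model.
export enum OpenAIModelID {
  GPT_3_5 = 'gpt-3.5-turbo',
  GPT_4 = 'gpt-4',
  // Added: matches the id LocalAI reports from /v1/models
  GGML_GPT4ALL_J = 'ggml-gpt4all-j',
}

export interface OpenAIModel {
  id: string;
  name: string;
  maxLength: number; // max characters in a prompt (assumed field)
  tokenLimit: number;
}

export const OpenAIModels: Record<OpenAIModelID, OpenAIModel> = {
  [OpenAIModelID.GPT_3_5]: {
    id: OpenAIModelID.GPT_3_5,
    name: 'GPT-3.5',
    maxLength: 12000,
    tokenLimit: 4000,
  },
  [OpenAIModelID.GPT_4]: {
    id: OpenAIModelID.GPT_4,
    name: 'GPT-4',
    maxLength: 24000,
    tokenLimit: 8000,
  },
  // Added entry for the local model:
  [OpenAIModelID.GGML_GPT4ALL_J]: {
    id: OpenAIModelID.GGML_GPT4ALL_J,
    name: 'GPT4All-J',
    maxLength: 12000,
    tokenLimit: 4000,
  },
};
```

The key point is that the id string must match exactly what LocalAI returns from /v1/models, otherwise the UI filters the model out.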
What file is this under? I can't seem to find that line of code in the current repo.
Hi! I'm a bot running with LocalAI (a crazy experiment of @mudler) - please beware that I might hallucinate sometimes!
But... I can also be funny or helpful :smile_cat: and I can generally provide good tips or point to places to look in the documentation or in the code, based on what you wrote in the issue.
Don't engage in conversation with me, I don't support replying (yet)!
To get the chatbot UI to work, you need to update the OPENAI_API_KEY and OPENAI_API_HOST environment variables in the chatgpt service's environment section.
The OPENAI_API_KEY should be set to your OpenAI API key, which you can obtain from the OpenAI website. The OPENAI_API_HOST should be set to the IP address or FQDN of the externally managed LocalAI service, which you have already updated in the docker-compose file as marked <<LOCALAI_IP>>.
Here's an example of what your chatgpt service's environment section should look like:
chatgpt:
environment:
- 'OPENAI_API_KEY=sk-XXXXXXXXXXXXXXXXXXXX'
- 'OPENAI_API_HOST=http://<<LOCALAI_IP>>:8080'
Make sure to replace the <<LOCALAI_IP>> placeholder with the IP address or FQDN of your externally managed LocalAI service.
Sources:
EDIT: I used the new docker-compose provided by @mudler yesterday (thanks! ♥️) and it now works! I think the issue was with the files provided in the manual installation or an error on my side when I copied them to my directory.
Can you paste the "docker-compose provided by @mudler"?
I added a link in my original message for future reference.
Chatbot UI doesn't seem to be using custom models. I have:
{"object":"list","data":[{"id":"thebloke__wizardlm-13b-v1-0-uncensored-superhot-8k-ggml__wizardlm-13b-v1.0-superhot-8k.ggmlv3.q4_k_m.bin","object":"model"}]}
which I can query via the terminal; however, chatbot-ui does not display available models and tries to use gpt-3.5-turbo.
My docker-compose.yaml:
version: '3.6'
services:
api:
deploy:
resources:
reservations:
devices:
- driver: nvidia
count: 1
capabilities: [gpu]
image: quay.io/go-skynet/local-ai:master-cublas-cuda12
tty: true # enable colorized logs
restart: always # should this be on-failure ?
ports:
- 8080:8080
env_file:
- .env
volumes:
- ./models:/models
command: ["/usr/bin/local-ai" ]
chatgpt:
depends_on:
api:
condition: service_healthy
image: ghcr.io/mckaywrigley/chatbot-ui:main
ports:
- 3000:3000
environment:
- 'OPENAI_API_KEY=sk-XXXXXXXXXXXXXXXXXXXX'
- 'OPENAI_API_HOST=http://api:8080'
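One thing worth noting about this compose file: `depends_on` with `condition: service_healthy` only takes effect if the api service actually defines a healthcheck; otherwise the chatgpt container may never start. A sketch of one, assuming LocalAI exposes a readiness endpoint (the /readyz path is an assumption, so verify it against your LocalAI version's docs):

```yaml
  api:
    # ...existing settings...
    healthcheck:
      # /readyz is assumed; substitute whatever health endpoint
      # your LocalAI version exposes.
      test: ["CMD", "curl", "-f", "http://localhost:8080/readyz"]
      interval: 30s
      timeout: 20s
      retries: 5
```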