Dhruvgera / LocalAI-frontend

A front-end for self-hosted LLMs based on the LocalAI API

webui select model list is empty #5

Open · Dev-Wiki opened 1 year ago

Dev-Wiki commented 1 year ago

I installed LocalAI and the web UI with Docker Compose:

```yaml
version: '3.6'

services:
  api:
    image: quay.io/go-skynet/local-ai:latest
    ports:
      - 8080:8080
    env_file:
      - .env
    volumes:
      - ./models:/models:cached
    command: ["/usr/bin/local-ai"]

  frontend:
    image: quay.io/go-skynet/localai-frontend:master
    ports:
      - 3000:3000
```

Build result:

```
$ docker-compose up -d --pull always
[+] Running 2/2
 ✔ frontend Pulled    3.2s
 ✔ api Pulled         3.2s
[+] Building 0.0s (0/0)
[+] Running 2/0
 ✔ Container localai-frontend-1  Running   0.0s
 ✔ Container localai-api-1       Running   0.0s

$ curl http://localhost:8080/v1/models
{"object":"list","data":[{"id":"ggml-gpt4all-j","object":"model"}]}
```

But the web UI's model select is still empty (screenshot: empty model dropdown).

weselben commented 1 year ago

I think I found the issue (or maybe it's a new one :smile:). The browser console reports (screenshot):

`Cross-Origin Resource Sharing error: MissingAllowOriginHeader`
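For context, `MissingAllowOriginHeader` means the API's response lacks the `Access-Control-Allow-Origin` header, so the browser discards the model list before the frontend's JavaScript ever sees it, even though the request itself succeeds (which is why the `curl` above works). Some LocalAI builds expose CORS settings through environment variables; a minimal sketch of the compose change, assuming your LocalAI version honors `CORS` and `CORS_ALLOW_ORIGINS` (check the `.env.example` shipped with it):

```yaml
  api:
    image: quay.io/go-skynet/local-ai:latest
    environment:
      - CORS=true              # assumed variable: enable CORS handling in the API
      - CORS_ALLOW_ORIGINS=*   # assumed variable: dev-only; restrict in production
```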

blablazzz commented 1 year ago

Same for me. The model list is empty even though I have models.

mate1213 commented 10 months ago

Hi! I've set the parameters in the .env file and added:

`REACT_APP_API_HOST=172.16.1.94:8081`

which points to the backend. Sadly, I still have the same issue, and I don't understand why it doesn't work.
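Two details are worth checking here. Create React App inlines `REACT_APP_*` variables at build time, so setting one on a prebuilt frontend image has no effect; and a host value without a scheme is treated by `fetch` as a relative path. A short illustration (the address is the one quoted above):

```js
// Create React App replaces process.env.REACT_APP_API_HOST with a string
// literal when the bundle is built; editing the variable on a prebuilt
// container never reaches this code.
const host = process.env.REACT_APP_API_HOST;

// A scheme-less value is resolved relative to the page's own origin, so
// fetch("172.16.1.94:8081/v1/models") asks the *frontend* server for that
// path instead of the backend. Include the scheme:
const apiBase = "http://172.16.1.94:8081";
fetch(`${apiBase}/v1/models`)
  .then((r) => r.json())
  .then((models) => console.log(models));
```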

partisansb commented 9 months ago

Same problem here... Inspecting the container files in Docker Desktop, I can see that the model files are loaded into the models folder in the api container. I inspected the .js file in the frontend container, but I couldn't make sense of it. I did find a reference to the path /v1/models and changed it to just /models, but it didn't change anything. I haven't seen any responses to any issues in this repo, so I'm not holding out for anything soon... :(

partisansb commented 9 months ago

When I run `npm start`, the browser opens and the interface pops up... still no models. I edited the line `const host = process.env.REACT_APP_API_HOST;` to read `const host = "http://127.0.0.1:8080";` in ChatGptInterface.js.

In the console I have this message:

```
Compiled successfully!

You can now view chat-gpt-interface in the browser.

  Local:            http://localhost:3000
  On Your Network:  http://192.168.0.13:3000

Note that the development build is not optimized.
To create a production build, use npm run build.

webpack compiled successfully
```

However, `./local-ai` is running on http://127.0.0.1:8080 (bound on host 0.0.0.0 and port 8080), so I don't think they can see each other... How do I resolve this?

I had the same issue when running docker-compose... no models in the dropdown list, even though inspecting the api container files I could see the models in the models folder... Any ideas?
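Since `local-ai` is bound to `0.0.0.0:8080`, a browser on the same machine can reach `http://127.0.0.1:8080`, so the two processes most likely *can* see each other; the CORS error reported earlier is the more probable cause of the empty dropdown. A quick way to tell the two cases apart, run from the devtools console on the frontend page:

```js
// Run in the browser devtools console on http://localhost:3000.
// - Logs model ids       -> API reachable and CORS OK; look elsewhere.
// - TypeError / blocked  -> the CORS failure described above.
fetch("http://127.0.0.1:8080/v1/models")
  .then((r) => r.json())
  .then((d) => console.log(d.data.map((m) => m.id)))
  .catch((e) => console.error("Blocked or unreachable:", e));
```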

weselben commented 9 months ago

I believe the issue can be resolved by having the API include the Access-Control-Allow-Origin header in its response to the model request (the MissingAllowOriginHeader error means that header is absent). However, I'm unsure how to implement this in the project. Perhaps someone else could test and confirm its effectiveness.

Edit: What about this #7?
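Until a fix like #7 lands, a generic workaround is a small proxy in front of the API that adds the missing header. A sketch in Node (the file name `cors-proxy.js`, port 8081, and the API address are illustrative assumptions, not part of this repo):

```js
// cors-proxy.js: forward every request to the LocalAI API and attach the
// Access-Control-Allow-Origin header the browser is asking for.
const http = require("http");

const API_HOST = "127.0.0.1"; // assumed LocalAI address
const API_PORT = 8080;

http
  .createServer((req, res) => {
    const upstream = http.request(
      { host: API_HOST, port: API_PORT, path: req.url, method: req.method, headers: req.headers },
      (up) => {
        res.writeHead(up.statusCode, {
          ...up.headers,
          "access-control-allow-origin": "*", // dev-only; restrict in production
        });
        up.pipe(res);
      }
    );
    upstream.on("error", (e) => {
      res.writeHead(502);
      res.end(`Upstream error: ${e.message}`);
    });
    req.pipe(upstream);
  })
  .listen(8081, () => console.log("CORS proxy listening on :8081"));
```

Pointing `REACT_APP_API_HOST` at `http://localhost:8081` (at build time) would then give the browser the header it expects.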

Dhruvgera commented 9 months ago

> I believe the issue can be resolved by having the API include the Access-Control-Allow-Origin header in its response to the model request... Perhaps someone else could test and confirm its effectiveness.
>
> Edit: What about this #7?

Thanks for bringing that to my notice, I'll test and merge it if it works without issues!

LeGrandMonoss commented 6 months ago

> I believe the issue can be resolved by having the API include the Access-Control-Allow-Origin header... Edit: What about this #7?

> Thanks for bringing that to my notice, I'll test and merge it if it works without issues!

Did it work? I have the same problem as you, but their solution didn't work...