Dhruvgera / LocalAI-frontend

A front-end for self-hosted LLMs based on the LocalAI API

LocalAI with LocalAI-frontend? #7

Open scott-mackenzie opened 9 months ago

scott-mackenzie commented 9 months ago
[Screenshot: 2023-09-21 at 3:23:53 PM]

The objective would be to get your project working as an overlay onto LocalAI running separately. I commented out the LocalAI service in docker-compose.yaml:

```
❯ cat docker-compose.yaml
version: '3.6'

services:
  frontend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:

❯ netstat -an | grep LISTEN
tcp46      0      0  *.3000                 *.*                    LISTEN
```
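For reference, a frontend-only compose file in this spirit might look like the sketch below. The `3000:3000` port mapping and the commented-out LocalAI service are assumptions based on the netstat output above and LocalAI's usual port 8080; this is illustrative, not the exact file from this issue.

```yaml
# Illustrative sketch only - not the exact file from this issue.
version: '3.6'

services:
  frontend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3000:3000"   # assumed mapping, matching the LISTEN on *.3000 above

  # LocalAI itself runs as a separate project on port 8080,
  # so its service is left commented out here.
  # api:
  #   image: quay.io/go-skynet/local-ai:latest
  #   ports:
  #     - "8080:8080"
```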

The docker container is up and running as shown in the above image.

The LocalAI API is running separately as its own project and working independently. See below:

```
❯ curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
     "model": "llama-2-7b-chat",
     "prompt": "What is the expected population of Ghana by the year 2100",
     "temperature": 0.7
   }'

{"object":"text_completion","model":"llama-2-7b-chat","choices":[{"index":0,"finish_reason":"stop","text":"?\nlazarus May 3, 2022, 1:49pm #1\nThe population of Ghana is projected to continue growing in the coming decades. According to the United Nations Department of Economic and Social Affairs Population Division, Ghana’s population is expected to reach approximately 47 million by the year 2100. This represents a more than fivefold increase from the country’s estimated population of around 8.5 million in 2020.\nHowever, it is important to note that population projections are subject to uncertainty and can be influenced by various factors such as fertility rates, mortality rates, and migration patterns. Therefore, actual population growth may differ from projected values."}],"usage":{"prompt_tokens":0,"completion_tokens":0,"total_tokens":0}}
```

My question is: how do I get the "Select Model" and "Model Gallery" features to integrate with the LocalAI project when it runs separately rather than directly inside your project? Is this possible?

I love the project's concept of being able to change the model and having model galleries.

scott-mackenzie commented 9 months ago

Direct request to LocalAI for models returns the model list:

```
❯ curl http://localhost:8080/v1/models
{"object":"list","data":[{"id":"Dollyv2-3B","object":"model"},{"id":"GPT4All-J-13B-Snoozy","object":"model"},{"id":"MPT-7B-Chat","object":"model"},{"id":"RedPajama-INCITE-Chat-3B","object":"model"},{"id":"ggml-gpt4all-j","object":"model"},{"id":"llama-2-13b-chat","object":"model"},{"id":"llama-2-7b-chat","object":"model"}]}
```
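As a side note, if `jq` is available, the model ids can be pulled out of that same response for a quick sanity check; this is only a convenience, not something the frontend requires.

```sh
# Extract just the model ids from the /v1/models response (requires jq).
curl -s http://localhost:8080/v1/models | jq -r '.data[].id'
```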

scott-mackenzie commented 9 months ago

Also noticed CORS errors in the browser console:

```
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at
http://localhost:8080/v1/models. (Reason: CORS header 'Access-Control-Allow-Origin' missing).
Status code: 200.

Error: TypeError: NetworkError when attempting to fetch resource.
    ChatGptInterface.js:118:14
    e ChatGptInterface.js:118
    Babel 9
    N ChatGptInterface.js:112
    p ChatGptInterface.js:122
    React 3
    S scheduler.production.min.js:13
    L scheduler.production.min.js:14
    (Async: EventHandlerNonNull) 813 scheduler.production.min.js:14
    Webpack
```
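The 200 status with a blocked response is the classic symptom of a missing CORS header: the request reaches LocalAI, but the browser refuses to hand the response to the frontend. A quick way to confirm this from the shell, simulating the browser's cross-origin request, is something like the sketch below.

```sh
# Simulate the browser's cross-origin request and dump the response headers.
# If no Access-Control-Allow-Origin header appears, the browser will block the
# response even though the status code is 200.
curl -s -D - -o /dev/null \
  -H "Origin: http://localhost:3000" \
  http://localhost:8080/v1/models
```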

s-KaiNet commented 9 months ago

I guess #5 is related.

Mist-Hunter commented 8 months ago

Just chiming in to say thank you for this project.

I've been chasing the trail of a working front-end for LocalAI and thought this might rescue me (most web UIs seem dead or only work in a limited capacity). I was hoping this one, which looks very straightforward and is custom-written for LocalAI, might be a good option, but I'm running into the same thing as everyone else: no model list.

I'm not entirely clear, from looking at the Dockerfile and the docker-compose.yml, how this is supposed to get the model list. I know LocalAI has a model list at /v1/models, but I don't see any queries headed there in my LocalAI logs.

guilhermeprokisch commented 8 months ago

If you set the CORS options on the LocalAI server side, it will work.

Put this in the .env:

```
# CORS settings
CORS=true
CORS_ALLOW_ORIGINS=*
```
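After restarting LocalAI with these settings, the same cross-origin check from earlier should show the CORS header. Whether it echoes `*` or the requesting origin may depend on the LocalAI version, so treat the expected output below as an assumption.

```sh
# Re-run the cross-origin check; the response headers should now include
# something like: Access-Control-Allow-Origin: *
curl -s -D - -o /dev/null \
  -H "Origin: http://localhost:3000" \
  http://localhost:8080/v1/models | grep -i access-control-allow-origin
```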