digitalw00t opened this issue 2 months ago
Played around, got it to let me load the settings page, and I'm getting:
perplexica-backend-1 | error: Error loading Ollama models: TypeError: fetch failed
perplexica-backend-1 | error: Error loading Ollama embeddings: TypeError: fetch failed
When I use the pulldown selection for the models, they are empty. Ollama is already running on this host in a service and is working and has been tested. It can be accessed via localhost or remotely from the network by another machine.
I have the same issue with the same logs on a fresh install as well. Using Llama 3 in Ollama as a backend.
It doesn't run with any kind of config; the page never finishes loading.
It seems to get stuck at the loading-models stage. BTW, I'm using Ollama exposed on a separate server; I checked from the backend container and confirmed that it has access to Ollama and SearXNG.
Please review it.
Thanks in advance!
Can everyone paste their logs from the backend Docker container?
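If it helps, the logs can be tailed with Docker Compose (a sketch, assuming the service name perplexica-backend used in this repo's docker-compose.yaml):

# follow the backend container's logs
docker compose logs -f perplexica-backend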
Those fetch failed errors suggest that the backend is not able to access Ollama's API. Make sure you're using the correct IP/address in the config for Ollama's API URL.
Same issue when setting up OpenAI.
Localhost for the Docker container refers to the container's own network, not the host's network. That's why we make use of host.docker.internal, though it's only available in Docker Desktop; Linux users still face issues.
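On Linux, a common workaround (a sketch, not something the repo ships; it assumes Docker Engine 20.10+, where the special host-gateway value is available) is to map host.docker.internal to the host yourself in docker-compose.yaml:

perplexica-backend:
  extra_hosts:
    # resolves host.docker.internal to the host's gateway IP inside the container
    - "host.docker.internal:host-gateway"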
Even a fresh deployment to RepoCloud doesn't work.
Windows - docker-desktop
host ip address: 192.168.1.3
Fresh install and a fresh run.
Ollama running on host using ollama serve
Confirmed working using Open-webui running in docker - connects to ollama api with: http://192.168.1.3:11434
Frontend UI shows a spinner; frontend settings does not show Ollama.
config.toml:
[API_KEYS]
OPENAI = ""
GROQ = ""
[API_ENDPOINTS]
OLLAMA = "http://192.168.1.3:11434"
SEARXNG = "http://localhost:32768"
[GENERAL]
PORT = 3001
SIMILARITY_MEASURE = "cosine"
backend logs:
2024-05-05 04:45:15 (node:28) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
2024-05-05 04:45:15 (Use `node --trace-deprecation ...` to show where the warning was created)
2024-05-05 04:45:15 yarn run v1.22.19
2024-05-05 04:45:15 $ node dist/app.js
2024-05-05 04:45:15 info: WebSocket server started on port 3001
2024-05-05 04:45:15 info: Server is running on port 3001
Ollama is accessible from the container using the host IP (to install curl in the backend container: apk update; apk add curl):
/home/perplexica # curl http://192.168.1.3:11434/
Ollama is running
Also tried using host.docker.internal in config.toml and testing with curl; it does the same thing.
The only activity the Ollama log shows is the curl requests; no connection attempts from the backend otherwise.
Should the backend logs be showing anything from the frontend connection?
frontend console log:
69-9dd8c3df154f914b.js:1 No embedding models available
GET http://localhost:3000/library?_rsc=acgkz 404 (Not Found)
Seems like there are no models pulled by Ollama
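If that's the case, a quick way to verify on the Ollama host (ollama list, ollama pull, and the /api/tags endpoint are standard Ollama CLI/API; llama3 is just an example model name):

ollama list                            # models pulled locally
curl http://localhost:11434/api/tags   # same list via the API
ollama pull llama3                     # pull one if the list is empty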
I would recommend everyone update Perplexica to the latest version that I pushed today; it has a fix for a bug that might be causing issues here.
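For a Docker Compose install, the update would look roughly like this (a sketch; --build forces the images to be rebuilt from the newly pulled code):

git pull
docker compose up -d --build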
The latest version works! Thanks a lot
I confirm that the latest build works fine with Ollama running independently on a Mac M2 with Sonoma.
Did a git pull, then docker compose up. I added curl to the backend, and it hits the Ollama page saying it's running. I'm running Open WebUI for Ollama pointing to the same endpoint, and I get a model list. I'm still having the same issue where it won't pull up the content of the pages but shows the frame. Even settings shows the window frame, but nothing inside it.
For me it's running on my local Windows machine but not on a deployed Ubuntu VM.
I am also having the same issue, also on the most recent pull.
Same issue. I'm also trying to run it in Kubernetes on my local network; it seems the backend isn't receiving any requests from the client or the frontend itself, and there are no errors in the backend or frontend. Here's the traffic flow in Hubble after a fresh deployment and accessing the UI (the backend app is not present).
Errors from console:
Logging/console errors aren't that descriptive; I'll look more into my setup next weekend, but any information would be useful.
The API URL seems invalid: http://domain/undefined. It should be http://domain/api.
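An undefined there usually means NEXT_PUBLIC_API_URL wasn't set when the frontend image was built; it is baked in through build args in docker-compose.yaml (a sketch based on the compose args shown later in this thread, with domain as a placeholder):

perplexica-frontend:
  build:
    context: .
    dockerfile: app.dockerfile
    args:
      # set at build time, not at runtime
      - NEXT_PUBLIC_API_URL=http://domain/api
      - NEXT_PUBLIC_WS_URL=ws://domain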
If the backend says the following:
perplexica-backend-1 | error: Error loading Ollama models: TypeError: fetch failed
perplexica-backend-1 | error: Error loading Ollama embeddings: TypeError: fetch failed
It suggests that the backend is not able to connect to the API of Ollama. You can try changing the Ollama API URL to the following:
On Windows: http://host.docker.internal:11434
On Mac: http://host.docker.internal:11434
On Linux: http://private_ip_of_the_computer:11434
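Applied to the config.toml format shown earlier in this thread, that would look like this on Windows/Mac (on Linux, substitute the machine's private IP):

[API_ENDPOINTS]
OLLAMA = "http://host.docker.internal:11434"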
It's not just an Ollama issue; OpenAI too.
Please provide more details.
I have added Groq and OpenAI API keys in the config.toml (no Ollama). When I compose the Docker image on my computer (Windows), it works completely fine (however, I observed that it worked only when I selected Groq from the UI settings).
I created an Ubuntu VM (in GCP) and did all the standard procedures for setting up a Docker container. It was deployed, and SearXNG at port 4000 is working perfectly fine.
However, at port 3000 the UI and the settings bar show loading forever.
I tried deploying the project today on Ubuntu using OpenAI, but still got the same result. The page just keeps loading.
Unfortunately we do not provide support for the deployment of Perplexica on some sort of VM or server. Our support is limited to its installation. All I can do is tell you how you can fix this: you need to change the IP in the docker-compose file to the IP of the server where you've deployed it; additionally, make sure that the container is able to access Ollama (if you're using it).
If you are not able to see the model selector dropdown or the embedding model selector, then update to the latest version. It has a patch for it.
Maybe it's something I'm doing wrong in setup.
You are accessing it from your network. It is intended to be accessed from localhost; we do not provide support for setting it up on a network. We provide support for its installation locally.
For the people who are deploying in a Docker container and hosting it on a VM but getting endless loading:
1. In docker-compose.yaml:

perplexica-frontend:
  build:
    context: .
    dockerfile: app.dockerfile
    args:
      - NEXT_PUBLIC_API_URL=http://127.0.0.1:3001/api
      - NEXT_PUBLIC_WS_URL=ws://127.0.0.1:3001

change the 127.0.0.1 to your external IP address.
2. Similarly, in searxng-settings.yaml change bind_address: '127.0.0.1' to your external IP address.
3. If you get a CORS error: open ui/components/ChatWindow.tsx and add the CORS header like this:

if (!chatModel || !chatModelProvider) {
  const chatModelProviders = await fetch(
    `${process.env.NEXT_PUBLIC_API_URL}/models`,
    {
      method: 'GET',
      headers: {
        'Access-Control-Allow-Origin': '*',
      },
    },
  ).then(async (res) => (await res.json())['providers']);
Yeah, this was my issue: I was setting the APIs as part of the env definition for the frontend instead of defining them at build. A runtime configuration for the frontend would be great to resolve this sort of issue. One thing to note: if running encrypted, make sure to set the endpoints as https:// and wss://.
I've handled the CORS issues at the reverse proxy.
@alexanderccc hey, could you please share your config? Like the Nginx config and docker-compose.yaml? I also tried to configure CORS, but it doesn't work.
Here is my backend config:
location ^~ / {
    proxy_pass http://127.0.0.1:31338;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header REMOTE-HOST $remote_addr;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_http_version 1.1;
    add_header X-Cache $upstream_cache_status;
    add_header Strict-Transport-Security "max-age=31536000";

    # CORS
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}
and docker-compose.yaml:
services:
  searxng:
    image: docker.io/searxng/searxng:latest
    volumes:
      - ./searxng:/etc/searxng:rw
    ports:
      - 31336:8080
    networks:
      - perplexica-network

  perplexica-backend:
    build:
      context: .
      dockerfile: backend.dockerfile
      args:
        - SEARXNG_API_URL=http://searxng:8080
    depends_on:
      - searxng
    ports:
      - 31338:31338
    networks:
      - perplexica-network

  perplexica-frontend:
    build:
      context: .
      dockerfile: app.dockerfile
      args:
        - NEXT_PUBLIC_API_URL=https://apidomain.of.myself/api
        - NEXT_PUBLIC_WS_URL=wss://apidomain.of.myself
        - PORT=31337
    depends_on:
      - perplexica-backend
    ports:
      - 31337:31337
    networks:
      - perplexica-network

networks:
  perplexica-network:
Both the backend and frontend utilize a reverse proxy.
Error log:
Failed to load resource: the server responded with a status of 404 () https://apidomain.of.myself/library?_rsc=acgkz
WebSocket connection to 'wss://apidomain.of.myself/?chatModel=GPT-3.5+turbo&chatModelProvider=openai&embeddingModel=Text+embedding+3+small&embeddingModelProvider=openai' failed: There was a bad response from the server.
Do I also need to replace the address/port of searXNG with an external HTTPS domain? If so, could you please tell me how to configure it? Thank you!
Yes, in searxng-settings.yaml change bind_address: '127.0.0.1' to your external IP address.
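A minimal sketch of that change, assuming bind_address sits under the server: section of searxng-settings.yaml (0.0.0.0 binds all interfaces and also works in place of a specific external IP):

server:
  # listen on all interfaces instead of loopback only
  bind_address: '0.0.0.0'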
I didn't have to touch the SearXNG config. I don't think it needs to be exposed, as the backend uses it and not the frontend; so, as long as the endpoint for it is defined correctly in the backend config (config.toml), you should be good.
For CORS I'm using Kong as an ingress controller, which has a built-in plugin for handling it.
In your config for nginx you need to handle the CORS preflight check as part of an OPTIONS call; this post from Stack Overflow might help you with that setup:
https://stackoverflow.com/questions/45986631/how-to-enable-cors-in-nginx-proxy-server
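Roughly the pattern from that post, adapted to the location block shared above (the methods/headers listed are just the ones already in that config):

location ^~ / {
    # answer the CORS preflight directly instead of proxying it
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Type' 'text/plain; charset=utf-8';
        add_header 'Content-Length' 0;
        return 204;
    }
    # ... existing proxy_pass and header directives ...
}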
Also, be aware of which APIs you're exposing. I think you would need an exact match on / for the frontend, and the backend would need to be exposed for wss and the /api path, but I might be wrong; to keep it simple I exposed them as two different hosts (perplexica and perplexica-be), both on / with Prefix.
According to the solution you mentioned, I did not modify SearXNG. Instead, I moved the CORS-related configuration from the reverse proxy to the server block, but it still didn't work. When I click on the wss:// link, my backend returns a 502 Bad Gateway page. I would like to ask if there are any other configurations that need to be changed? Thank you for your reply.
Hi everyone, yesterday there were some issues related to networking; those are fixed now, so a few of the issues should be resolved. We unfortunately do not provide support for connecting Perplexica to a domain, only for installing it, so we cannot help you with this.
So I have no way to expose it on the public network and use it through a domain name, right?
I wouldn't say that there's no way; there is always a way to do something 😅. But there is a difference between the project supporting it or not, and I imagine @ItzCrazyKns wants to spend time actually developing the thing rather than providing support for running it remotely.
I for one have been able to run this on my home lab on Kubernetes connected to Ollama, and I'm using it from my PC/laptop or phone; honestly, it's been pretty awesome to run queries from my phone and test it out that way 😁
Maybe we can close this off and open a separate issue to document the various setups for running it remotely?
I apologize for my inappropriate expression (and possibly disturbing emails). I did not intend to waste everyone's time; it's just that after genuinely trying and still failing, I felt quite discouraged and thus became a bit anxious. I think it would indeed be better to open a new issue.
Yes, I've created a new discussion here: https://github.com/ItzCrazyKns/Perplexica/discussions/111, where everyone can ask questions about deploying remotely and the community can provide support and help them overcome their issues; I myself would love to help when I am free. @digitalw00t I hope your issue is resolved; if not, please continue here, otherwise I will mark this as closed in 24 hours.
Describe the bug: I did a fresh docker build, got to localhost:3000, and the page never finishes loading. I get the icons on the left side and a swirling circle in the middle that never finishes.
To Reproduce: Same as in the description.
Expected behavior: Page loads and I am able to do a search.
This was from a docker compose up, so I would have assumed it was going to be a quick install and startup.