ItzCrazyKns / Perplexica

Perplexica is an AI-powered search engine. It is an open-source alternative to Perplexity AI.
MIT License

Failed to connect to the server. Please try again later. #357

Open · papiche opened this issue 2 months ago

papiche commented 2 months ago

Describe the bug

After updating to the latest code:

docker compose down
git pull
docker compose up -d --build

I cannot use Perplexica.

Opening the client displays "Failed to connect to the server. Please try again later."

Additional context

docker logs perplexica-perplexica-backend-1
yarn run v1.22.22
$ npm run db:push && node dist/app.js

> perplexica-backend@1.9.0-rc2 db:push
> drizzle-kit push sqlite

drizzle-kit: v0.22.7
drizzle-orm: v0.31.2

No config path provided, using default path
Reading config file '/home/perplexica/drizzle.config.ts'
[⣷] Pulling schema from database...
[✓] Pulling schema from database...

[i] No changes detected
npm notice
npm notice New patch version of npm available! 10.8.2 -> 10.8.3
npm notice Changelog: https://github.com/npm/cli/releases/tag/v10.8.3
npm notice To update run: npm install -g npm@10.8.3
npm notice
info: WebSocket server started on port 3001
(node:65) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
(Use `node --trace-deprecation ...` to show where the warning was created)
info: Server is running on port 3001
error: Error loading Ollama models: TypeError: fetch failed
error: Error loading Ollama embeddings model: TypeError: fetch failed
error: Error loading Ollama models: TypeError: fetch failed
error: Error loading Ollama embeddings model: TypeError: fetch failed
error: undefined
[the two "fetch failed" errors and "error: undefined" repeat continuously]
docker-compose.yaml:
services:
  searxng:
    image: docker.io/searxng/searxng:latest
    volumes:
      - ./searxng:/etc/searxng:rw
    ports:
      - 4000:8080
    networks:
      - perplexica-network
    restart: unless-stopped

  perplexica-backend:
    build:
      context: .
      dockerfile: backend.dockerfile
      args:
        - SEARXNG_API_URL=http://searxng:8080
    depends_on:
      - searxng
    ports:
      - 3001:3001
    volumes:
      - backend-dbstore:/home/perplexica/data
      - ./config.toml:/home/perplexica/config.toml
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    networks:
      - perplexica-network
    restart: unless-stopped

  perplexica-frontend:
    build:
      context: .
      dockerfile: app.dockerfile
      args:
        - NEXT_PUBLIC_API_URL=http://127.0.0.1:3001/api
        - NEXT_PUBLIC_WS_URL=ws://127.0.0.1:3001
    depends_on:
      - perplexica-backend
    ports:
      - 3000:3000
    networks:
      - perplexica-network
    restart: unless-stopped

networks:
  perplexica-network:

volumes:
  backend-dbstore:
ItzCrazyKns commented 2 months ago

What Ollama address are you using? And what operating system are you on?

papiche commented 2 months ago

OK, my fault... config.toml had:

[API_ENDPOINTS]
OLLAMA = "http://host.docker.internal:11434"

As Ollama runs under systemd,

OLLAMA = "http://127.0.0.1:11434"

fixes it.

papiche commented 2 months ago

In fact, no. docker logs perplexica-perplexica-backend-1 still complains:

error: Error loading Ollama models: TypeError: fetch failed
error: Error loading Ollama embeddings model: TypeError: fetch failed
error: undefined
[repeats continuously]

ollama --version: ollama version is 0.3.9

On Linux Mint 22; uname -a: Linux sagittarius 6.8.0-41-generic 41-Ubuntu SMP PREEMPT_DYNAMIC Fri Aug 2 20:41:06 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

papiche commented 2 months ago

OK.

It happened when I updated Ollama: the update reset the systemd unit and removed the environment parameter Environment="OLLAMA_HOST=0.0.0.0".
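
Restoring it as a systemd drop-in override should survive future package updates (a sketch, assuming the standard Ollama Linux install with an ollama.service unit):

# sudo systemctl edit ollama.service, then add:
[Service]
Environment="OLLAMA_HOST=0.0.0.0"

# apply the change:
# sudo systemctl daemon-reload
# sudo systemctl restart ollama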

papiche commented 2 months ago

Thought it was that, but still no ;(

info: Server is running on port 3001
error: undefined

Fail.

Is there a way to get more logs from the server? Still investigating...

ItzCrazyKns commented 2 months ago

Just change the LLM and embedding providers in the settings to Ollama; that will fix your issue. If not, just reopen it.

papiche commented 2 months ago

These are my settings (screenshot).

What surprises me is that "open_ai" is the only choice offered as the Chat model provider.

ItzCrazyKns commented 2 months ago

I suppose you are running Perplexica in Docker; localhost inside a Docker container refers to the container's own network, not the host's, so you need to use the private IP of the computer plus the port as the Ollama API URL.
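
For example, in config.toml (a sketch; 192.168.1.33 stands in for your machine's actual LAN IP):

[API_ENDPOINTS]
OLLAMA = "http://192.168.1.33:11434"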

papiche commented 2 months ago

I am using Perplexica on a GPU-equipped computer on my LAN. I used

  perplexica-frontend:
    build:
      context: .
      dockerfile: app.dockerfile
      args:
        - NEXT_PUBLIC_API_URL=http://127.0.0.1:3001/api
        - NEXT_PUBLIC_WS_URL=ws://127.0.0.1:3001

as docker-compose parameters.

Then I use it from my host through SSH tunnels:

echo "Perplexica"
ssh -fN -L 5000:127.0.0.1:3000 frd@gpu.local
echo "Perplexica API"
ssh -fN -L 3001:127.0.0.1:3001 frd@gpu.local

Maybe that could lead to WebSocket communication errors?

Should I try publishing the LAN IP in the Docker build instead?

papiche commented 2 months ago

(screenshot: settings)

I succeeded in connecting to Ollama with these parameters (custom OpenAI parameters), but I get no answers in the frontend.

Still the same backend errors:

error: Error loading Ollama models: TypeError: fetch failed
error: Error loading Ollama embeddings model: TypeError: fetch failed
error: Error loading Ollama models: TypeError: fetch failed
error: Error loading Ollama embeddings model: TypeError: fetch failed
ksingh-scogo commented 2 months ago

@ItzCrazyKns With the latest version of Perplexica I am getting the same error with OpenAI models, so it has nothing to do with Ollama or Docker networking.

ItzCrazyKns commented 2 months ago

(quoting papiche's previous comment: connecting to Ollama through the custom OpenAI parameters, no answers in the frontend, same backend errors)

Because it doesn't actually connect. You need to use your private IP plus the ports. It doesn't matter which provider you use: localhost for the container refers to its own networking, not yours.

ItzCrazyKns commented 2 months ago

(quoting papiche) As I am forwarding ports 3000 & 3001 on my machine, either 127.0.0.1 or localhost should work, but it could break some "ws" connections, as the browser log indicates.

(screenshots: browser errors)

As Perplexica is running on IP 192.168.1.33 in my LAN, I should try changing docker compose like this:

  perplexica-frontend:
    build:
      context: .
      dockerfile: app.dockerfile
      args:
        - NEXT_PUBLIC_API_URL=http://192.168.1.33:3001/api
        - NEXT_PUBLIC_WS_URL=ws://192.168.1.33:3001

and access it at http://192.168.1.33:3000.

NB: as there is no login, SSH reverse tunnels were used to access Perplexica from a remote location over the Internet.

Yes, but only access it via the private IP. If you tunnel and try to access it from outside the network, it won't work at all. You also need to use the private IP + port for Ollama and serve Ollama over the network.

papiche commented 2 months ago

Using the LAN address (which is 192.168.1.27, ports 5000/3001) in docker-compose.yaml and accessing Ollama without an SSH tunnel:

IT WORKS

ItzCrazyKns commented 2 months ago

Make sure Ollama is accessible via your LAN.
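
A quick way to verify from another machine on the LAN (a sketch; /api/tags is the Ollama endpoint that lists installed models, and 192.168.1.27 stands in for the machine running Ollama):

curl http://192.168.1.27:11434/api/tags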

papiche commented 2 months ago

Is there any way to access Perplexica from the WAN?

ItzCrazyKns commented 2 months ago

There are many ways but we don't provide support for that.

papiche commented 2 months ago

OK. At the very least it needs a WebSocket relay for port 3001. Do you plan to add user access control?

ItzCrazyKns commented 2 months ago

There is already a branch called admin-password which adds a password for confidential things like the settings. Consider closing this issue if your problem has been resolved.
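
Trying that branch out might look like this (a sketch, assuming admin-password exists on the origin remote, reusing the build steps from above):

git fetch origin
git checkout admin-password
docker compose up -d --build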

papiche commented 2 months ago

I'll try that branch. Thanks, and so many thanks to you for the wonderful FOSS you made! You are helping AI become a "common good".

papiche commented 1 month ago

It also happens to me regularly: I just wait in front of the prompt and after a while "Failed to connect to server" appears (screenshot).

In the console, I can see a 404 when accessing https://perplexica.at.home/discover?_rsc=acgkz

In the Network tab, the last request is a "protocol switch" (101):

GET /?chatModel=llama3.1%3Alatest&chatModelProvider=ollama&embeddingModel=llama3.1%3Alatest&embeddingModelProvider=ollama HTTP/1.1
Host: perplexicapi.at.home
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:130.0) Gecko/20100101 Firefox/130.0
Accept: */*
Accept-Language: fr,fr-FR;q=0.8,en-US;q=0.5,en;q=0.3
Accept-Encoding: gzip, deflate, br, zstd
Sec-WebSocket-Version: 13
Origin: https://perplexica.at.home
Sec-WebSocket-Extensions: permessage-deflate
Sec-WebSocket-Key: l7/fyMHT/H8/0UOi4P2++w==
DNT: 1
Connection: keep-alive, Upgrade
Cookie: rl_session=RudderEncrypt%3AU2FsdGVkX18t7%2FREcOWO4PvwpX5dATkMGtZgxbaRUhywwx6hilB5AUcKpjschuZcjQjiDtyhx9FHiAjGLlrmnREuStYEwFd67XBQoV%2Fj%2BfHUKHFB4S4bvjD5waYBdyNliNv4CdD1KKOOGDnV4h0Wrg%3D%3D; rl_anonymous_id=RudderEncrypt%3AU2FsdGVkX1%2Bo7zdetmFDmGks%2B8QHRPrUL1K8pjsznAVrmNLtL6APBFOx7dtgDZU%2B4WUXlTHFg3tB5koWdWntLg%3D%3D; rl_page_init_referrer=RudderEncrypt%3AU2FsdGVkX19AlkgDwJi6wDXlFUzSPwh8NgQ8xgxPmCLyxM84t40a%2FuKqukjcPyK6; rl_page_init_referring_domain=RudderEncrypt%3AU2FsdGVkX1%2FFeJkEUcSmC48SDEszq%2FmELgtN5pT92rigRloFmG%2BEvRbXCSmx3Rz9; ph_phc_4URIAm1uYfJO7j8kWSe0J8lc8IqnstRLS7Jx8NcakHo_posthog=%7B%22distinct_id%22%3A%22f85477d4364bf27e078eae4790bed442db07a0421c6557dd2883173d195810a0%23af4ec013-fe45-4cad-8f20-d5ed0525a7ed%22%2C%22%24sesid%22%3A%5B1726536524590%2C%220191fd73-55df-7044-9555-1fd813bc64bf%22%2C1726534079967%5D%2C%22%24epp%22%3Atrue%7D
Sec-Fetch-Dest: empty
Sec-Fetch-Mode: websocket
Sec-Fetch-Site: same-site
Pragma: no-cache
Cache-Control: no-cache
Upgrade: websocket

In case it helps with investigating the problem.

ItzCrazyKns commented 1 month ago

(quoting papiche's previous comment in full)

Just refresh the page and see if that works. I generally don't provide support for LAN-hosted or network-hosted versions, because most of the time it's just your network.

papiche commented 1 month ago

You are right, it could be network quality, and that can happen on a LAN (Wi-Fi shared with roommates), so maybe having some retries instead of a direct timeout could help. From which part of the code is this error raised? I could have a look.
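
For illustration, a minimal client-side reconnect sketch (hypothetical code, not the actual Perplexica source; the URL and retry limits are placeholders):

// Retry the WebSocket connection with exponential backoff instead of
// surfacing "Failed to connect" on the first drop.
function connectWithRetry(url: string, maxRetries = 5): void {
  let attempt = 0;

  const connect = () => {
    const ws = new WebSocket(url);

    ws.onopen = () => {
      attempt = 0; // connected; reset the backoff counter
    };

    ws.onclose = () => {
      if (attempt < maxRetries) {
        const delay = Math.min(1000 * 2 ** attempt, 30_000); // 1s, 2s, 4s, ... capped at 30s
        attempt += 1;
        setTimeout(connect, delay);
      } else {
        console.error('Failed to connect to the server. Please try again later.');
      }
    };
  };

  connect();
}

// usage: connectWithRetry('ws://127.0.0.1:3001');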

adhulipa commented 1 month ago

FWIW, I ran into a similar error, and what fixed it for me was changing the Node base image used in backend.dockerfile. Essentially, changing it from node:slim to node resolved my issue.
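
In backend.dockerfile that change would look something like this (a sketch; the exact tag used in the repository may differ):

# before
FROM node:slim
# after (the full image, which ships more native dependencies)
FROM node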

coderyiyang commented 1 month ago

(quoting adhulipa's fix of changing the base image from node:slim to node)

Same error here. Could you be more specific, please?

Chris2000SP commented 1 week ago

I read the whole thread and tried to fix it.

I got it working after setting everything to the LAN device IP with the Ollama port, plus net.ipv4.ip_forward=1 in sysctl and docker-compose.yaml (three times over).

I had forgotten that a firewall was active, so I made a rule allowing Ollama to listen on the network. Now it's working.

If I got something wrong, let me know, but I am glad it is working now because I spent the whole day on it.
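
If the firewall is ufw (common on Mint/Ubuntu systems), such a rule might look like this (a sketch; adjust the subnet to match your LAN):

sudo ufw allow from 192.168.1.0/24 to any port 11434 proto tcp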

papiche commented 1 day ago

(quoting adhulipa's fix of changing the base image from node:slim to node)

Do you mean changing backend.dockerfile to the following?

FROM node:18

WORKDIR /home/perplexica

COPY src /home/perplexica/src
COPY tsconfig.json /home/perplexica/
COPY drizzle.config.ts /home/perplexica/
COPY package.json /home/perplexica/
COPY yarn.lock /home/perplexica/

RUN mkdir /home/perplexica/data

RUN yarn install --frozen-lockfile --network-timeout 600000
RUN yarn build

CMD ["yarn", "start"]

Could you confirm it solves "Failed to connect to the server"?


In the latest code version, when this issue fires, my browser console indicates: [DEBUG] closed

while the Network tab shows failed loads of:

https://perplexica.mydomain.tld/?_rsc=acgkz
https://perplexica.mydomain.tld/discover?_rsc=acgkz

It also happens during "video" playback, with the same [DEBUG] closed.