ItzCrazyKns / Perplexica

Perplexica is an AI-powered search engine. It is an open-source alternative to Perplexity AI
MIT License

fresh build, page loads but loading icon in the middle never finishes #62

Open digitalw00t opened 2 months ago

digitalw00t commented 2 months ago

Describe the bug: I did a fresh Docker build, got to localhost:3000, and the page never finishes loading. I get the icons on the left side and a swirling circle in the middle that never finishes.

To Reproduce: Same as in the description.

Expected behavior: The page loads and I am able to do a search.

This was from a docker compose up, so I would have assumed it was going to be a quick install and startup.

digitalw00t commented 2 months ago

Played around, got it to let me load the settings page, and I'm getting:

perplexica-backend-1 | error: Error loading Ollama models: TypeError: fetch failed
perplexica-backend-1 | error: Error loading Ollama embeddings: TypeError: fetch failed

When I use the pull-down selection for the models, it is empty. Ollama is already running on this host as a service, is working, and has been tested; it can be accessed via localhost or remotely from another machine on the network.

BarthPaleologue commented 2 months ago

I have the same issue with the same logs on a fresh install as well. Using Llama 3 in Ollama as a backend.

DaslavCl commented 2 months ago

It doesn't run with any kind of config; the page never finishes loading.

apulache commented 2 months ago

It seems to get stuck at the model-loading stage. BTW, I'm using Ollama exposed on a separate server; I checked from the backend container and confirmed that it has access to both Ollama and SearXNG.

Screenshot from 2024-05-04 20-09-16

Please review it.

Thanks in advance!

ItzCrazyKns commented 2 months ago

Can everyone paste their logs from the backend Docker container?

ItzCrazyKns commented 2 months ago

Played around, got it to let me load the settings page, and I'm getting:

perplexica-backend-1 | error: Error loading Ollama models: TypeError: fetch failed
perplexica-backend-1 | error: Error loading Ollama embeddings: TypeError: fetch failed

When I use the pulldown selection for the models, they are empty. Ollama is already running on this host in a service and is working and has been tested. It can be accessed via localhost or remotely from the network by another machine.

It suggests that the backend is not able to access Ollama's API. Make sure you're using the correct IP/address for Ollama's API URL in the config.

DaslavCl commented 2 months ago

Same issue setting OpenAI

ItzCrazyKns commented 2 months ago

Played around, got it to let me load the settings page, and I'm getting:

perplexica-backend-1 | error: Error loading Ollama models: TypeError: fetch failed
perplexica-backend-1 | error: Error loading Ollama embeddings: TypeError: fetch failed

When I use the pulldown selection for the models, they are empty. Ollama is already running on this host in a service and is working and has been tested. It can be accessed via localhost or remotely from the network by another machine.

Localhost inside the Docker container refers to the container's own network, not the host's network. That's why we make use of host.docker.internal, though it's only available in Docker Desktop; Linux users still face issues.
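
For Linux, one possible workaround (a sketch, not part of the official compose file) is to map host.docker.internal to the host gateway for the backend service in docker-compose.yaml; Docker 20.10+ supports the host-gateway value:

perplexica-backend:
  # hypothetical addition: makes host.docker.internal resolve to the host's gateway IP inside the container
  extra_hosts:
    - "host.docker.internal:host-gateway"

With that in place, the Ollama API URL can stay as http://host.docker.internal:11434 on Linux as well.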

cpedia commented 2 months ago

Even a fresh deployment to RepoCloud doesn't work.

rmensing commented 2 months ago

Windows - Docker Desktop, host IP address: 192.168.1.3
Fresh install and a fresh run.
Ollama running on the host using ollama serve.
Confirmed working using Open WebUI running in Docker - it connects to the Ollama API with: http://192.168.1.3:11434
Frontend UI has a spinner; frontend settings do not show Ollama.

config.toml:

[API_KEYS]
OPENAI = ""
GROQ = ""

[API_ENDPOINTS]
OLLAMA = "http://192.168.1.3:11434"
SEARXNG = "http://localhost:32768"

[GENERAL]
PORT = 3001
SIMILARITY_MEASURE = "cosine"

backend logs:

2024-05-05 04:45:15 (node:28) [DEP0040] DeprecationWarning: The `punycode` module is deprecated. Please use a userland alternative instead.
2024-05-05 04:45:15 (Use `node --trace-deprecation ...` to show where the warning was created)
2024-05-05 04:45:15 yarn run v1.22.19
2024-05-05 04:45:15 $ node dist/app.js
2024-05-05 04:45:15 info: WebSocket server started on port 3001
2024-05-05 04:45:15 info: Server is running on port 3001

Ollama is accessible from the container using the host IP (curl was installed in the backend container with: apk update; apk add curl):

/home/perplexica # curl http://192.168.1.3:11434/
Ollama is running

Also tried using host.docker.internal in config.toml and testing with curl; it does the same thing.

The only activity the Ollama log shows is the curl requests; otherwise there are no connection attempts from the backend.

Should the backend logs be showing anything from the frontend connection?

frontend console log:

69-9dd8c3df154f914b.js:1 No embedding models available
GET http://localhost:3000/library?_rsc=acgkz 404 (Not Found)

ItzCrazyKns commented 2 months ago

Seems like there are no models pulled by Ollama

ItzCrazyKns commented 2 months ago

I would recommend everyone update Perplexica to the latest version that I pushed today; it has a fix for a bug that might be causing issues here.
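
For reference, a typical update sequence for a git-based Docker install looks like this (paths assumed; adjust to your checkout):

cd Perplexica                   # your local clone of the repository
git pull                        # fetch the latest version
docker compose down             # stop the running containers
docker compose up -d --build    # rebuild the images and start them again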

BarthPaleologue commented 2 months ago

The latest version works! Thanks a lot

delfireinoso commented 2 months ago

I confirm that the latest build works fine with Ollama running independently on a Mac M2 with Sonoma.

digitalw00t commented 2 months ago

Did a git pull, then docker compose up. I added curl to the backend, and it hits the Ollama page saying it's running. I'm running Open WebUI for Ollama, pointing to the same endpoint, and I get a model list. I'm still having the same issue where it won't pull up the content of the pages, but shows the frame. Even settings shows the window frame, but nothing inside it.

NityaSG commented 2 months ago

For me it's running on my local Windows machine but not on the deployed Ubuntu VM.

gladly-hyphenated-21 commented 2 months ago

I am also having the same issue, also on the most recent pull.

alexanderccc commented 2 months ago

Same issue. I'm also trying to run it in Kubernetes on my local network; it seems the backend isn't receiving any requests from the client or the frontend itself, and there are no errors in the backend or frontend. Here's the traffic flow in Hubble after a fresh deployment and accessing the UI (the backend app is not present).

Screenshot from 2024-05-05 21-17-19

Errors from console

Screenshot from 2024-05-05 21-16-21

The logging/console errors aren't that descriptive. I will look more into my setup next weekend, but any information would be useful.

ItzCrazyKns commented 2 months ago

Same issue, I'm also trying to run it in Kubernetes on my local network, seems backend isn't receiving any requests from the client or the frontend itself, no errors in backend or frontend. Here's the traffic flow in Hubble after a fresh deployment and accessing the UI (backend app is not present)

Screenshot from 2024-05-05 21-17-19

Errors from console

Screenshot from 2024-05-05 21-16-21

Logging/console errors aren't that descriptive, will look more into my setup next weekend, but any information would be useful

The API URL seems invalid: http://domain/undefined. It should be http://domain/api.

ItzCrazyKns commented 2 months ago

If the backend says the following:

perplexica-backend-1 | error: Error loading Ollama models: TypeError: fetch failed
perplexica-backend-1 | error: Error loading Ollama embeddings: TypeError: fetch failed

It suggests that the backend is not able to connect to the API of Ollama. You can try changing the Ollama API URL to the following:

On Windows: http://host.docker.internal:11434
On Mac: http://host.docker.internal:11434
On Linux: http://private_ip_of_the_computer:11434
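
For example, the relevant lines in config.toml would look like this (a sketch; 192.168.1.3 stands in for your machine's actual private IP on Linux):

[API_ENDPOINTS]
# Windows / macOS (Docker Desktop):
OLLAMA = "http://host.docker.internal:11434"
# Linux: use the host's private IP instead, e.g. OLLAMA = "http://192.168.1.3:11434"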

DaslavCl commented 2 months ago

It's not just an Ollama issue... OpenAI too.

ItzCrazyKns commented 2 months ago

Please provide more description

NityaSG commented 2 months ago

Please provide more description

I have added Groq and OpenAI API keys in the config.toml (no Ollama). When I compose the Docker image on my computer (Windows), it works completely fine (however, I observed that it worked only when I selected Groq from the UI settings).

I created an Ubuntu VM (in GCP) and followed all the standard procedures for setting up a Docker container. It was deployed, and SearXNG at port 4000 is working perfectly fine.

However, at port 3000 the UI and the settings bar show loading forever. Screenshot 2024-05-06 101510

Screenshot 2024-05-06 101259

Yeqishen commented 2 months ago

I tried deploying the project on Ubuntu today; I used OpenAI, but still got the same result. The page just keeps loading.

ItzCrazyKns commented 2 months ago

Unfortunately, we do not provide support for deploying Perplexica on a VM or server; our support is limited to its installation. All I can do is tell you how you can fix this: you need to change the IP in the Docker Compose file to the IP of the server where you've deployed it, and additionally make sure that the container is able to access Ollama (if you're using it).
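
Concretely, the frontend build args in docker-compose.yaml need to point at the server's address instead of 127.0.0.1. A sketch, with SERVER_IP standing in for your VM's public or LAN IP (hypothetical placeholder):

perplexica-frontend:
  build:
    context: .
    dockerfile: app.dockerfile
    args:
      # replace SERVER_IP with the address clients use to reach the backend
      - NEXT_PUBLIC_API_URL=http://SERVER_IP:3001/api
      - NEXT_PUBLIC_WS_URL=ws://SERVER_IP:3001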

ItzCrazyKns commented 2 months ago

If you are not able to see the model selector dropdown or the embedding model selector then update to the latest version. It has a patch for it.

NityaSG commented 2 months ago

Screenshot 2024-05-06 161939

Maybe something I'm doing wrong in the setup.

ItzCrazyKns commented 2 months ago

You are accessing it from your network. It is intended to be accessed from localhost; we do not provide support for setting it up on a network, only for installing it locally.

NityaSG commented 2 months ago

For the people who are deploying on Docker container and hosting it in a VM but getting endless loading :

  1. In docker-compose.yaml:

     perplexica-frontend: build: context: . dockerfile: app.dockerfile args: - NEXT_PUBLIC_API_URL=http://127.0.0.1:3001/api - NEXT_PUBLIC_WS_URL=ws://127.0.0.1:3001

     Change the 127.0.0.1 to your external IP address.

  2. Similarly, in searxng-settings.yaml change bind_address: '127.0.0.1' to your external IP address.

  3. If you get a CORS error: open ui/components/ChatWindow.tsx and add a CORS header like this:

     if (!chatModel || !chatModelProvider) { const chatModelProviders = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/models`, { method: 'GET', headers: { 'Access-Control-Allow-Origin': '*' }, }).then(async (res) => (await res.json())['providers']);

alexanderccc commented 1 month ago

For the people who are deploying on Docker container and hosting it in a VM but getting endless loading :

1. in Docker-compose.yaml
   `perplexica-frontend: build: context: . dockerfile: app.dockerfile args: - NEXT_PUBLIC_API_URL=http://127.0.0.1:3001/api - NEXT_PUBLIC_WS_URL=ws://127.0.0.1:3001`

change the 127.0.0.1 to your external ip address

2. Similarly in searxng-settings.yaml  change
   ` bind_address: '127.0.0.1'` to your external ip address

3. If you get CORS error : open ui/components/ChatWindow.tsx and add cors header like this :
   `if (!chatModel || !chatModelProvider) { const chatModelProviders = await fetch(`${process.env.NEXT_PUBLIC_API_URL}/models`,{ method: 'GET', **headers: { 'Access-Control-Allow-Origin': '*', },** } ).then(async (res) => (await res.json())['providers']);`

Yeah, this was my issue; I was setting the APIs as part of the env definition for the frontend instead of defining them at build time. A runtime configuration for the frontend would be great to resolve this sort of issue. One thing to note: if running encrypted, make sure to set the endpoints as https:// and wss://. I've handled the CORS issues at the reverse proxy.

zhuozhiyongde commented 1 month ago

Yeah, this was my issue, was setting the apis as part of the env definition for the frontend instead of defining them at build. A runtime configuration for the frontend would be great to resolve this sort of issue. One thing to note, if running encrypted, make sure to set the endpoints as https:// and wss:// I've handled the CORS issues at the reverse proxy

@alexanderccc hey, could you please share your config? Like the Nginx config and docker-compose.yaml?

I also tried to configure CORS, but it does not work.

Here is my backend config:

location ^~ / {
    proxy_pass http://127.0.0.1:31338; 
    proxy_set_header Host $host; 
    proxy_set_header X-Real-IP $remote_addr; 
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; 
    proxy_set_header REMOTE-HOST $remote_addr; 
    proxy_set_header Upgrade $http_upgrade; 
    proxy_set_header Connection "upgrade"; 
    proxy_set_header X-Forwarded-Proto $scheme; 
    proxy_http_version 1.1; 
    add_header X-Cache $upstream_cache_status; 
    add_header Strict-Transport-Security "max-age=31536000"; 

    # CORS
    add_header 'Access-Control-Allow-Origin' '*'; 
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range';
}

and docker-compose.yaml:

services:
  searxng:
    image: docker.io/searxng/searxng:latest
    volumes:
      - ./searxng:/etc/searxng:rw
    ports:
      - 31336:8080
    networks:
      - perplexica-network

  perplexica-backend:
    build:
      context: .
      dockerfile: backend.dockerfile
      args:
        - SEARXNG_API_URL=http://searxng:8080
    depends_on:
      - searxng
    ports:
      - 31338:31338
    networks:
      - perplexica-network

  perplexica-frontend:
    build:
      context: .
      dockerfile: app.dockerfile
      args:
        - NEXT_PUBLIC_API_URL=https://apidomain.of.myself/api
        - NEXT_PUBLIC_WS_URL=wss://apidomain.of.myself
        - PORT=31337
    depends_on:
      - perplexica-backend
    ports:
      - 31337:31337
    networks:
      - perplexica-network

networks:
  perplexica-network:

Both the backend and frontend utilize a reverse proxy.

Error log:

Failed to load resource: the server responded with a status of 404 ()           https://apidomain.of.myself/library?_rsc=acgkz

WebSocket connection to 'wss://apidomain.of.myself/?chatModel=GPT-3.5+turbo&chatModelProvider=openai&embeddingModel=Text+embedding+3+small&embeddingModelProvider=openai' failed: There was a bad response from the server.

zhuozhiyongde commented 1 month ago

Do I also need to replace the address/port of searXNG with an external HTTPS domain? If so, could you please tell me how to configure it? Thank you!

NityaSG commented 1 month ago

Do I also need to replace the address/port of searXNG with an external HTTPS domain? If so, could you please tell me how to configure it? Thank you!

Yes, in searxng-settings.yaml change bind_address: '127.0.0.1' to your external ip address

alexanderccc commented 1 month ago

I didn't have to touch the searxng config. I don't think it needs to be exposed, as the backend uses it and not the frontend, so as long as the endpoint for it is defined correctly in the backend config (config.toml), you should be good.

For CORS I'm using Kong as an ingress controller which has a built-in plugin for handling it.

In your nginx config you need to handle the CORS preflight check as part of an OPTIONS call; this post from Stack Overflow might help you with that setup:

https://stackoverflow.com/questions/45986631/how-to-enable-cors-in-nginx-proxy-server
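
A rough sketch of that approach, adapted from the linked answer (not verified against this exact setup): answer the preflight directly in the location block before proxying.

location ^~ / {
    # answer CORS preflight requests with 204 instead of forwarding them to the backend
    if ($request_method = 'OPTIONS') {
        add_header 'Access-Control-Allow-Origin' '*';
        add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
        add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range';
        add_header 'Access-Control-Max-Age' 1728000;
        add_header 'Content-Length' 0;
        return 204;
    }

    proxy_pass http://127.0.0.1:31338;
    # ... the remaining proxy_set_header and CORS add_header lines from the config above ...
}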

Also, be aware of the APIs that you're exposing. I think you would need to do an exact match on / for the frontend, and the backend would need to be exposed for wss and the /api path, but I might be wrong. To keep it simple, I exposed them with two different hosts (perplexica and perplexica-be), both on / with Prefix.

zhuozhiyongde commented 1 month ago

I didn't have to touch the searxng config, I don't think it needs to be exposed as the backend uses it and not the frontend, so, as long as in the backend config (config.toml) the endpoint for it is defined correctly, you should be good.

For CORS I'm using Kong as an ingress controller which has a built-in plugin for handling it.

In your config for nginx you need to handle the CORS preflight check as part of an OPTIONS call, this post from Stackoverflow might help you with that setup

https://stackoverflow.com/questions/45986631/how-to-enable-cors-in-nginx-proxy-server

Also, be aware on the APIs that you're exposing, I think you would need to do an exact match on / for the frontend and the backend would need to be exposed for wss and the /api path, but I might be wrong, to keep it simple I exposed them with 2 different hosts (perplexica and perplexica-be) both on / with Prefix

Following the solution you mentioned, I did not modify SearXNG. Instead, I moved the CORS-related configuration from the reverse proxy to the server block, but it still didn't work. When I click on this wss:// link, my backend returns a 502 Bad Gateway page. Are there any other configurations that need to be changed? Thank you for your reply.

ItzCrazyKns commented 1 month ago

Hi everyone, yesterday there were some issues related to networking; those are fixed now, so a few of the issues should be resolved. We unfortunately do not provide support for connecting Perplexica to a domain, only for installing it, so we cannot help you with this.

zhuozhiyongde commented 1 month ago

Hi everyone, yesterday there were some issues related to networking those are fixed now so a few of the issues should be resolved. We unfortunately do not provide support to connect Perplexica to a domain but rather install it so we cannot help you in this

So I have no way to expose it on the public network and use it through a domain name, right?

alexanderccc commented 1 month ago

I wouldn't say that there's no way; there is always a way to do something :sweat_smile:. But there is a difference between the project supporting it or not; I imagine @ItzCrazyKns wants to spend time actually developing the thing rather than providing support for running it remotely.

I for one have been able to run this on my home lab on kubernetes connected to ollama and I'm using it from my pc/laptop or phone, and honestly, it's been pretty awesome to run queries from my phone and test it out that way :grin:

Maybe we can close this off and open a separate issue to document the various setups for running it remotely?

zhuozhiyongde commented 1 month ago

I wouldn't say that there's no way, there is always a way to do something 😅, but there is a difference between the project supporting it or not, I imagine @ItzCrazyKns wants to spend time actually developing the thing rather than providing support for running it remotely.

I for one have been able to run this on my home lab on kubernetes connected to ollama and I'm using it from my pc/laptop or phone, and honestly, it's been pretty awesome to run queries from my phone and test it out that way 😁

Maybe we can close this off and open a separate issue to document the various setups for running it remotely?

I apologize for my inappropriate expression (and possibly disturbing emails). I did not intend to waste everyone's time; it's just that after genuinely trying and still failing, I felt quite discouraged and thus became a bit anxious. I think it would indeed be better to open a new issue.

ItzCrazyKns commented 1 month ago

Yes, I've created a new discussion here: https://github.com/ItzCrazyKns/Perplexica/discussions/111. There, everyone can ask questions about remote deployment, and the community can provide support and help them overcome their issues; I myself would love to help when I am free. @digitalw00t, I hope your issue is resolved; if not, please continue here, otherwise I will mark this as closed in 24 hours.