ItzCrazyKns / Perplexica

Perplexica is an AI-powered search engine. It is an open-source alternative to Perplexity AI.
MIT License

Perplexica not working on fresh installation #180

Open · mystogan99 opened 3 months ago

mystogan99 commented 3 months ago

Describe the bug: After installing and running Perplexica, the application does not load; it keeps loading indefinitely. I followed the installation instructions to the letter, and there are no errors in the logs.

To Reproduce: Steps to reproduce the behavior:

  1. Follow the installation instructions as provided in the documentation.
  2. Run the application.

Expected behavior: The application should load and display the main interface.

Screenshots:

[Screenshot (2024-06-09): the interface stuck on the loading state]
mystogan99 commented 3 months ago

I have added the size prop as an optional size?: number type on the props given to the ThemeSwitcher component, at ui/components/theme/Switcher.tsx. This is the only change I made, and it should not cause any trouble.

LalithShiyam commented 3 months ago

Many thanks for bringing this up. I have the same issue as well.

bertusviljoen commented 3 months ago

Had the same issue

Amirjab21 commented 3 months ago

Are you guys running this on Docker?

I couldn't get it working without Docker, as you have to run SearxNG at the same time.

If you pull and run the public Docker image, it runs all the components for you and it should work. (You still have to toggle the model selection in the UI to choose your model.)
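For anyone trying that route, here is a minimal sketch of the Docker path, assuming the repo's README flow (with its sample.config.toml) and the default ports:

    # Clone the repo and bring up all components (frontend, backend, SearxNG)
    git clone https://github.com/ItzCrazyKns/Perplexica.git
    cd Perplexica
    cp sample.config.toml config.toml   # then fill in API keys/endpoints as needed
    docker compose up -d
    # The UI should then be reachable at http://localhost:3000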

LalithShiyam commented 3 months ago

Hi @Amirjab21, thanks for chiming in. I do run via Docker (Ubuntu); everything builds successfully (frontend, backend, SearxNG). But when I go to the page using my private IP and port, it just keeps 'loading'. I can't choose much, as everything is just loading, as shown in the screenshot.

This project looks so cool, that I can't wait to get it up and running 😄

Amirjab21 commented 3 months ago

> Hi @Amirjab21, thanks for chiming in. I do run via Docker (Ubuntu); everything builds successfully (frontend, backend, SearxNG). But when I go to the page using my private IP and port, it just keeps 'loading'. I can't choose much, as everything is just loading, as shown in the screenshot.
>
> This project looks so cool, that I can't wait to get it up and running 😄

Did you remove the size attribute from the ThemeSwitcher component in the UI? I had to do this, as it was causing a TypeScript error.

LalithShiyam commented 3 months ago

Absolutely, I had to, because the Docker image won't build without that change; the build stops at the yarn build step.

Amirjab21 commented 3 months ago

> Absolutely, I had to, because the Docker image won't build without that change; the build stops at the yarn build step.

Which LLM are you using?

Here's my config.toml file:

    [GENERAL]
    PORT = 3001 # Port to run the server on
    SIMILARITY_MEASURE = "cosine" # "cosine" or "dot"

    [API_KEYS]
    OPENAI = "" # OpenAI API key - sk-1234567890abcdef1234567890abcdef
    GROQ = "" # Groq API key - gsk_1234567890abcdef1234567890abcdef

    [API_ENDPOINTS]
    SEARXNG = "http://localhost:8080" # SearxNG API URL
    OLLAMA = "http://host.docker.internal:11434" # Ollama API URL

I'm using Llama3 on Ollama (make sure Ollama is running)
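A quick way to confirm Ollama is actually reachable before debugging further (its root endpoint replies with a plain-text status):

    # Should print "Ollama is running"
    curl http://localhost:11434
    # List the models the server can serve
    ollama list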

I also had to make a copy of the .env file in the ui folder.

LalithShiyam commented 3 months ago

I have Ollama installed, with llama3 and llama3:70b set up. And I can confirm that Ollama is running; see the attachment.

Screenshots:

[Screenshot: Ollama service up and running]

This is my config.toml

[GENERAL]
PORT = 3001 # Port to run the server on
SIMILARITY_MEASURE = "cosine" # "cosine" or "dot"

[API_KEYS]
OPENAI = "" # OpenAI API key - sk-1234567890abcdef1234567890abcdef
GROQ = "" # Groq API key - gsk_1234567890abcdef1234567890abcdef

[API_ENDPOINTS]
SEARXNG = "http://localhost:32768" # SearxNG API URL
OLLAMA = "http://host.docker.internal:11434"

But I don't think I made a copy of the .env file into the UI folder though...

Also, should the SearxNG API port be 8080?
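(Note: 8080 is the container-side port; Docker may publish it on an ephemeral host port such as 32768. A quick way to check the actual mapping:)

    # Show which host port the SearxNG container's 8080 is published on
    sudo docker ps --format '{{.Names}}\t{{.Ports}}' | grep -i searxng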

LalithShiyam commented 3 months ago

Hi everyone,

I managed to resolve the issue, and I'd like to share the steps I followed. It turns out I missed some details in the documentation, particularly in the Networking.md file. Below are the steps I took to get everything working smoothly.

Steps to Resolve the Issue

  1. Verify Ollama is running: ensure that the Ollama service is up and running if you are using it.

  2. Bring Down Existing Docker Services

    sudo docker compose down
  3. Update docker-compose.yaml: modify your docker-compose.yaml file to use the correct IP address of the server hosting Perplexica. Update the relevant sections as shown below:

     perplexica-frontend:
       build:
         context: .
         dockerfile: app.dockerfile
       args:
         - NEXT_PUBLIC_API_URL=http://xxx.xxx.xxx.xx:3001/api
         - NEXT_PUBLIC_WS_URL=ws://xxx.xxx.xxx.xx:3001 # Replace xxx.xxx.xxx.xx with your server's IP address
       depends_on:
         - perplexica-backend
       ports:
         - 3000:3000
  4. Update config.toml: similarly, update the config.toml file to reflect the correct IP addresses for the services. Here's an example of how the configuration might look:

    [GENERAL]
    PORT = 3001  # Port to run the server on
    SIMILARITY_MEASURE = "cosine"  # Choose between "cosine" or "dot"
    
    [API_KEYS]
    OPENAI = ""  # Your OpenAI API key
    GROQ = ""  # Your Groq API key
    
    [API_ENDPOINTS]
    SEARXNG = "http://xxx.xxx.xxx.xx:32768"  # Replace xxx.xxx.xxx.xx with your server's IP address for SearxNG
    OLLAMA = "http://xxx.xxx.xxx.xx:11434"  # Replace xxx.xxx.xxx.xx with your server's IP address for Ollama
  5. Rebuild and restart Docker services: after updating the configuration files, rebuild and restart your Docker services:

    sudo docker compose up -d --build
  6. Access Perplexica: once the services are up, access Perplexica in your web browser at:

    http://xxx.xxx.xxx.xx:3000

    Replace xxx.xxx.xxx.xx with your server’s IP address.

Enjoy using Perplexica!

Cheers and hope it helps, Lalith
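If the interface still spins after these steps, a few quick smoke tests (assuming the ports from the config above) can narrow down which component is unreachable:

    # Ollama: should answer "Ollama is running"
    curl http://xxx.xxx.xxx.xx:11434
    # SearxNG: should return HTTP 200 for the search page
    curl -s -o /dev/null -w '%{http_code}\n' http://xxx.xxx.xxx.xx:32768
    # Perplexica backend: the port must be reachable (and allowed through any firewall)
    curl -s -o /dev/null -w '%{http_code}\n' http://xxx.xxx.xxx.xx:3001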

N1RM4L13 commented 3 months ago

@LalithShiyam after following the steps you mentioned, I still face issues with loading Ollama.

LalithShiyam commented 3 months ago

Hi @N1RM4L13, when you say issues loading Ollama, can you elaborate a bit more?

N1RM4L13 commented 3 months ago

@LalithShiyam As you can see, Ollama is active: [Screenshot 2024-06-11 140249]

And when running Docker, I get the following error: [Screenshot 2024-06-11 140921]

I have specified the server URL in the config.toml file.

LalithShiyam commented 3 months ago

Strange, I haven't seen this error so far... Just to confirm, can you please check http://xxx.xxx.xxx.xx:port (in your browser) where Ollama is supposed to run, and let me know if you see something like this?

[Screenshot: the response Ollama shows in the browser]

N1RM4L13 commented 3 months ago

Not able to see this in the browser, but my port 11434 is open.

LalithShiyam commented 3 months ago

I think this might be the reason, but I am not sure. Are you able to run Ollama from your terminal (ollama run llama3), and what happens if you do the same with ollama serve?

N1RM4L13 commented 3 months ago

> I think this might be the reason, but I am not sure. Are you able to run Ollama from your terminal (ollama run llama3), and what happens if you do the same with ollama serve?

Okay, I fixed this by running ollama serve in one terminal and the docker command in another, but now it doesn't show the list of Ollama models present on my server. And yes, I'm able to run models locally.

N1RM4L13 commented 3 months ago

> [Quoting @LalithShiyam's resolution steps from above.]

Hello everyone, if anyone faces this error after following the steps mentioned by @LalithShiyam ([Screenshot 2024-06-11 140921]), here is what worked for me:

  1. Run OLLAMA_HOST=0.0.0.0 ollama serve (make sure the port number is open in your inbound rules; the default port is 11434). Check by typing http://x.xxx.xxx.xx:11434 in the browser; it should show the following: [Screenshot 2024-06-11 145018]. A way to make this setting persistent is sketched after these steps.

  2. Run the command ollama run (model_name).

  3. Now run docker compose up --build.
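If you don't want to keep a terminal open for OLLAMA_HOST=0.0.0.0 ollama serve, the variable can be carried by the service instead. A sketch for a systemd-based Ollama install, following Ollama's documented override approach:

    # Open an override file for the ollama service
    sudo systemctl edit ollama.service
    # In the override, add:
    #   [Service]
    #   Environment="OLLAMA_HOST=0.0.0.0"
    # Then reload and restart
    sudo systemctl daemon-reload
    sudo systemctl restart ollama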

LalithShiyam commented 3 months ago

Great @N1RM4L13, so everything is working now, right?

Just10McGill commented 3 months ago

Thank you guys so much, I was having this problem as well.

N1RM4L13 commented 3 months ago

> Great @N1RM4L13, so everything is working now, right?

Yes, thanks for the fix above 💪🏻

LalithShiyam commented 3 months ago

@N1RM4L13 Fantastic - glad it worked out! 🥳

menelic commented 3 months ago

Unfortunately, I cannot get it to work on Ubuntu 22.04, with Ollama up and running:

[Screenshot: Ollama service active]

and with all of @LalithShiyam's instructions followed.

The Perplexica page can be opened, and it first shows the Perplexica interface, but typing anything in there does not lead to any results. When clicking on, say, the image search option, the Perplexica interface changes to read:

[Screenshot: internal server error message]

I have shut down and brought Docker back up several times, and I have run Ollama both inside and outside the directory, in the same terminal and in another... Please assist, I really want to get this running!

LalithShiyam commented 3 months ago

Hi @menelic, it seems like this could be a new error altogether, because I never got the internal server error. But I might be wrong. And it was Ubuntu 22.04 for me as well.

Ranganaths commented 3 months ago

Even I get the same error.

braindotai commented 3 months ago

Same issue here; I just don't understand... Ollama is running locally on my system. The Docker backend simply has to make a request to Ollama's URL on the host (local)...

extra_hosts:
    - "host.docker.internal:host-gateway"

The above snippet in docker-compose.yaml should be good to go if everything else is implemented correctly. What's the big deal with this? It should not be that hard to figure out.
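One quick way to verify that the host-gateway mapping actually resolves from inside a container (assuming Ollama is listening on the host's port 11434):

    # From a throwaway container, hit Ollama on the host via host-gateway
    docker run --rm --add-host=host.docker.internal:host-gateway \
      curlimages/curl -s http://host.docker.internal:11434
    # Expected output: "Ollama is running"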

mystogan99 commented 3 months ago

Is it just me, or are the results extremely slow for everyone? I am running this on a CPU-only instance: Ubuntu 23.10, 128 GB RAM, 20 vCPUs.

creuzerm commented 3 months ago

The additional configuration in config.toml and docker-compose.yaml fixed my forever-loading wheel as well, on Ubuntu 24 with Docker. So I think we have a deficiency in the quick-start instructions.

jiafu83 commented 2 months ago

> [Quoting @LalithShiyam's resolution steps from above.]

Thanks, this really solves my problem. The key is to give the real IP address instead of 127.0.0.1, so that other computers on the LAN can use it.

gerhardmpl commented 2 months ago

The forever-loading wheel may also mean that your firewall is blocking port 3001/tcp. So check your firewall after you have changed config.toml and docker-compose.yaml (which is indeed necessary).
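On Ubuntu, for example, that check might look like this (a sketch assuming ufw is the active firewall):

    # See whether the Perplexica ports are currently allowed
    sudo ufw status
    # Open the frontend and backend ports if they are blocked
    sudo ufw allow 3000/tcp
    sudo ufw allow 3001/tcp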

ncecere commented 2 months ago

The thing I forgot to do was run

docker compose down --rmi all

because of

  perplexica-frontend:
    build:
      context: .
      dockerfile: app.dockerfile
    args:
      - NEXT_PUBLIC_API_URL=http://xxx.xxx.xxx.xx:3001/api
      - NEXT_PUBLIC_WS_URL=ws://xxx.xxx.xxx.xx:3001 # Replace xxx.xxx.xxx.xx with your server's IP address
    depends_on:
      - perplexica-backend
    ports:
      - 3000:3000

The arguments NEXT_PUBLIC_API_URL and NEXT_PUBLIC_WS_URL are used when building the frontend container. I just kept doing a docker compose down and never rebuilt the container images with the new IP information.
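In other words, a restart is not enough once the IP changes; the images have to be rebuilt so the new values are baked in:

    # Remove the stale images so the NEXT_PUBLIC_* build args are re-baked
    docker compose down --rmi all
    docker compose up -d --build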

superwhyun commented 2 months ago

A simple solution for me:

  1. Stop the Docker containers: sudo docker-compose down
  2. Edit the config file as the README says.
  3. Rebuild: sudo docker-compose build perplexica-backend
  4. Restart: sudo docker-compose up -d

Since the backend Docker image just copies the config file into itself at build time, it needs to be rebuilt.
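Putting the last two comments together: config.toml is baked into the backend image and the NEXT_PUBLIC_* args into the frontend image, so a hedged rule of thumb is to rebuild whichever image's inputs changed, e.g.:

    # Rebuild both images after changing config.toml and/or docker-compose.yaml args
    sudo docker-compose build perplexica-backend perplexica-frontend
    sudo docker-compose up -d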