mystogan99 opened this issue 3 months ago
I have added the prop type as size?: number
for the prop given to the ThemeSwitcher
component, at path ui/components/theme/Switcher.tsx
. This is the only change I made, and it should not cause any trouble.
Many thanks for bringing this up. I have the same issue as well.
Had the same issue
Are you guys running this on Docker?
I couldn't get it working without Docker as you have to run searxng at the same time.
If you pull and run the public docker image, it runs all components for you and it should work. (You still have to toggle the model selection on the UI to choose your model)
Hi @Amirjab21, thanks for chiming in. I do run via Docker (Ubuntu OS), and everything builds successfully (frontend, backend, SearxNG). When I go to the page using my private IP and port, it just keeps 'loading'. I can't choose much, as everything is just loading, as shown in the screenshot.
This project looks so cool, that I can't wait to get it up and running 😄
Did you remove the size attribute from the ThemeSwitcher component on the UI? I had to do this as it was causing a TypeScript error.
Absolutely, I had to, because the Docker image won't build without this and stops at the yarn build step.
Which LLM are you using?
Here's my config.toml file:
[GENERAL]
PORT = 3001 # Port to run the server on
SIMILARITY_MEASURE = "cosine" # "cosine" or "dot"
[API_KEYS]
OPENAI = "" # OpenAI API key - sk-1234567890abcdef1234567890abcdef
GROQ = "" # Groq API key - gsk_1234567890abcdef1234567890abcdef
[API_ENDPOINTS]
SEARXNG = "http://localhost:8080" # SearxNG API URL
OLLAMA = "http://host.docker.internal:11434" # Ollama API URL - http://host.docker.internal:11434
I'm using Llama3 on Ollama (make sure Ollama is running)
Also had to make a copy of the .env file on the ui folder
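For reference, that copy step can be done from the repo root; the exact env file name is an assumption here (it may be .env or .env.example in your checkout), so adjust accordingly:

```shell
# Copy the env file into the ui folder so the frontend build picks it up.
# File names are assumptions; check your repo for the actual ones.
cp .env ui/.env
```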
I have Ollama installed, with llama3 and llama3:70b set up. And I confirm that Ollama is running, see attachment.
Screenshots:
This is my config.toml
[GENERAL]
PORT = 3001 # Port to run the server on
SIMILARITY_MEASURE = "cosine" # "cosine" or "dot"
[API_KEYS]
OPENAI = "" # OpenAI API key - sk-1234567890abcdef1234567890abcdef
GROQ = "" # Groq API key - gsk_1234567890abcdef1234567890abcdef
[API_ENDPOINTS]
SEARXNG = "http://localhost:32768" # SearxNG API URL
OLLAMA = "http://host.docker.internal:11434"
But I don't think I made a copy of the .env file into the UI folder though...
Also should the SearxNG API port be 8080?
Hi everyone,
I managed to resolve the issue, and I'd like to share the steps I followed. It turns out I missed some details in the documentation, particularly in the Networking.md
file. Below are the steps I took to get everything working smoothly.
Verify Ollama is running. Ensure that the Ollama service is up and running if you are using it.
Bring Down Existing Docker Services
sudo docker compose down
Update docker-compose.yaml
Modify your docker-compose.yaml
file to use the correct IP address of the server hosting Perplexica. Update the relevant sections as shown below:
perplexica-frontend:
build:
context: .
dockerfile: app.dockerfile
args:
- NEXT_PUBLIC_API_URL=http://xxx.xxx.xxx.xx:3001/api
- NEXT_PUBLIC_WS_URL=ws://xxx.xxx.xxx.xx:3001 # Replace xxx.xxx.xxx.xx with your server's IP address
depends_on:
- perplexica-backend
ports:
- 3000:3000
Update config.toml
Similarly, update the config.toml
file to reflect the correct IP addresses for the services. Here’s an example of how the configuration might look:
[GENERAL]
PORT = 3001 # Port to run the server on
SIMILARITY_MEASURE = "cosine" # Choose between "cosine" or "dot"
[API_KEYS]
OPENAI = "" # Your OpenAI API key
GROQ = "" # Your Groq API key
[API_ENDPOINTS]
SEARXNG = "http://xxx.xxx.xxx.xx:32768" # Replace xxx.xxx.xxx.xx with your server's IP address for SearxNG
OLLAMA = "http://xxx.xxx.xxx.xx:11434" # Replace xxx.xxx.xxx.xx with your server's IP address for Ollama
Rebuild and restart Docker services. After updating the configuration files, rebuild and restart your Docker services:
sudo docker compose up -d --build
Access Perplexica. Once the services are up, access Perplexica via your web browser at:
http://xxx.xxx.xxx.xx:3000
Replace xxx.xxx.xxx.xx
with your server’s IP address.
Enjoy using Perplexica!
Cheers and hope it helps, Lalith
@LalithShiyam after following the steps you mentioned, I still face issues with loading Ollama.
Hi @N1RM4L13, when you say issues in loading Ollama, can you elaborate a bit?
@LalithShiyam As you can see, Ollama is active.
And when running Docker, I get the following error.
I have specified the server URL in the config.toml file.
Strange, I haven't seen this error so far... Just to confirm, can you please check http://xxx.xxx.xxx.xx:port (in your browser) where Ollama is supposed to run, and let me know if you see something like this?
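If a browser isn't handy, the same check can be sketched with curl; a reachable Ollama instance replies with a short "Ollama is running" message on its root endpoint (IP placeholder kept as in the thread):

```shell
# Replace xxx.xxx.xxx.xx with the host where Ollama runs.
curl http://xxx.xxx.xxx.xx:11434
```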
I'm not able to see this in the browser, but my port 11434 is open.
I think this might be the reason, but I am not sure. Are you able to run Ollama from your terminal (ollama run llama3), and does the same work with ollama serve?
Okay, I fixed this by running ollama serve in one terminal and the docker command in another, but now it doesn't show the list of Ollama models present on my server. Yes, I'm able to run models locally.
Hello everyone, if anyone faces this error after following the steps mentioned by @LalithShiyam:
OLLAMA_HOST=0.0.0.0 ollama serve
(Make sure the port number is open in your inbound rules; the default port is 11434.) Check by typing http://x.xxx.xxx.xx:11434 in the browser. The browser will show the following.
Run the command ollama run (model_name)
Now run docker compose up --build
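The steps above can be collected into one sketch (the model name and the two-terminal split are examples, not prescriptions):

```shell
# Terminal 1: bind Ollama to all interfaces so Docker containers can reach it
# (make sure port 11434 is open in your inbound firewall rules).
OLLAMA_HOST=0.0.0.0 ollama serve

# Terminal 2: run the model you plan to use, then bring up the stack.
ollama run llama3
docker compose up --build
```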
Great @N1RM4L13 so everything is working now right?
Thank you guys so much, I was having this problem as well.
Great @N1RM4L13 so everything is working now right? Yes , thanks for the fix above 💪🏻
@N1RM4L13 Fantastic - glad it worked out! 🥳
Unfortunately I cannot get it to work on Ubuntu 22.04, with Ollama up and running
and with all of @LalithShiyam's instructions followed.
The Perplexica page can be opened; it first shows the Perplexica interface, but typing anything in there does not lead to any results. When clicking on, say, the image search option, the Perplexica interface changes to read:
I have shut down and re-upped Docker several times, and I have run Ollama both inside and outside the directory, in the same and in another terminal... please assist, I really want to get this running!
Hi @menelic, seems like this could be a new error altogether, because I never got the internal server error. But I might be wrong. And it was ubuntu 22.04 for me as well.
Even I get the same error.
Same issue here, and I just don't understand it... Ollama is running locally on my system, so the Docker backend simply has to make a request to Ollama's URL on the host (local)...
extra_hosts:
- "host.docker.internal:host-gateway"
The snippet above in docker-compose should be good to go if everything else is implemented correctly. What's the big deal with this? It should not be that hard to figure out.
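One way to verify that the host-gateway mapping actually works is to curl Ollama from inside the backend container; the container name here is an assumption, so check `docker ps` for the actual one:

```shell
# Should print "Ollama is running" if the host mapping and Ollama are both fine.
docker exec perplexica-backend curl -s http://host.docker.internal:11434
```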
Is it just me, or are the results extremely slow for everyone? I am running this on a CPU-only instance: Ubuntu 23.10, 128 GB RAM, 20 vCPUs.
The additional config in config.toml and docker-compose.yaml fixed my forever-loading wheel as well, on Ubuntu 24 with Docker. So I think we have a deficiency in the quick start instructions.
Thanks, this really solves my problem. The key is to give the real IP address, instead of 127.0.0.1, so that other computers on the LAN can use it.
The forever-loading-wheel may also mean that your firewall is blocking port 3001/tcp. So check your firewall after you have changed config.toml and docker-compose.yaml (which is indeed necessary).
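On Ubuntu with ufw, a quick check against this is to open both ports explicitly (the port numbers assume the defaults used in this thread):

```shell
sudo ufw allow 3000/tcp   # frontend
sudo ufw allow 3001/tcp   # backend API / websocket
sudo ufw status           # confirm the rules took effect
```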
The thing I forgot to do was run
docker compose down --rmi all
because of
perplexica-frontend:
build:
context: .
dockerfile: app.dockerfile
args:
- NEXT_PUBLIC_API_URL=http://xxx.xxx.xxx.xx:3001/api
- NEXT_PUBLIC_WS_URL=ws://xxx.xxx.xxx.xx:3001 # Replace xxx.xxx.xxx.xx with your server's IP address
depends_on:
- perplexica-backend
ports:
- 3000:3000
The arguments NEXT_PUBLIC_API_URL and NEXT_PUBLIC_WS_URL are used when building the frontend container. I just kept doing a docker compose down and never rebuilt the container images with the new IP information.
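Since those NEXT_PUBLIC_* values are build args, they get baked into the image at build time; a sketch of the full rebuild cycle, using the flags from this thread:

```shell
# Tear down and delete the stale images, then rebuild with the new IPs.
docker compose down --rmi all
docker compose up -d --build
```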
A simple solution for me.
sudo docker-compose down
sudo docker-compose build perplexica-backend
sudo docker-compose up -d
Since the backend docker image just copies the config file into it, it needs to be rebuilt.
Describe the bug: After installing and running Perplexica, the application does not load and keeps loading indefinitely. I have followed the installation instructions to the letter. There are no errors in the logs.
To Reproduce: Steps to reproduce the behavior:
Expected behavior: The application should load and display the main interface.
Screenshots