Closed: nguyentran0212 closed this issue 3 months ago
What can you see at http://127.0.0.1:3001?
@ItzCrazyKns at 127.0.0.1:3001, I see
Cannot GET /
@ItzCrazyKns I also encountered the same problem (backend logs attached). Is it a problem with my proxy or my OpenAI key?
Are you using a custom OpenAI endpoint or the normal OpenAI provider? The error is unrelated to Perplexica; it seems to be an issue with your configuration.
@ItzCrazyKns at 127.0.0.1:3001, I see
Cannot GET /
I use Ollama. The backend picks up the correct model from my Ollama, but it seems the front end cannot communicate with the backend via 127.0.0.1.
I tested with both Safari and Chrome.
My device is a MacBook Pro M1 running macOS 14.3.1.
The firewall is disabled on my MacBook.
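A quick way to test that, assuming the default backend port 3001 and the api/model route mentioned in this issue, is to curl the backend directly from the same machine:
curl -v http://127.0.0.1:3001/api/model
If that also returns 403 or fails, the browser is not the problem; the backend itself is rejecting the request.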
That's strange. Can you try re-installing Perplexica, or provide me with the exact steps so I can try to reproduce it?
May I ask a question? Due to network restrictions on my server, I am unable to access SearxNG, so I would like to separate the front end and back end. As I understand it, the setup has four parts: the front end, the back end, SearxNG, and the model service, and as long as SearxNG can reach the network through a VPN it should work normally. I now have two environments: on my Mac, Perplexica searches normally, but on my Linux server it does not. The above is my configuration. My Ollama shows that a call came in, but the backend logs ... @ItzCrazyKns
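One way to narrow this down is to check whether the Linux server can reach SearxNG at all with a direct curl, substituting your actual SearxNG URL from the Perplexica config (config.toml) for the placeholder:
curl -s -o /dev/null -w "%{http_code}\n" http://<searxng-host>:8080/
A 200 here means the network path to SearxNG works; a timeout points at the server's network restrictions.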
Can you try doing a curl request to the same address that you're using for Ollama, on the same computer as Perplexica?
curl <address>
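For example, with Ollama's default setup that would look like this (port 11434 is Ollama's default; adjust it to match your configuration):
curl http://localhost:11434/api/tags
If Ollama is reachable from that machine, this returns a JSON list of your local models.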
@ItzCrazyKns Hi, sorry for the late response. I pulled the latest version from GitHub 5 minutes ago and ran docker compose up -d again. This time it works. There were 4 new commits pulled, so I guess one of them did the trick.
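For reference, the update amounts to roughly this sketch (--build is optional but forces the images to be rebuilt against the new commits):
git pull
docker compose up -d --build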
Describe the bug
The frontend shows a "No chat models available" error. Upon inspection in the browser, I find that the GET request to http://127.0.0.1:3001/api/model returns a 403 error (see screenshots).
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Perplexica is usable.
Screenshots
403 error from the front end:
Response from api/model at 127.0.0.1:3001 (the Ollama model was found correctly):
Additional context
Running on macOS.