-
@ItzCrazyKns none of these LLM API providers (Ollama LLMs, Azure OpenAI LLMs, embedding models, or any other LLMs) work at all, despite making the changes described in the docs you cited there.
_Originall…
-
**Describe the bug**
The interface is slow to load; when it finally does, I receive an `Invalid connection` message. The settings page (also slow to load) doesn't detect the installed Ollama models.
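A quick way to narrow down an `Invalid connection` like this is to check whether the Ollama API is reachable at all, independently of the UI. Ollama's `/api/tags` endpoint lists installed models. A minimal probe sketch, assuming the default base URL `http://localhost:11434` (adjust for your container networking):

```python
import json
import urllib.error
import urllib.request

def list_ollama_models(base_url="http://localhost:11434", timeout=5):
    """Return the installed Ollama model names, or None if the API is unreachable."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
        # /api/tags returns {"models": [{"name": ...}, ...]}
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError, ValueError):
        # Connection refused, timeout, or non-JSON response: treat as unreachable
        return None

models = list_ollama_models()
if models is None:
    print("Ollama API unreachable; check the host/port and container networking")
else:
    print("installed models:", models)
```

If this returns `None` from inside the Perplexica backend container but works on the host, the problem is the container-to-host address (e.g. needing `host.docker.internal` instead of `localhost`), not Ollama itself.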
**To Reproduce**
Steps…
-
**Is your feature request related to a problem? Please describe.**
When I try to load the page, it just keeps loading (like in issue "eternal loading #101").
![screenshot](https://github.com/It…
-
Here are other errors that I am seeing in the searxng container. I am guessing they can be ignored, but I wanted to check with you.
```bash
2024-05-11 21:51:36,045 WARNING:searx.engines.internetarchi…
```
-
**Describe the bug**
On start:
I can connect to Ollama.
I can connect to llama3:latest.
Sending the first message "hi" to the model works fine,
but sending the second message "hi, again" to the model …
-
**Is your feature request related to a problem? Please describe.**
While Ollama does not yet support authentication directly, many things that expose an Ollama endpoint support authentication with th…
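In the common setup the request above describes, a reverse proxy in front of Ollama expects a Bearer token, so the client only needs to attach an `Authorization` header to each request. A minimal sketch of building such a request; the proxy URL and token value are placeholders, not part of Perplexica:

```python
import urllib.request

def authed_ollama_request(path, token, base_url="http://localhost:11434"):
    """Build a request carrying a Bearer token for an auth-protected Ollama proxy."""
    req = urllib.request.Request(f"{base_url}{path}")
    req.add_header("Authorization", f"Bearer {token}")
    return req

# "my-secret-token" is a placeholder; use whatever credential the proxy issues
req = authed_ollama_request("/api/tags", "my-secret-token")
```

Supporting this in Perplexica would amount to letting users configure such a header (or arbitrary headers) alongside the Ollama base URL.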
-
**Describe the bug**
I get an error message in step 5/5
=> ERROR [perplexica-frontend 5/5] RUN yarn build
**To Reproduce**
Steps to reproduce the behavior:
run the command: docker compose …
-
**Describe the bug**
The answer corresponds to the query language (e.g. Chinese or Japanese), but the related questions are still in English.
**To Reproduce**
Ask a question in Chinese, like "亚马逊河…
-
**Describe the bug**
It works on localhost:3000, but when I use domain:3000 from a remote PC it doesn't work. I am using a local Ollama.
**To Reproduce**
St…
-
@ItzCrazyKns Is this Perplexica repo already production ready? If not, when will it become production ready?
![Screenshot_20240418_155627_Chrome.png](https://github.com/ItzCrazyKns/Perplexica/assets/…