There are many self-hosted Perplexity clones out there. I chose to make my own because I was dissatisfied with their non-existent integration with other self-hosted services and lack of multi-user support. Perplexideez is backed by a Postgres database and either Ollama or OpenAI-compatible endpoints. It searches the web using a SearXNG instance.
Let AI do the hard work of sifting through search results for you.
Don't worry about hallucinations ruining your research. Just hover over the source annotation your LLM inserted and see the source it used. Click on it, and view the source directly.
Your LLM will generate great follow-up questions for you. This way, you can ask about whatever interested you in the response without typing a single word.
Stash the searches you love as favourites. This way, you'll never lose them.
Perplexideez lets you use different models for different tasks, as appropriate. Robust environment variables and UI configuration let you make sure your self-hosted resources are not overused.
Perplexideez supports multiple user accounts with separated data, as well as OIDC SSO. You can disable sign-up, password login, or both.
Perplexideez lets you share links to your search results with others. This way, you can easily send the interesting stuff to your friends.
When sharing a link, you can make sure only the people you want have access to it. Reroll the link's ID, require authentication to view, or disable it altogether.
Perplexideez creates beautiful embeds for all the links you share publicly. This way, the people you send it to know what they'll be looking at.
All of the containers provided by this project run as non-root by default. They're ready to be deployed in rootless environments.
Aside from an in-progress generation, the containers are fully stateless. The feature that blocks shutdown while a response is being generated is still a work in progress, but the containers are ready to run in a Kubernetes environment without rolling updates or higher replica counts causing problems.
`ghcr.io/brunostjohn/perplexideez/migrate` performs the required database migrations and prepares your Postgres instance for use with Perplexideez. The only environment variable it requires is `DATABASE_URL`.
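For example, the migration image can run as a one-shot service ahead of the app. A minimal Compose sketch (the service names and database URL here are illustrative, not the project's official stack):

```yaml
services:
  # Runs migrations once and exits; the app service should wait for it.
  migrate:
    image: ghcr.io/brunostjohn/perplexideez/migrate
    environment:
      DATABASE_URL: postgresql://postgres:postgres@db:5432/postgres?schema=public
    depends_on:
      - db
```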
`ghcr.io/brunostjohn/perplexideez/app` is the app itself. It requires the full set of environment variables mentioned below.

Use the example Compose files in `deploy/docker` to configure your own stack. These include the app, SearXNG, and a database. Use the `.env.example` to get started: rename it to `.env` and make sure all the required values from the tables below are filled out. The example stack provides neither Ollama nor an OpenAI-compatible endpoint. Setting that up is up to you.
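As a sketch, a filled-out `.env` might look like the following. Every value is a placeholder taken from the example columns of the tables below; substitute your own:

```env
DATABASE_URL=postgresql://postgres:postgres@db:5432/postgres?schema=public
PUBLIC_BASE_URL=https://perplexideez.domain.com
RATE_LIMIT_SECRET=<output of openssl rand -base64 32>
AUTH_SECRET=<output of openssl rand -base64 32>
SEARXNG_URL=http://searxng:8080
LLM_MODE=ollama
OLLAMA_URL=http://ollama:11434
```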
I am still working on the Helm chart for this app. Writing Helm charts is quite the process, so I've been putting it off. For now, please use my homelab Kubernetes manifests as an example of how to write your own to deploy this on your cluster.
Due to a lack of control over these environments and the high variance between them, deploying without the container images is unsupported; such issues will be closed as out of scope.
Name | Required | Value | Example |
---|---|---|---|
`DATABASE_URL` | ✅ | A URL to a Postgres database. | `postgresql://postgres:postgres@localhost:5432/postgres?schema=public` |
Name | Required | Value | Example |
---|---|---|---|
`PUBLIC_BASE_URL` | ✅ | The public-facing URL of your instance. | `https://perplexideez.domain.com` |
`RATE_LIMIT_SECRET` | ✅ | A secret generated with `openssl rand -base64 32`, used to secure your sign-in page. | N/A |
`AUTH_SECRET` | ✅ | A secret generated with `openssl rand -base64 32`, used to secure your instance. | N/A |
`DISABLE_SIGN_UP` | ❌ (default: `false`) | Whether to disable signing up to your instance. | `true`/`false` |
`LOG_LEVEL` | ❌ (default: `info`) | Which log level the app should use. | `trace`/`debug`/`info`/`warn`/`error` |
`LOG_MODE` | ❌ (default: `json`) | Whether to pretty-print logs or use JSON logging. | `pretty`/`json` |
`METRICS_PORT` | ❌ (default: `9001`) | The port on which Prometheus metrics are exposed. | `9001` |
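Both secrets can be generated with the `openssl` command the table mentions. A quick sketch that appends them to your `.env` (the file location is whatever you set up earlier):

```shell
# Generate the two required secrets and append them to .env
echo "RATE_LIMIT_SECRET=$(openssl rand -base64 32)" >> .env
echo "AUTH_SECRET=$(openssl rand -base64 32)" >> .env
```

Each invocation produces a fresh random value, so the two secrets will differ, as they should.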
Name | Required | Value | Example |
---|---|---|---|
`OIDC_CLIENT_ID` | ❌ | The client ID for your IDP. | N/A |
`OIDC_CLIENT_SECRET` | ❌ | The client secret for your IDP. | N/A |
`OIDC_ISSUER` | ❌ | The `.well-known` URL for your IDP. | `https://auth.authentik.com/application/o/perplexideez/.well-known/openid-configuration` |
`OIDC_SCOPES` | ❌ (default: `openid email profile`) | The OIDC scopes to request from your IDP. | `openid email profile` |
`PUBLIC_OIDC_NAME` | ❌ | The identity provider name to show in the app's UI. | Zefir's Cloud |
`DISABLE_PASSWORD_LOGIN` | ❌ (default: `false`) | Whether to disable password authentication and hide it from the UI. | `true`/`false` |

When configuring your IDP, set the callback URL to `https://perplexideez.yourdomain.com/auth/callback/generic-oauth`.
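Tying the OIDC variables together, a hedged `.env` sketch (the issuer URL and display name are illustrative; use your own IDP's values):

```env
OIDC_CLIENT_ID=perplexideez
OIDC_CLIENT_SECRET=<from your IDP>
OIDC_ISSUER=https://auth.example.com/application/o/perplexideez/.well-known/openid-configuration
OIDC_SCOPES=openid email profile
PUBLIC_OIDC_NAME=My Cloud
DISABLE_PASSWORD_LOGIN=true
```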
Name | Required | Value | Example |
---|---|---|---|
`SEARXNG_URL` | ✅ | The URL for your SearXNG instance. | `http://searxng:8080` |
Name | Required | Value | Example |
---|---|---|---|
`LLM_MODE` | ✅ (default: `ollama`) | Which LLM provider to use. | `ollama`/`openai` |
`LLM_SPEED_MODEL` | ✅ | The LLM to use for generating responses in "Speed" mode. | `gemma2:2b` |
`LLM_BALANCED_MODEL` | ✅ | The LLM to use for generating responses in "Balanced" mode. | `llama3.1:latest` |
`LLM_QUALITY_MODEL` | ✅ | The LLM to use for generating responses in "Quality" mode. | `qwen2.5:32b` |
`LLM_EMBEDDINGS_MODEL` | ✅ | The LLM to use for text embeddings. | `nomic-embed-text:latest` |
`LLM_TITLE_MODEL` | ✅ | The LLM to use for generating chat titles. | `llama3.1:latest` |
`LLM_EMOJI_MODEL` | ✅ | The LLM to use for generating chat emojis. | `llama3.1:latest` |
`LLM_IMAGE_SEARCH_MODEL` | ✅ | The LLM to use for image searching. | `llama3.1:latest` |
`LLM_VIDEO_SEARCH_MODEL` | ✅ | The LLM to use for video searching. | `llama3.1:latest` |
Required only if `LLM_MODE` is set to `openai`.
Name | Required | Value | Example |
---|---|---|---|
`OPENAI_BASE_URL` | ✅ | The base URL of your OpenAI-compatible endpoint. | `https://chat.domain.com/v1` |
`OPENAI_API_KEY` | ✅ | The API key to use for requests. | `sk-1234` |
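For the OpenAI-compatible path, a sketch using the example values from the table (the base URL and key are placeholders, not real credentials):

```env
LLM_MODE=openai
OPENAI_BASE_URL=https://chat.domain.com/v1
OPENAI_API_KEY=sk-1234
```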
Required only if `LLM_MODE` is set to `ollama`.
Name | Required | Value | Example |
---|---|---|---|
`OLLAMA_URL` | ✅ | The URL for your Ollama instance. | `http://ollama:11434` |
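Putting the Ollama path together, a hedged `.env` fragment using the example models from the LLM table above (swap in whichever models your instance actually serves):

```env
LLM_MODE=ollama
OLLAMA_URL=http://ollama:11434
LLM_SPEED_MODEL=gemma2:2b
LLM_BALANCED_MODEL=llama3.1:latest
LLM_QUALITY_MODEL=qwen2.5:32b
LLM_EMBEDDINGS_MODEL=nomic-embed-text:latest
LLM_TITLE_MODEL=llama3.1:latest
LLM_EMOJI_MODEL=llama3.1:latest
LLM_IMAGE_SEARCH_MODEL=llama3.1:latest
LLM_VIDEO_SEARCH_MODEL=llama3.1:latest
```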
To get a local copy up and running, follow these simple steps.

You need either an OpenAI API key (or an OpenAI-compatible API) or an Ollama instance running somewhere. The development container setup only provides Postgres and SearXNG.

1. Install pnpm: `corepack install pnpm`
2. Clone the repo: `git clone https://github.com/brunostjohn/perplexideez.git`
3. Install dependencies: `pnpm install`
4. Create a `.env` file using the `.env.example` file
5. Start the development containers: `pnpm dev:up`
6. Push the database schema: `pnpm db:push`
7. Run the dev server: `pnpm dev`
See the open issues for a full list of proposed features (and known issues).
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request

Distributed under the AGPL License.
Bruno St John - me@brunostjohn.com