
Farfalle

An open-source, AI-powered search engine (a Perplexity clone).

Run local LLMs (llama3, gemma, mistral, phi3), custom LLMs through LiteLLM, or cloud models (Groq/Llama3, OpenAI/gpt-4o).
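If you want local models, pull them with Ollama before starting the app (a quick sketch; it assumes the Ollama CLI is installed, and any of the model names listed above work):

ollama pull llama3
ollama pull phi3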

Demo video: https://github.com/rashadphz/farfalle/assets/20783686/9527a8c9-a13b-4e53-9cda-a3ab28d671b2

Please feel free to contact me on Twitter or open an issue if you have any questions.

💻 Live Demo

farfalle.dev (Cloud models only)

📖 Overview

  - 🛣️ Roadmap
  - 🛠️ Tech Stack
  - Features

🏃🏿‍♂️ Getting Started Locally

Prerequisites

Docker (the app is started with docker-compose) and, if you want to run local models, Ollama.

Get API Keys

Groq and/or OpenAI API keys, if you plan to use cloud models. These are optional and not required when running everything through Ollama.

Quick Start:

git clone https://github.com/rashadphz/farfalle.git
cd farfalle && cp .env-template .env

Modify .env with your API keys (optional; not required if you're using Ollama)
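For cloud models, the entries look something like this (variable names here are illustrative; use the exact names from .env-template):

# Illustrative variable names; confirm them against .env-template
OPENAI_API_KEY=your-openai-key
GROQ_API_KEY=your-groq-key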

Start the app:

docker-compose -f docker-compose.dev.yaml up -d

Wait for the app to start, then visit http://localhost:3000.
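If the page isn't up yet, you can check the container status or tail the logs with standard docker-compose commands:

docker-compose -f docker-compose.dev.yaml ps
docker-compose -f docker-compose.dev.yaml logs -f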

For custom setup instructions, see custom-setup-instructions.md.

🚀 Deploy

Backend

Deploy to Render

After the backend is deployed, copy the web service URL to your clipboard. It should look something like: https://some-service-name.onrender.com.
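To sanity-check the deployment, you can request the interactive API docs (assuming the FastAPI backend keeps its default /docs route; substitute your own service URL):

curl -I https://some-service-name.onrender.com/docs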

Frontend

Use the copied backend URL in the NEXT_PUBLIC_API_URL environment variable when deploying with Vercel.

Deploy with Vercel
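If you deploy from the command line instead of the dashboard, the variable can also be set through the Vercel CLI (a sketch; the dashboard works just as well):

# Prompts for a value; paste the backend URL copied above
vercel env add NEXT_PUBLIC_API_URL production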

And you're done! 🥳

Use Farfalle as a Search Engine

To use Farfalle as your default search engine, follow these steps:

  1. Open your browser's settings
  2. Go to 'Search Engines'
  3. Create a new search engine entry using this URL: http://localhost:3000/?q=%s (see the example after this list)
  4. Add the search engine.
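Your browser substitutes your search terms for %s, so a query like "ollama" opens:

http://localhost:3000/?q=ollama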