
FreeAskInternet

🎉🎉🎉 Yeah we have a logo now! 🎉🎉🎉

(logo)

Run a www.perplexity.ai-like app that is completely FREE, LOCAL, PRIVATE, and needs NO GPU, on any computer.

[!IMPORTANT]
If you are unable to use this project normally, it is most likely due to issues with your IP or your internet connection; you need unrestricted internet access to use this project. (If you cannot use this project normally, it is very likely a problem with your IP, or you cannot freely access the internet.)

What is FreeAskInternet

FreeAskInternet is a completely free, private, and locally running search aggregator and answer generator using an LLM, with no GPU needed. The user asks a question, and the system uses searxng to search across multiple engines, combines the search results, and passes them to the ChatGPT3.5 LLM, which generates an answer based on those results. Everything runs locally, and no GPU, OpenAI, or Google API keys are needed.

Features

Screenshots

  1. Index:

  (screenshot)

  2. Search-based AI chat:

  (screenshot)

  3. Multiple LLM models and custom LLMs like ollama supported:

  (screenshot)

How It Works?

  1. The system takes the user's question in the FreeAskInternet UI (running locally) and calls searxng (also running locally) to search across multiple search engines.
  2. It crawls the content of the search result links and passes it to ChatGPT3.5 / Kimi / Qwen / ZhipuAI / ollama (via the custom LLM setting), asking the LLM to answer the user's question using that content as references.
  3. The answer is streamed to the chat UI.
  4. Custom LLM settings are supported, so theoretically any LLM can be plugged in.
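The steps above can be sketched end to end. This is a minimal illustration under assumptions, not the project's actual backend code: it assumes a local searxng instance exposing its JSON API (`/search?q=...&format=json`) and an OpenAI-compatible chat endpoint; the URLs and helper names are ours.

```python
# Sketch of the search -> prompt -> LLM pipeline described above.
# Endpoint URLs and helper names are illustrative assumptions only.
import json
import urllib.parse
import urllib.request

SEARXNG_URL = "http://localhost:8080/search"            # assumed local searxng
LLM_URL = "http://localhost:8000/v1/chat/completions"   # assumed OpenAI-compatible endpoint

def search(query: str) -> list[dict]:
    """Query searxng's JSON API and return result dicts (title, url, content)."""
    url = f"{SEARXNG_URL}?q={urllib.parse.quote(query)}&format=json"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp).get("results", [])

def build_prompt(question: str, results: list[dict]) -> str:
    """Combine search snippets into a numbered references block for the LLM."""
    refs = "\n".join(
        f"[{i + 1}] {r.get('title', '')}: {r.get('content', '')}"
        for i, r in enumerate(results)
    )
    return (
        "Answer the question using ONLY the references below.\n\n"
        f"References:\n{refs}\n\nQuestion: {question}\nAnswer:"
    )

def ask_llm(prompt: str) -> str:
    """Send the combined prompt to an OpenAI-compatible chat endpoint."""
    body = json.dumps({
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    req = urllib.request.Request(
        LLM_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The real project also streams the answer token by token; this sketch returns it in one piece for brevity.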

Status

This project is still in its very early days. Expect some bugs.

Run the latest release

git clone https://github.com/nashsu/FreeAskInternet.git
cd ./FreeAskInternet
docker-compose up -d 

🎉 You should now be able to open the web interface at http://localhost:3000. Nothing else is exposed by default. (For the old web interface, visit http://localhost:3030.)
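As a quick sanity check after `docker-compose up -d`, you can probe whether the web interface is answering on the ports mentioned above. This is an optional helper we wrote for illustration, not part of the project:

```python
# Optional helper (not part of the project): probe whether a local
# service answers on a given URL within a short timeout.
import urllib.error
import urllib.request

def is_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if the URL answers any HTTP response, False otherwise."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server answered, even if with an error status
    except (urllib.error.URLError, OSError):
        return False  # connection refused / timeout / DNS failure

if __name__ == "__main__":
    print("new UI up:", is_up("http://localhost:3000"))
    print("old UI up:", is_up("http://localhost:3030"))
```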

How to get and set a Kimi / Qwen / ZhipuAI token

How to get a token

We use the https://github.com/LLM-Red-Team projects to provide those services; refer to their READMEs for details.

Reference : https://github.com/LLM-Red-Team/kimi-free-api

(screenshot: setting the token)

How to use a custom LLM like ollama (yes, we love ollama)

  1. Start the ollama server:
export OLLAMA_HOST=0.0.0.0
ollama serve
  2. Set the ollama URL in settings. You MUST use your computer's IP address, not localhost/127.0.0.1, because inside docker the containers cannot reach that address. The model name is the model you want ollama to serve. (screenshot: setting the custom LLM URL)

ollama model Reference : https://ollama.com/library
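To illustrate what the custom-LLM setting points at: ollama serves a generate endpoint at `http://<your-ip>:11434/api/generate`. The helper below is our sketch, not project code; the host IP and model name are placeholders. Note the host IP rather than localhost, as explained above.

```python
# Sketch: build a request for ollama's /api/generate endpoint.
# The host IP and model name are placeholders you must replace.
import json
import urllib.request

OLLAMA_PORT = 11434  # ollama's default port

def ollama_request(host_ip: str, model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for ollama's generate API on the given host."""
    url = f"http://{host_ip}:{OLLAMA_PORT}/api/generate"
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

# Example (replace 192.168.1.10 with your machine's IP):
req = ollama_request("192.168.1.10", "llama3", "Hello")
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) returns a JSON body whose `response` field holds the model's answer when `stream` is false.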

How to update to the latest version

cd ./FreeAskInternet
git pull
docker compose down
docker compose rm backend
docker compose rm free_ask_internet_ui
docker image rm nashsu/free_ask_internet
docker image rm nashsu/free_ask_internet_ui
docker-compose up -d

Credits

Special thanks to our logo designer

AdlerMurcus

License

Apache-2.0 license

Star History

Star History Chart