nashsu / FreeAskInternet

FreeAskInternet is a completely free, PRIVATE and LOCALLY running search aggregator & answer generator using MULTI LLMs, with no GPU needed. The user can ask a question, and the system will run a multi-engine search, feed the combined search results to an LLM, and generate an answer based on those results. It's all FREE to use.
Apache License 2.0
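To make the pipeline the description outlines concrete (multi-engine search, result aggregation, then LLM answer generation), here is a minimal sketch of that pattern. It is not the project's actual code: it assumes a local SearXNG-style metasearch instance with JSON output enabled and any OpenAI-compatible chat endpoint, and the URLs and model name are placeholders.

```python
# Minimal sketch of the search-then-answer pattern described above.
# Assumes a local metasearch instance (SearXNG-style JSON API) and an
# OpenAI-compatible chat endpoint; URLs and model are placeholders.
import requests

SEARCH_URL = "http://localhost:8080/search"              # hypothetical local metasearch
LLM_URL = "http://localhost:11434/v1/chat/completions"   # hypothetical local LLM endpoint

def search(query: str, limit: int = 5) -> list[dict]:
    """Query the metasearch engine and return the top results."""
    resp = requests.get(SEARCH_URL, params={"q": query, "format": "json"})
    resp.raise_for_status()
    return resp.json().get("results", [])[:limit]

def answer(query: str) -> str:
    """Build a prompt from search snippets and ask the LLM to answer."""
    snippets = "\n".join(
        f"- {r.get('title', '')}: {r.get('content', '')}" for r in search(query)
    )
    prompt = (
        "Answer the question using only these search results:\n"
        f"{snippets}\n\nQuestion: {query}"
    )
    resp = requests.post(
        LLM_URL,
        json={
            "model": "llama3",  # placeholder model name
            "messages": [{"role": "user", "content": prompt}],
        },
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

print(answer("What is retrieval-augmented generation?"))
```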

Project is Not Completely Local Nor Private #17

Open joeyame opened 6 months ago

joeyame commented 6 months ago

The description claims the following:

> completely free, PRIVATE and LOCALLY running

Both points are lies. While the search is performed locally, piping all that data into OpenAI's servers is neither local nor private in the slightest. It's a cool idea, but at this point this repo doesn't provide anything more than me just using OpenAI's interface directly.

You do not get to claim that everything is local and private when you depend on an external web API. That goes against the whole meaning of those two words.

nashsu commented 6 months ago

I apologize if our description has caused any misunderstanding. You are right that relying on an external web API, in this case, OpenAI's, negates the notion of being entirely local and private.

This project was initially conceived as a proof of concept, an experiment coded in about three hours and then open-sourced. The original privacy focus was mostly on the search side, so I used the free online GPT-3.5 so that most users could run this without fancy hardware.

I'm working diligently to improve the project and plan to introduce more configuration options in the near future. These will allow users to choose a local LLM deployment (like llama.cpp or Ollama) for increased privacy and localization.

Thanks for the reminder; I'll keep updating and improving this project.
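For readers who want the local-LLM setup nashsu describes before it ships in the UI, a common pattern is to point an OpenAI-compatible client at a locally running Ollama server instead of OpenAI. The sketch below assumes Ollama's documented OpenAI-compatible endpoint and defaults; these are not settings taken from this project.

```python
# Minimal sketch: swap OpenAI for a locally running Ollama server.
# Uses the openai v1 client; the base URL is Ollama's documented
# OpenAI-compatible endpoint, not a setting from this project.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # required by the client, ignored by Ollama
)

reply = client.chat.completions.create(
    model="llama3",  # any model pulled locally, e.g. `ollama pull llama3`
    messages=[{"role": "user", "content": "Summarize these search results: ..."}],
)
print(reply.choices[0].message.content)
```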

nashsu commented 6 months ago

Actually, I've already completed the design of the whole new UI and am about to finish development. With the new web UI you can set the system to use a locally running LLM.

[demo screenshot]
keithorange commented 5 months ago

Yes, please support LLaMA and Mistral! Open models ASAP! I will not use OpenAI, because I want to be able to search anything!

automaton82 commented 5 months ago

Why not just use Flowise + Ollama? Flowise itself already has web scraping / searching ability, and Ollama can host any LLM, including Mistral, Gemma, etc.

I don't see what this project is doing that those two together cannot already do.

i486 commented 5 months ago

@nashsu

Can you add a guide on running it without Docker?

keithorange commented 5 months ago

> Why not just use Flowise + Ollama? Flowise itself already has web scraping / searching ability, and Ollama can host any LLM, including Mistral, Gemma, etc.
>
> I don't see what this project is doing that those two together cannot already do.

Flowise is so confusing compared to simple, clean Python APIs! But if you say so, I will now learn Flowise!

automaton82 commented 5 months ago

> > Why not just use Flowise + Ollama? Flowise itself already has web scraping / searching ability, and Ollama can host any LLM, including Mistral, Gemma, etc. I don't see what this project is doing that those two together cannot already do.
>
> Flowise is so confusing compared to simple, clean Python APIs! But if you say so, I will now learn Flowise!

Hm, not sure; I guess that's a matter of opinion. Here are the exact docs on doing Web Scrape QnA in Flowise; just swap the ChatGPT LLM for Ollama or LocalAI and you're good:

https://docs.flowiseai.com/use-cases/web-scrape-qna

The same flow is in the 'marketplace' in Flowise, and there are videos documenting how to do it too.
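To make the comparison concrete, here is a minimal sketch of querying a Flowise chatflow (such as the Web Scrape QnA flow from the linked docs) over Flowise's prediction REST API. Port 3000 is Flowise's default, and the chatflow ID is a placeholder you would copy from your own instance.

```python
# Minimal sketch: call a Flowise chatflow via its prediction REST API.
# Port 3000 is Flowise's default; the chatflow ID is a placeholder
# copied from your own instance (e.g. a Web Scrape QnA flow wired
# to Ollama instead of ChatGPT).
import requests

FLOWISE_URL = "http://localhost:3000/api/v1/prediction/<your-chatflow-id>"

resp = requests.post(
    FLOWISE_URL,
    json={"question": "What does this page say about pricing?"},
)
resp.raise_for_status()
print(resp.json())
```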