
An Obsidian plugin to interact with your privacy-focused AI assistant, making your second brain even smarter!
GNU Affero General Public License v3.0
![2-05](https://github.com/your-papa/obsidian-Smart2Brain/assets/48623649/0f9671ab-c39a-46f1-b3e8-bc045b578965)

Your Smart Second Brain

Your Smart Second Brain is a free and open-source Obsidian plugin to improve your overall knowledge management. It serves as your personal assistant, powered by large language models like ChatGPT or Llama2. It can directly access and process your notes, eliminating the need for manual prompt editing, and it can operate completely offline, ensuring your data remains private and secure.

S2B Chat

🌟 Features

📝 Chat with your Notes

🤖 Choose ANY preferred Large Language Model (LLM)

⚠️ Limitations

🔧 Getting started

[!NOTE]
If you use Obsidian Sync the vector store binaries might take up a lot of space due to the version history.
Exclude the .obsidian/plugins/smart-second-brain/vectorstores folder in the Obsidian Sync settings to avoid this.
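If you want to check how much space the vector store binaries currently occupy before excluding them, a quick check from the vault root looks like this (the path assumes the default plugin folder name mentioned above):

```shell
# Report the on-disk size of the plugin's vector stores (run from the vault root)
du -sh .obsidian/plugins/smart-second-brain/vectorstores
```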

Follow the onboarding instructions provided on initial plugin startup in Obsidian.
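If you opt for local models during onboarding, Ollama has to accept requests coming from the Obsidian desktop app. A minimal sketch (exact steps may differ per platform; `app://obsidian.md*` is the origin Obsidian plugins request from):

```shell
# Allow the Obsidian app's origin, then start the local Ollama server
OLLAMA_ORIGINS="app://obsidian.md*" ollama serve
```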

⚙️ Under the hood

Check out our Architecture Wiki page and our backend repo papa-ts.

🎯 Roadmap

🧑‍💻 About us

We initially made this plugin as part of a university project, which is now complete. However, we are still fully committed to developing and improving the assistant in our spare time. This repo and the papa-ts (backend) repo serve as an experimental playground for exploring state-of-the-art AI topics further and as a tool to enrich the Obsidian experience we're so passionate about. If you have any suggestions or wish to contribute, we would greatly appreciate it.

📢 You want to support?

❓ FAQ

Don't hesitate to ask your question in the Q&A

Are any queries sent to the cloud?

Queries are sent to the cloud only if you choose to use OpenAI's models. If you choose Ollama instead, your models run locally, so your data never leaves your machine and nothing is sent to any cloud service.

How does it differ from the SmartConnections plugin?

Our plugin is quite similar to Smart Connections. However, we keep improving it based on our own experience and the research we did at university.

For now, these are the main differences:

What models do you recommend?

OpenAI's models are still the most capable, especially "GPT-4" and "text-embedding-3-large". The best-performing local embedding model we have tested so far is "mxbai-embed-large".
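If you go the local route through Ollama, the recommended embedding model can be fetched ahead of time (assuming Ollama is already installed):

```shell
# Download the local embedding model so it is ready when the plugin first indexes your vault
ollama pull mxbai-embed-large
```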

Does it support multi-language vaults?

It’s supported, although response quality may vary depending on which prompt language is used internally (we will support more translations in the future) and on which models you use. It should work best with OpenAI's "text-embedding-3-large" model.