Your AI second brain. Get answers to your questions, whether they be online or in your own notes. Use online AI models (e.g. GPT-4) or private, local LLMs (e.g. Llama 3). Self-host locally or use our cloud instance. Access from Obsidian, Emacs, Desktop app, Web or WhatsApp.
To give users an alternative to generating embeddings on-device, add the option to use a Hugging Face Inference Endpoint in each search model config.
You can see the documentation for Hugging Face Inference Endpoints here.
We're prioritizing Hugging Face endpoints over OpenAI's, for instance, because the open source models provide many high-quality options.
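A minimal sketch of what calling such an endpoint for embeddings could look like. The function and parameter names are hypothetical, and only the stdlib is used here for illustration; the request/response shape follows the Hugging Face feature-extraction task (POST a JSON body with an `inputs` field, receive one vector per input text):

```python
import json
import urllib.request


def build_embedding_request(texts, endpoint_url, api_key):
    """Build the POST request a HF Inference Endpoint expects for feature extraction."""
    return urllib.request.Request(
        endpoint_url,
        data=json.dumps({"inputs": texts}).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


def embed_via_endpoint(texts, endpoint_url, api_key, timeout=30):
    """Return one embedding vector per input text, as decoded from the JSON response."""
    request = build_embedding_request(texts, endpoint_url, api_key)
    with urllib.request.urlopen(request, timeout=timeout) as response:
        return json.loads(response.read())
```

The search model config would then only need to store the endpoint URL and an API key alongside the existing model fields, and the embedding step could branch between local inference and `embed_via_endpoint` based on whether an endpoint is configured.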