Open LostFool opened 7 months ago
What other LLM providers were you looking to use? Or do you want to be able to use your own locally hosted model?
For tagging, ideally local (or remotely hosted), but Anthropic or Groq (the hardware company, not X's Grok) is always a consideration.
Try out Ollama; it runs locally and has a bunch of libraries. The Mistral and Meta 7B APIs via Hugging Face are pretty decent too.
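To make the Ollama suggestion concrete, here's a minimal Python sketch of tagging a note through Ollama's local `/api/generate` endpoint. The default port (11434) and the `mistral` model name are assumptions; swap in whatever model you've pulled.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint


def build_tag_prompt(note_text: str) -> str:
    """Build a simple tagging prompt: note in, comma-separated tags out."""
    return (
        "Suggest 3-5 short topic tags for the following note. "
        "Reply with only a comma-separated list of tags.\n\n" + note_text
    )


def parse_tags(raw: str) -> list[str]:
    """Split the model's comma-separated reply into clean, lowercase tags."""
    return [t.strip().lower() for t in raw.split(",") if t.strip()]


def tag_note(note_text: str, model: str = "mistral") -> list[str]:
    """Send the tagging prompt to a locally running Ollama instance."""
    payload = json.dumps({
        "model": model,
        "prompt": build_tag_prompt(note_text),
        "stream": False,  # return one JSON object instead of a token stream
    }).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return parse_tags(body["response"])


if __name__ == "__main__":
    print(tag_note("Notes from today's meeting about Q3 hiring plans."))
```

No API key, no cost, and the note text never leaves your machine.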
I've got that one on my list. I also like the LibreChat interface. My real problem is time: a 50-hour-a-week job plus a baby = no time. I just watch all this AI stuff, getting anxious, wanting to build things, haha.
Going to try to add support for
Would like to look into supporting locally hosted models after that.
Thank you for all the feedback!
The latest release comes with support for the Mistral AI Small and Large models. Will hopefully be adding support for Groq and Anthropic soon.
This plugin would be perfect for locally hosted models. I don't feel like tagging is a complex GPT task: info in, minimal summary out, a perfect fit. Even weak models should do an okay job, and their output can easily be cleaned up afterwards. That would eliminate both the cost barrier and the privacy concerns.
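The "cleaned up afterwards" step is easy to mechanize, too. Here's a rough sketch of normalizing whatever a weak model emits into Obsidian-friendly tags; the allowed-character rule is an assumption based on Obsidian's tag format (letters, digits, hyphens, underscores, and slashes):

```python
import re


def normalize_tags(raw_tags: list[str]) -> list[str]:
    """Clean up model-suggested tags: lowercase, hyphenate spaces,
    strip characters Obsidian tags don't allow, and dedupe in order."""
    seen: set[str] = set()
    cleaned: list[str] = []
    for tag in raw_tags:
        tag = tag.strip().lower().lstrip("#")
        tag = re.sub(r"\s+", "-", tag)           # spaces -> hyphens
        tag = re.sub(r"[^a-z0-9_/\-]", "", tag)  # drop disallowed characters
        if tag and tag not in seen:
            seen.add(tag)
            cleaned.append(tag)
    return cleaned


print(normalize_tags(["#Machine Learning", "machine-learning", "Q3 Plans!", ""]))
# -> ['machine-learning', 'q3-plans']
```

So even a sloppy 7B model's output ends up usable.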
Just added support for GPT-4o
Can you please provide support for GPT-4o mini? It's super cheap and fast!! https://openai.com/index/gpt-4o-mini-advancing-cost-efficient-intelligence/
Thank You
Just added support for GPT-4o mini :), sorry for the hold up
I'm running Ollama locally and would really like to be able to use it as a back end. I can put the URL into the custom base URL field; it looks like all you would need to do is pull my list of models (Ollama is compatible with the OpenAI API).
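For reference, Ollama exposes an OpenAI-compatible `GET /v1/models` endpoint, so pulling the local model list is one small request. A sketch (the default port is an assumption, and the response shape follows OpenAI's model-list format):

```python
import json
import urllib.request


def extract_model_ids(models_response: dict) -> list[str]:
    """Pull the model names out of an OpenAI-style /v1/models response."""
    return [m["id"] for m in models_response.get("data", [])]


def list_local_models(base_url: str = "http://localhost:11434/v1") -> list[str]:
    """Fetch the models available on a locally running Ollama instance."""
    with urllib.request.urlopen(base_url + "/models") as resp:
        return extract_model_ids(json.load(resp))


if __name__ == "__main__":
    print(list_local_models())  # e.g. ['mistral:latest', 'llama3:8b']
```

The plugin could populate its model dropdown from exactly this call whenever a custom base URL is set.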
New models were added in the most recent update, along with support for Ollama. Looks like there are some bugs with Ollama, but I will get those ironed out within the next few days.
Can we use other APIs and LLMs? I'm trying not to get stuck in the OpenAI world.