Closed Willxiam closed 4 months ago
Thanks for the info! Will check it out.
I believe it will work if you follow the LiteLLM/Ollama settings in the guide: put the name of the model that you run in Jan under "Chat: OpenAI (or compatible) custom model ID", and set the endpoint to http://localhost:1337/v1/chat/completions
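For anyone who wants to sanity-check the endpoint before wiring up the plugin, here is a minimal sketch of querying Jan's local OpenAI-compatible server from Python. Assumptions: Jan's API server is enabled on its default port 1337, and `MODEL` is a placeholder you should replace with the ID of whatever model you actually loaded in Jan.

```python
# Minimal sketch: talk to Jan's local OpenAI-compatible chat endpoint.
# JAN_URL matches the endpoint from the guide; MODEL is a placeholder ID.
import json
import urllib.request

JAN_URL = "http://localhost:1337/v1/chat/completions"
MODEL = "mistral-ins-7b-q4"  # placeholder -- use your model's ID from Jan


def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Request body in the shape an OpenAI-compatible endpoint expects."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt: str, url: str = JAN_URL) -> str:
    """POST the request and pull the assistant's reply out of the response."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # OpenAI-compatible responses put the reply text here:
    return body["choices"][0]["message"]["content"]
```

If `chat("hello")` returns text, the endpoint is good and the same URL/model ID should work in the plugin settings.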
Well that does not seem too hard. I hope to try it out but probably not until next week.
First thanks for the energy and effort that has been put into this plugin.
I found two additional local open-source options.
I discovered https://github.com/janhq/jan when I was researching LM Studio. I came across it in an LM Studio post on Reddit that both praised LM Studio and raised concerns about its future as it goes increasingly commercial.
Someone from the jan.ai team also posted this https://twitter.com/janframework/status/1745472833579540722?t=osxIAvq8ztXuDbNAm11thA praising Ava
I just wanted to put these on people's radar.
I am a little out of my element here, but I do hope to figure out a way to make this plugin work with my Joplin using the guide.
Edited: corrected links