Closed by caufieldjh 3 months ago
`llm` is an awesome library: https://llm.datasette.io/
It was originally built for command-line usage (fun and really handy for testing and quick ad hoc queries).
It now provides a first-class Python API: https://llm.datasette.io/en/stable/python-api.html
It has a really nice plugin architecture, with lots of open models available as plugins: https://llm.datasette.io/en/stable/plugins/index.html
Plugins here: https://github.com/simonw/llm-plugins
The API abstracts away implementation-specific differences between models.
We should also consider making our logging consistent with it, or just using its logging directly: https://llm.datasette.io/en/stable/logging.html
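For context, the Python API is small; here is a minimal sketch of what a call path through it could look like. The `extract` helper and the prompt template are hypothetical, not part of ontoGPT or `llm`; `llm.get_model()`, `model.prompt()`, and `response.text()` are from the library's documented API.

```python
# Sketch of an extraction call through the llm Python API.
# Assumptions: the `llm` package is installed and the named model is
# available (e.g. via an installed plugin). The prompt template and
# helper names below are illustrative only.

def build_prompt(text: str) -> str:
    """Assemble a simple extraction prompt (illustrative template)."""
    return "List the diseases mentioned in the following text:\n" + text

def extract(text: str, model_name: str = "orca-mini-3b") -> str:
    """Run the prompt against any model registered with llm."""
    import llm  # deferred import so build_prompt works without llm installed

    model = llm.get_model(model_name)            # resolves plugin models too
    response = model.prompt(build_prompt(text))
    return response.text()
```

Because `llm` resolves model names through its plugin system, swapping `model_name` between a remote API model and a local plugin-provided one requires no other code changes, which is exactly the abstraction being discussed here.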
On Fri, Jun 23, 2023 at 3:14 PM Harry Caufield wrote:
From #70 https://github.com/monarch-initiative/ontogpt/issues/70:
Currently, I cannot use ontoGPT in my organization as it is due to security concerns around external APIs like OpenAI's. To address this issue, I suggest exploring the possibility of using open-source, local LLMs and their APIs as an alternative to OpenAI's API.
I have a similar use case, however, could it ever be abstracted even further, with an option to use in-house LLM services? We are currently working on such a ChatGPT alternative, and it would be amazing to simply plug in that API for ontoGPT...
Originally posted by @remerjohnson in https://github.com/monarch-initiative/ontogpt/issues/70#issuecomment-1604285035
Just based on the way this API is set up, we can move away from some of the `langchain`-driven implementation to this.
Going to prioritize this, since I've found that some extractions, like disease or phenotype, work surprisingly well with a small model (like `orca-mini-3b`) and `llm`. Not sure what was going wrong with how `langchain` interfaced with the `gpt4all` models, but I'm seeing some really usable results now.
Now supported by #373