The discovery for versions is much worse imho 😟
For the bigger picture of supporting more combinations of Provider / Model it looks like a good idea. It is still quite abstract for me to see how this improves the library in the future, but when looking at the linked pages ... yeah. Good idea.

But why then the namespace `Language`? Is it meant for models working with language tokens, leaving the door open for other namespaces for models working with other kinds of data on a Platform? Just asking for the further considerations behind this decision.
> The discovery for versions is much worse imho 😟

There is still autocomplete/discovery possible. You don't have to choose between different `Version` classes, but can use the constants of the class - which can be referenced in YAML as well.
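A minimal sketch of that idea, assuming hypothetical class and constant names (not the actual library API):

```php
<?php

// Minimal placeholder so the sketch is self-contained; the real library
// has its own Platform abstraction.
interface Platform
{
}

// Hypothetical model class (names are assumptions): supported versions are
// plain string constants, so IDE autocomplete/discovery still works without
// a separate Version class per vendor.
final class Gpt
{
    public const GPT_4O = 'gpt-4o';
    public const GPT_4O_MINI = 'gpt-4o-mini';

    public function __construct(
        private readonly Platform $platform,
        private readonly string $version = self::GPT_4O,
    ) {
    }
}
```

Since these are plain constants, a Symfony-style YAML config can reference the same value with the `!php/const` tag, e.g. `!php/const Gpt::GPT_4O_MINI`.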
> But why then the namespace Language?

It's referring to the interfaces at the first level (`LanguageModel` vs. `EmbeddingsModel`) and basically only meant to have a substructure.
Thought about supporting more categories, but would rather extend the catalog and features for those categories before adopting more.
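Purely for illustration, that substructure could look roughly like this - the namespace and class names are assumptions based on this thread, not the actual layout:

```php
<?php

// Illustrative only: the first namespace level mirrors the interfaces,
// concrete vendor models sit below it.

namespace Model\Language {
    interface LanguageModel
    {
    }

    final class Gpt implements LanguageModel
    {
    }
}

namespace Model\Embeddings {
    interface EmbeddingsModel
    {
    }

    final class OpenAIEmbeddings implements EmbeddingsModel
    {
    }
}
```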
Thanks for the feedback guys 🙏
Maybe as a last addition: my goal would be to bring the ease of using different models, like those platforms (Replicate, Hugging Face, etc.) provide, to this lib... But it's not necessary to merge this now without that feature - it might become a larger feature branch.
What I meant with the version class discovery is: if I end up with one version class containing Sonnet and GPT-4o, for example, what prevents me from setting up my OpenAI LLM platform with Sonnet?
That's not the case here. I switched from an additional `Version` VO to plain string constants in the corresponding model.

And also before that, the string was not validated - I could have entered whatever I wanted in the version object.
Ok 👍🏻
But now one can (like before) only use a new model when updating the lib 😟
> But now one can (like before) only use a new model when updating the lib 😟
That would be a show stopper, true, but I can still use `new Gpt($platform, 'my-finetuned-model')` - or am I missing something?
Hmm, right, I can't indicate the capabilities of the model, like image, tool or structured output support ... 🤔
Bottom line: good point 😆 👍
What about dedicated model classes which know if they support a platform?
You know what I mean? I am currently on a phone and cannot provide a code example.
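To make the idea concrete, here is a rough sketch of a dedicated, capability-aware model class - all names are hypothetical and not the library's API:

```php
<?php

// Hypothetical capability-aware model class (names are assumptions): it knows
// which platforms it supports, so a mismatch like "OpenAI platform with a
// Sonnet model" fails early instead of at request time.
interface Platform
{
    public function getName(): string;
}

final class Claude3Sonnet
{
    private const SUPPORTED_PLATFORMS = ['anthropic', 'bedrock'];

    public function __construct(private readonly Platform $platform)
    {
        if (!in_array($platform->getName(), self::SUPPORTED_PLATFORMS, true)) {
            throw new \InvalidArgumentException(sprintf(
                'Claude 3 Sonnet is not available on platform "%s".',
                $platform->getName(),
            ));
        }
    }

    // Capability flags the chain could check before sending a request.
    public function supportsToolCalling(): bool
    {
        return true;
    }

    public function supportsImageInput(): bool
    {
        return true;
    }

    public function supportsStructuredOutput(): bool
    {
        return true;
    }
}
```

The upside would be that the earlier question ("what prevents me from setting up my OpenAI platform with Sonnet?") gets answered at construction time rather than by a failing API call.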
Yup, same here - not really at my desk today. Still thinking tho about the different characteristics per model.
For the first two items we can provide some helpers - DX gimmicks - but we shouldn't be too strict, since validation and change scenarios are not ours to own. For the last two, change scenarios and validation are not owned by us either, but the main features of the lib rely on them. Just writing that down to reflect and think - no argument or action item, only to make those concerns more visible, maybe even in code.
With #140 there is a new `Model` interface that introduces `getVersion()` + `getOptions()`, and in combination with the `LLM` interface that formalizes the capabilities, I think the concerns of this PR are addressed.
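A hedged sketch of how those two interfaces could fit together - the exact signatures in #140 may differ:

```php
<?php

// Sketch only: method names follow this thread (getVersion/getOptions plus
// formalized capabilities); the actual interfaces in #140 may look different.
interface Model
{
    public function getVersion(): string;

    /** @return array<string, mixed> default options passed along to the platform */
    public function getOptions(): array;
}

interface LLM extends Model
{
    public function supportsImageInput(): bool;

    public function supportsToolCalling(): bool;

    public function supportsStructuredOutput(): bool;
}
```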
Based on the model listings of Azure, Ollama or Hugging Face, the library's structure/namespaces will scale better over time when not using vendor names like `Anthropic` or `OpenAI` as top-level namespaces, but rather structural ones like `Platform` and `Model`.

I decided against a tree-like structure in favor of splitting into those two namespaces, since it is more like a matrix when looking at providers that support multiple different LLMs.
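To illustrate the matrix idea with hypothetical classes (none of these namespaces or constructors are the final API): the same model class can be combined with different platform classes instead of being nested under a single vendor namespace.

```php
<?php

// Hypothetical classes for illustration only - the point is that Platform and
// Model live in separate namespaces and can be combined like a matrix.

namespace Platform {
    interface Platform
    {
    }

    final class OpenAI implements Platform
    {
        public function __construct(private readonly string $apiKey)
        {
        }
    }

    final class Azure implements Platform
    {
        public function __construct(
            private readonly string $endpoint,
            private readonly string $apiKey,
        ) {
        }
    }
}

namespace Model\Language {
    use Platform\Platform;

    final class Gpt
    {
        public function __construct(
            private readonly Platform $platform,
            private readonly string $version = 'gpt-4o',
        ) {
        }
    }
}

namespace {
    use Model\Language\Gpt;
    use Platform\Azure;
    use Platform\OpenAI;

    // GPT via OpenAI directly ...
    $gpt = new Gpt(new OpenAI('sk-...'));

    // ... or the very same model class on an Azure deployment.
    $gptOnAzure = new Gpt(new Azure('https://example.openai.azure.com', 'azure-key'));
}
```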
I'm aware that this will cause larger pain for downstream dependents and might combine it with a further refactoring of the `Document`, `Message` and `Response` classes.