Closed by markwallace-microsoft 7 months ago
So... If I understand correctly, you prefer one connector project per model (or model family), even if it has multiple deployment types, rather than one project per deployment type where a single project contains multiple models?
As I understand it, there is one Connector per platform/API definition, so different connectors can support the same model on their respective platforms, e.g.:
OpenAI/LMStudio -> Phi-2
HuggingFace -> Phi-2
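The mapping above can be sketched as a small abstraction. This is a hedged illustration only, not the actual Semantic Kernel API: the class and method names (`AIConnector`, `complete`, etc.) are hypothetical, chosen to show the idea of one connector per platform/API definition, with the same model (Phi-2) reachable through more than one connector.

```python
# Hypothetical sketch of "one connector per platform/API definition".
# None of these names come from Semantic Kernel; they are illustrative only.
from abc import ABC, abstractmethod


class AIConnector(ABC):
    """One connector per platform/API definition, not per model."""

    @abstractmethod
    def complete(self, model: str, prompt: str) -> str: ...


class OpenAICompatibleConnector(AIConnector):
    # Covers OpenAI-compatible endpoints, e.g. LM Studio serving Phi-2.
    def complete(self, model: str, prompt: str) -> str:
        return f"[openai-compatible:{model}] {prompt}"


class HuggingFaceConnector(AIConnector):
    # Covers the Hugging Face API, which can also serve Phi-2.
    def complete(self, model: str, prompt: str) -> str:
        return f"[huggingface:{model}] {prompt}"


# The same model is reachable through two different platform connectors.
for connector in (OpenAICompatibleConnector(), HuggingFaceConnector()):
    print(connector.complete("phi-2", "Hello"))
```

Under this reading, adding support for a new model on an existing platform requires no new project, while a new platform/API gets its own connector project.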
@Krzysztof318, I will review your PR today and give you better feedback. It is a big one, so it may be graduated to a feature branch later on.
@RogerBarreto I would appreciate a dedicated branch; it will also be more convenient for me to maintain small PRs.
The purpose of this task is to create an ADR to describe our AI Connector strategy and how new AI Connectors can be contributed.
Below are a series of user stories that must be addressed by the ADR.
Strategy
As a developer using Semantic Kernel, I can use AI Connectors that access LLMs deployed in the cloud (e.g. OpenAI, Azure OpenAI, Hugging Face, ...) or deployed locally, so that I can configure my application to use the optimum LLM or to run completely offline.
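This user story can be illustrated with a minimal selection sketch. Everything here is assumed for illustration: the `make_connector` function, the settings keys, and the returned labels are hypothetical, showing only how one application configuration could switch between a cloud-hosted LLM and a fully offline local deployment.

```python
# Hypothetical sketch: the same application chooses a cloud or local
# connector from configuration. Names and keys are illustrative only.

def make_connector(settings: dict) -> str:
    # Return a label describing which connector/endpoint would be used.
    if settings.get("deployment") == "local":
        # e.g. an OpenAI-compatible local server such as LM Studio (offline).
        return f"local:{settings['endpoint']}"
    # e.g. OpenAI, Azure OpenAI, Hugging Face in the cloud.
    return f"cloud:{settings['service']}"


print(make_connector({"deployment": "local", "endpoint": "http://localhost:1234"}))
print(make_connector({"deployment": "cloud", "service": "azure-openai"}))
```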
Contributing an AI Connector
As a contributor to Semantic Kernel, I can read the documentation which describes how to contribute an AI Connector, so that I can understand the development process and the requirements for having a new AI Connector included in a Semantic Kernel release.
P0 LLMs
Deployment types