Open · MichaelSchmid-AGI opened 1 week ago
Hi @MichaelSchmid-AGI, thanks for reaching out.
I take it this feature request is about supporting more direct LLM consumption. I'm not sure whether we would reuse the same AzureOpenAiChatClient for that, though.
Are you aware of our orchestration package @sap-ai-sdk/orchestration and the orchestration service?
If you want to consume other LLMs, you can already use the harmonised API offered by the orchestration service.
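For reference, a minimal sketch of consuming a non-OpenAI model through the orchestration service. The config shape follows the current @sap-ai-sdk/orchestration docs as I understand them, and the model name meta--llama3-70b-instruct is an assumed example, not something stated in this thread:

```ts
import { OrchestrationClient } from '@sap-ai-sdk/orchestration';

// Sketch: the harmonised orchestration API selects a model by name,
// independent of the underlying provider.
// 'meta--llama3-70b-instruct' is an assumed example model name.
const client = new OrchestrationClient({
  llm: {
    model_name: 'meta--llama3-70b-instruct',
    model_params: { max_tokens: 256 }
  },
  templating: {
    template: [{ role: 'user', content: 'Hello, {{?name}}!' }]
  }
});

const response = await client.chatCompletion({
  inputParams: { name: 'world' }
});
console.log(response.getContent());
```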
Describe the Problem
Hi there, I am struggling to get LLMs running that aren't from OpenAI with your LangChain module. By default you seem to filter out LLMs whose deployments sit under a different "executableId" than "azure-openai".
This makes using different models seemingly impossible (for now).
Propose a Solution
I would suggest allowing an executableId to be passed when initializing a LangChain chat client.
This should enable the use of other models quite easily without having to rewrite much, as sketched below.
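A hypothetical sketch of what that could look like. The executableId option does not exist today, and both the option name and the executable value 'aicore-opensource' are assumptions for illustration; the rest of the constructor shape follows the existing AzureOpenAiChatClient:

```ts
import { AzureOpenAiChatClient } from '@sap-ai-sdk/langchain';

// Hypothetical: 'executableId' is the proposed new option, not an existing one.
// The deployment would then be resolved under that executable instead of
// being hard-filtered to 'azure-openai'.
const client = new AzureOpenAiChatClient({
  modelName: 'meta--llama3.1-70b-instruct', // assumed model name
  executableId: 'aicore-opensource'         // proposed option (hypothetical value)
});

// Standard LangChain usage would stay unchanged.
const result = await client.invoke('Hello!');
console.log(result.content);
```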
The chat-completions API (for example when using Llama or Mixtral) appears to accept payloads identical to those that work with OpenAI models:
POST {baseurl}/v2/inference/deployments/{deploymentID}/chat/completions
Body for GPT 4o / Body for Llama 3.1 Instruct (original example payloads not reproduced)
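To illustrate the point, a minimal chat-completions body of the kind both model families accept against the endpoint above. This is an assumed OpenAI-style payload, not the screenshots from the original report:

```ts
// Illustrative OpenAI-style chat-completions body (assumption: both GPT-4o
// and Llama 3.1 Instruct deployments accept this shape on the endpoint above).
const body = {
  messages: [
    { role: 'user', content: 'Hello, who are you?' }
  ],
  max_tokens: 100
};
```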
Describe Alternatives
No response
Affected Development Phase
Development
Impact
Inconvenience
Timeline
No response
Additional Context
No response