Closed Alerinos closed 1 week ago
@Alerinos what version of Semantic Kernel are you using?
@markwallace-microsoft 1.0.0-rc4
@markwallace-microsoft Do we have more information on this topic? I'm wondering whether to expect a fix or abandon SK and use the direct API.
@Alerinos, this should work. Looking at your code, it looks like you're only adding a single model (gpt-3.5-turbo-1106); are you also adding one for gpt-4-vision-preview? If not, the default AI service selector will fall back to whatever is registered.
If the default selection service isn't good enough, you can also create your own: https://github.com/microsoft/semantic-kernel/blob/main/dotnet/samples/KernelSyntaxExamples/Example62_CustomAIServiceSelector.cs
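Following the linked sample, a custom selector is a class implementing `IAIServiceSelector`. The sketch below (assumptions: the interface shape matches the SK 1.x GA API and may differ slightly in pre-release builds such as 1.0.0-rc4; the class name `ModelIdServiceSelector` is hypothetical) prefers the registered service whose model id matches the prompt's execution settings:

```csharp
using System.Linq;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Services;

// Sketch: pick the registered service whose model id matches one of the
// prompt's execution settings; fall back to "no match" otherwise.
public sealed class ModelIdServiceSelector : IAIServiceSelector
{
    public bool TrySelectAIService<T>(
        Kernel kernel, KernelFunction function, KernelArguments arguments,
        out T? service, out PromptExecutionSettings? serviceSettings)
        where T : class, IAIService
    {
        foreach (var settings in function.ExecutionSettings?.Values
                 ?? Enumerable.Empty<PromptExecutionSettings>())
        {
            foreach (var candidate in kernel.GetAllServices<T>())
            {
                if (candidate.GetModelId() == settings.ModelId)
                {
                    service = candidate;
                    serviceSettings = settings;
                    return true;
                }
            }
        }

        service = null;
        serviceSettings = null;
        return false;
    }
}
```

You would then register it on the kernel builder, e.g. `builder.Services.AddSingleton<IAIServiceSelector>(new ModelIdServiceSelector());`, as the linked Example62 does.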
@matthewbolanos There are scenarios where he needs to use the better GPT-4 in one place and the cheaper GPT-3.5 in another. We should let him decide in config which model SK will use.
Yes, the way the system has been designed is that for each prompt, you can provide an ordered list of the models you want to use (which looks like what you are doing).
Within the kernel, you then want to add all the available models. During prompt invocation, the AI Service Selector will then use the configuration to match the prompt to the preferred model.
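Concretely, that pattern looks roughly like the sketch below (assumptions: `apiKey` is a placeholder, and the `AddOpenAIChatCompletion`/`CreateFunctionFromPrompt` overload shapes follow the SK 1.x GA API, which may differ slightly in rc4):

```csharp
using Microsoft.SemanticKernel;

// Register all available models on one kernel, each under its own service id.
var builder = Kernel.CreateBuilder();
builder.AddOpenAIChatCompletion(
    modelId: "gpt-3.5-turbo-1106", apiKey: apiKey, serviceId: "cheap");
builder.AddOpenAIChatCompletion(
    modelId: "gpt-4-vision-preview", apiKey: apiKey, serviceId: "vision");
var kernel = builder.Build();

// Give the prompt an ordered list of execution settings; at invocation time
// the AI service selector matches them against the registered services.
var function = kernel.CreateFunctionFromPrompt(
    "Describe this image: {{$input}}",
    new PromptExecutionSettings[]
    {
        new() { ModelId = "gpt-4-vision-preview" }, // preferred
        new() { ModelId = "gpt-3.5-turbo-1106" },   // fallback
    });
```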
@matthewbolanos
My impression is that it doesn't work as it should.
How can I get the model id from the result when `Value` is just a dynamic object? Why doesn't SK return model constants? Why does it have to be so difficult? To retrieve the used tokens, I had to spend two hours looking for a way...
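For reference, the model id and token usage come back in the result metadata rather than as strongly typed properties, which is the friction being described. A sketch (assumptions: `kernel` is already built with an OpenAI chat service; the `"Usage"` metadata key and the `CompletionsUsage` type are what the OpenAI connector emitted in SK 1.x and may vary by version):

```csharp
using System;
using Azure.AI.OpenAI;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var chat = kernel.GetRequiredService<IChatCompletionService>();
var result = await chat.GetChatMessageContentAsync("Hello");

// Which model actually served the call.
string? modelId = result.ModelId;

// Token usage is tucked into the metadata dictionary under "Usage".
if (result.Metadata?.TryGetValue("Usage", out var usageObj) == true
    && usageObj is CompletionsUsage usage)
{
    Console.WriteLine(
        $"{modelId}: {usage.PromptTokens} prompt / {usage.CompletionTokens} completion tokens");
}
```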
Currently, `ModelId` in the execution settings is mostly used for kernel invocations, where the value of this property acts as a selection filter to find the registered service matching the given model id.
Our recommendation is to register a different service per model and pick the one you want to target.
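With one service registered per model, that filter can be applied per call by passing execution settings through the kernel arguments. A minimal sketch (assuming a `kernel` with a `gpt-4-vision-preview` service already registered):

```csharp
using Microsoft.SemanticKernel;

// ModelId in the execution settings acts as the selection filter:
// the invocation is routed to the service registered for that model.
var args = new KernelArguments(
    new PromptExecutionSettings { ModelId = "gpt-4-vision-preview" });

var answer = await kernel.InvokePromptAsync("What is in {{$image}}?", args);
```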
When using GetChatMessageContentAsync, I want to change the model in the settings. I am using DI, and it registers the default model along with the API token. I have a case where one piece of functionality needs to use GPT-4 Vision; unfortunately, if I change the model in the settings, I get this message:
Code:
DI:
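The code and error message weren't included above. As a general pattern for this situation (a hedged sketch, not the original code; `apiKey` and the service ids are placeholders, and the registration extensions follow the SK 1.x GA API), registering each model under its own service id and resolving the one you need avoids changing settings on a single default registration:

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.ChatCompletion;

var services = new ServiceCollection();
services.AddKernel();
services.AddOpenAIChatCompletion(
    "gpt-3.5-turbo-1106", apiKey, serviceId: "default");
services.AddOpenAIChatCompletion(
    "gpt-4-vision-preview", apiKey, serviceId: "vision");

var provider = services.BuildServiceProvider();
var kernel = provider.GetRequiredService<Kernel>();

// Resolve the vision-capable service explicitly for the one
// feature that needs it, instead of mutating shared settings.
var visionChat = kernel.GetRequiredService<IChatCompletionService>("vision");
```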