Closed: timothywarner closed this issue 1 week ago
I think you logged this issue against the wrong product. This repo has nothing to do with GitHub Copilot; it's a service that demonstrates the RAG pattern using Semantic Kernel.
In this service, you can choose your LLM by setting the `AzureOpenAIText.Deployment` value to whichever deployment you prefer.
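For illustration, a minimal sketch of what that setting might look like in an `appsettings.json`-style file. Only the `AzureOpenAIText.Deployment` key comes from the comment above; the surrounding nesting, the `Endpoint` key, and the deployment name `gpt-4` are assumptions, and the exact structure depends on the service's actual configuration schema.

```json
{
  "AzureOpenAIText": {
    "Endpoint": "https://<your-resource>.openai.azure.com/",
    "Deployment": "gpt-4"
  }
}
```

The value of `Deployment` must match the name of a model deployment you have created in your Azure OpenAI resource.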
**Describe the bug**
Until today I was able to gain a reasonable level of insight into how GitHub Copilot Chat behaves under the hood by inspecting the GitHub Copilot Chat extension log stream. The output now appears stripped of metadata such as the model chosen, tokens consumed, and so on.
Because I pay for GitHub Copilot Enterprise, I expect GPT-4 to be used extensively, if not exclusively. Until today, the Chat extension log stream showed many references to gpt-35-turbo.
**To Reproduce**
Steps to reproduce the behavior:
**Expected behavior**
I expected to see more detailed metadata concerning my client connection to the GitHub Copilot API.
**Screenshots**
N/A
**Platform**
**Additional context**
I'm not happy paying for GitHub Copilot's highest-priced SKU while being served weaker GPT models.