microsoft / chat-copilot

MIT License
1.92k stars 650 forks

Chat extension log output is not as informative as it used to be #1019

Closed timothywarner closed 1 week ago

timothywarner commented 1 week ago

Describe the bug
Until today I was able to gain a reasonable level of insight into how GitHub Copilot/Chat behaves under the hood by inspecting the GitHub Copilot Chat extension log stream. The output now appears stripped of metadata like the model chosen, tokens consumed, and so forth.

Because I pay for GitHub Copilot Enterprise, I expect to use GPT-4 pretty extensively if not exclusively. Until today I'd see many, many references to gpt-35-turbo in the Chat extension log stream.

To Reproduce
Steps to reproduce the behavior:

  1. Start a turn-based chat in the Chat pane.
  2. Click View > Output to open the Output panel.
  3. Open the log selector drop-down list and select GitHub Copilot Chat from the list.
  4. View the log contents and compare with output from, say, two weeks ago.

Expected behavior
I expected to see more detailed metadata concerning my client connection to the GitHub Copilot API.

Screenshots
N/A

Platform

Additional context
I'm not happy paying for GitHub Copilot's highest stock keeping unit (SKU) while being served weaker GPT models.

glahaye commented 1 week ago

I think you logged this issue against the wrong product. This repo has nothing to do with GitHub Copilot. Rather, it's a service that demonstrates the RAG pattern using Semantic Kernel.

In this service, you can choose your LLM by setting the AzureOpenAIText.Deployment value to whichever Azure OpenAI deployment you want.
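For reference, a minimal sketch of what that setting might look like in the service's appsettings.json. The surrounding "KernelMemory"/"Services" nesting and the "gpt-4" deployment name shown here are assumptions for illustration; check your own configuration file for the exact structure and use the deployment name defined in your Azure OpenAI resource:

```json
{
  "KernelMemory": {
    "Services": {
      "AzureOpenAIText": {
        "Deployment": "gpt-4"
      }
    }
  }
}
```

Hierarchical keys like this can also typically be overridden without editing the file, e.g. via an environment variable or dotnet user-secrets, following standard ASP.NET Core configuration conventions.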