microsoft / vscode-copilot-release

Feedback on GitHub Copilot Chat UX in Visual Studio Code.
https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat

Allow better logging/debugging #1253

Closed frblondin closed 3 months ago

frblondin commented 3 months ago

Type: Feature

I am currently working on an extension which adds chat variables. Various bugs have been raised about this (#1066, #971, #962...), and there is no way I know of to check the prompt message being produced and sent to the LLM. If my variable produces 10 ChatVariableValue instances, there is no way for me to check which of them were used or dropped (all? none?).

I suggest adding Debug-level logs in <vs log folder>\exthost\GitHub.copilot-chat\GitHub Copilot Chat.log showing the raw message sent to the LLM... or, if there is secret magic you want to hide, at least the part that contains the information taken from variables.

Of course, this feature request doesn't replace the existing one asking for a clearer UI showing what concrete variable data was sent... but I believe such logs would be very simple to provide in the meantime.
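Until then, the best workaround I've found is logging what my resolver returns myself. A rough sketch: the channel name and `buildValues` helper are made up, and the resolver types come from the proposed `chatVariableResolver` API, which may still change:

```typescript
import * as vscode from 'vscode';

// A LogOutputChannel honors the user's log-level setting (Debug, Trace, ...).
const log = vscode.window.createOutputChannel('My Chat Variables', { log: true });

// Resolver shape from the proposed `chatVariableResolver` API.
const resolver: vscode.ChatVariableResolver = {
    resolve(name, context, token) {
        const values: vscode.ChatVariableValue[] = buildValues();
        // Copilot Chat's log does not show the final prompt, so at least
        // record exactly what this variable handed over.
        log.debug(`#${name} resolved to ${values.length} value(s)`);
        values.forEach((v, i) =>
            log.debug(`  [${i}] level=${v.level} value=${String(v.value).slice(0, 100)}`));
        return values;
    },
};

// Hypothetical helper producing the variable's values.
function buildValues(): vscode.ChatVariableValue[] {
    return [{ level: vscode.ChatVariableLevel.Full, value: 'my variable data' }];
}
```

But this only shows what my extension hands to Copilot Chat, not what actually lands in the prompt, which is exactly the gap this request is about.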

Extension version: 0.16.2024052801
VS Code version: Code - Insiders 1.90.0-insider (b9fa819edd893c590b28f3a09b75a81513e12fb9, 2024-05-24T17:21:19.268Z)
OS version: Windows_NT x64 10.0.19045
Modes:

System Info

|Item|Value|
|---|---|
|CPUs|13th Gen Intel(R) Core(TM) i9-13950HX (32 x 2419)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: enabled_on<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: enabled<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: enabled|
|Load (avg)|undefined|
|Memory (System)|63.61GB (35.12GB free)|
|Process Argv|C:\\Temp\\vscode-extension-samples\\chat-sample --crash-reporter-id ae3d8212-7e85-4d29-be44-058116e4fe47|
|Screen Reader|no|
|VM|0%|
A/B Experiments
```
vsliv368:30146709 vspor879:30202332 vspor708:30202333 vspor363:30204092 vscod805cf:30301675 vsaa593:30376534 py29gd2263:31024238 c4g48928:30535728 a9j8j154:30646983 962ge761:30841072 pythongtdpath:30726887 welcomedialog:30812478 pythonidxpt:30768918 pythonnoceb:30776497 asynctok:30898717 dsvsc013:30777762 dsvsc014:30777825 dsvsc015:30821418 pythontestfixt:30866404 pythonregdiag2:30926734 pythonmypyd1:30859725 pythoncet0:30859736 h48ei257:31000450 pythontbext0:30879054 accentitlementst:30870582 dsvsc016:30879898 dsvsc017:30880771 dsvsc018:30880772 cppperfnew:30980852 pythonait:30973460 showvideot:31016890 chatpanelt:31014475 bdiig495:31013172 a69g1124:31018687 dvdeprecation:31040973 pythonprt:31036556 dwnewjupytercf:31046870 nb_pri_only:31057983 26j00206:31048877
```
roblourens commented 3 months ago

Unfortunately we can't expose the raw prompt sent to the LLM, but we do try to show info about used variables to the user. A variable should show up in the "used references" section if it was able to fit into the prompt. If it wasn't, e.g. if it was too large, then it shouldn't show up there.

> If my variable produces 10 ChatVariableValue

The variable API is currently rough and not really ready for general use. Currently the resolver returns an array of values; the intent was to offer different size options, but only the first value will ever be used.
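To make that concrete, here is a minimal sketch of that shape; the type names follow the proposal as of this writing and may change:

```typescript
import * as vscode from 'vscode';

const resolver: vscode.ChatVariableResolver = {
    resolve(name, context, token) {
        // The proposal lets a resolver return several values at different
        // detail levels, the idea being that the consumer could pick the
        // largest one that fits its token budget...
        // ...but today only the first entry is ever used; the others are
        // ignored rather than tried as fallbacks.
        return [
            { level: vscode.ChatVariableLevel.Full, value: 'the complete data' },
            { level: vscode.ChatVariableLevel.Medium, value: 'a condensed form' },
            { level: vscode.ChatVariableLevel.Short, value: 'a one-line summary' },
        ];
    },
};
```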

frblondin commented 3 months ago

Thanks, very useful.

> If it wasn't, e.g. if it was too large, then it shouldn't show up there.

Can you define what 'too large' means? When using a variable, it is crucial to know why it would simply not be used.

It was really not obvious to me that only the first value is ever sent. I'd suggest making that clear in the only documentation I know of, here.

roblourens commented 3 months ago

> Can you define what 'too large' means?

Meaning it was tokenized and didn't fit into the context window of the model being used. Our context windows are a bit smaller than what you would get from ChatGPT right now.
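As a rough illustration of that kind of check (not Copilot's actual implementation; the `gpt-tokenizer` npm package and the 4,000-token budget are assumptions for the sketch):

```typescript
// npm install gpt-tokenizer
import { encode } from 'gpt-tokenizer';

// Hypothetical budget; the real limit depends on the model and on how much
// of the window is already taken by system text, history, and the question.
const VARIABLE_TOKEN_BUDGET = 4000;

function fitsInPrompt(variableValue: string): boolean {
    const tokens = encode(variableValue).length;
    console.log(`variable is ${tokens} tokens (budget ${VARIABLE_TOKEN_BUDGET})`);
    // A value over budget would be dropped entirely, which is why it then
    // does not appear in the "used references" section.
    return tokens <= VARIABLE_TOKEN_BUDGET;
}
```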