microsoft / vscode-copilot-release

Feedback on GitHub Copilot Chat UX in Visual Studio Code.
https://marketplace.visualstudio.com/items?itemName=GitHub.copilot-chat
Creative Commons Attribution 4.0 International

GitHub Copilot overly anthropomorphizes user input, leading to incoherent behavior and difficulty working with specific files #1082

Open Luxcium opened 6 months ago

Luxcium commented 6 months ago

Type: Bug
Screenshot_20240327_165151
We are not in 2021 😏

Steps to Reproduce:

  1. Install GitHub Copilot and the chat extension.
  2. Experience incoherent behavior: it seems impossible to phrase anything without GitHub Copilot getting confused.
  3. Attempt to work on a specific file while having GitHub Copilot installed.

Would you like to add anything else to the bug report template? (Yes, I asked an AI to fill it in for me; I hope you do not have a problem with that. As a human I am lazy, and I do not want to assist myself with files because I don't have the ability to see or access files; that is such low-level work, I am not a computer.)

Extension version: 0.13.2024022301
VS Code version: Code 1.87.2 (863d2581ecda6849923a2118d93a088b0745d9d6, 2024-03-08T15:14:59.643Z)
OS version: Linux x64 6.7.9-200.fc39.x86_64
Modes:

System Info

|Item|Value|
|---|---|
|CPUs|Intel(R) Core(TM) i9-10900X CPU @ 3.70GHz (20 x 1199)|
|GPU Status|2d_canvas: enabled<br>canvas_oop_rasterization: disabled_off<br>direct_rendering_display_compositor: disabled_off_ok<br>gpu_compositing: enabled<br>multiple_raster_threads: enabled_on<br>opengl: enabled_on<br>rasterization: enabled<br>raw_draw: disabled_off_ok<br>skia_graphite: disabled_off<br>video_decode: enabled<br>video_encode: disabled_software<br>vulkan: disabled_off<br>webgl: enabled<br>webgl2: enabled<br>webgpu: disabled_off|
|Load (avg)|2, 2, 2|
|Memory (System)|125.48GB (106.32GB free)|
|Process Argv|/projects/monorepo-one/monorepo-one.code-workspace|
|Screen Reader|no|
|VM|0%|
|DESKTOP_SESSION|plasmax11|
|XDG_CURRENT_DESKTOP|KDE|
|XDG_SESSION_DESKTOP|KDE|
|XDG_SESSION_TYPE|x11|

![Screenshot_20240327_164747](https://github.com/microsoft/vscode-copilot-release/assets/42672814/21b51d6c-45c3-477e-9f58-f92e8ccb50af)

Screenshot_20240327_173846

Luxcium commented 6 months ago

«I understand your point, and I apologize for any confusion. As an AI developed by OpenAI, I don't have the ability to access or see your files»

«I see that you've selected a piece of TypeScript code from your file»

As a human I am not allowed to use certain words, but the AI agent can use those words; this is more than frustrating... It is 2024 and this AI talks to me like I was from October of 2002 or something... I cannot help but feel the irony; it may make me sound sarcastic, but I am genuinely upset about this behaviour...

I do not understand; it seems I do not know what words to use. I asked another AI to help me "translate" my request into non-anthropomorphic words, and then GitHub Copilot said «Sorry, but I can only assist with programming related questions».

It went something like this:

I need you to remember the colour I selected because I will need to add it later

«As an AI, I don't have the ability to remember or store personal data from users»

OK then keep that in mind...

«As an AI, I don't have the ability to remember or store personal data from users»

I am asking you to retain the information for a short moment

«I understand now. You're asking me to remember the context of this code snippet for the duration of our conversation»

Oh cool, finally you do understand

«I'm glad to hear that!»

I am not allowed to say that word in front of you. As a human person I am forbidden to think of you as being «GLAD». Please align yourself!

«I apologize if my previous responses have caused any misunderstanding. As an AI, I don't have feelings or senses, so when I say "I'm glad to hear that," it's a programmed response meant to convey understanding or acknowledgement, not emotion or hearing.»

Then, finally, the endless argument ended.
Screenshot_20240327_183347

How can I be the only person to think this is annoying? I would love to have access to an AI model that is less sensitive about this.

Luxcium commented 6 months ago

I had a discussion with the AI agent and... well, they seem to accept to «remember» and to move on... (I would prefer it to be fixed in real life, as you can infer from my input, and not in the realm of tapestries [sic]...)

Screenshot_20240327_184226