Open mobynet1 opened 7 months ago
Using the chat:
Me: @workspace Help me to mock myFunction in the test?
CP: To mock the myFunction function, you can use a mocking library like Jest. Here's how you can do it: ...
Me: How can you suggest Jest when I'm clearly using Vitest? (I have a file full of Vitest tests open and Vitest all over the workspace.)
CP: I apologize for the oversight. If you're using Vitest, you can use spyOn to mock a function. Here's how you can do it: ...
Me: What's spyOn? Where did you get that from? Are you just making this up?
CP: I apologize for the confusion. spyOn is a function provided by some testing libraries like Jasmine and Jest, but it's not available in Vitest. I made a mistake in my previous response. ...
Me: Absolutely useless advice; mocking in Vitest is available via the vi namespace. (Yes, I'm getting annoyed here.)
CP: I apologize for the confusion earlier. You're correct, in Vitest you can use the vi namespace to mock modules. Here's how you can mock
Every time I use Copilot chat, it suggests things in JS when I'm using TS; it makes up answers and produces all kinds of randomness. But to be fair, the Ctrl + . inline code tool is so much better. Somehow the whole context window is wrong in the chat.
Totally agree. Copilot is horrible with its incorrect answers. I hope that will change in the future.
Thanks for the concrete example, @Psvensso. It's possible that in this case it didn't manage to pick up the right context. You can tell whether it actually had the right context (showing Vitest) by looking under Used References. Would also appreciate concrete examples from the others; otherwise this isn't an actionable issue.
Also, you guys need to change this rude behavior: every time you call Copilot out on its ignorance, you get a rude "It might be time to move onto a new topic. Let's start over." Yeah, you can't handle simple comments and critique, so you slam the door. Get real. When the same wrong answer has been given dozens of times, you know you are dealing with a stupid product. It is supposed to learn, but learning does not seem to be something Copilot can do. At least Cody will learn for that session (it forgets in subsequent, new chats), but it does try, and it can take criticism like a champ. It doesn't get all uppity and shut you down when you point out its ignorance.
If you were paired with an idiot programmer when you needed help the most, both of you would be fired for zero productivity. Being paired with a programming assistant should be a good, learning experience, not an exercise in futility.
Again, this product, if you can call it that, is NOT ready for primetime. It is apparent that Microsoft does not use this lame tool in-house so how can they possibly expect us to use it?
I haven't seen that particular response in Github Copilot. I've seen it in Microsoft Copilot/Bing Chat, which is a different product. Would appreciate if you can share a full conversation (right click + copy all) where you saw that response.
I have canceled my subscription. Cody and Codium kick Copilot's butt!
It used to be good. It's been downhill since release. I cancelled my subscription since I feel like I'm getting scammed.
How to develop a Zed extension?
1. It should be a language extension.
2. It provides a hotkey.
3. Its main feature is selecting the content between matching characters, like `()`, `[]`, `{}`, `""`, `''`, `<>`, etc.
4. Press once to select the content between the characters; press twice to select the content plus the characters.
5. If you continue to press, find the next-level "bracket" and repeat the above operation.
Developing a Zed extension to meet your requirements involves several steps. Zed is a code editor that supports extensions, and you can create one to add custom functionality. Below is a step-by-step guide to help you create a language extension that provides a hotkey for selecting content between matching characters.
### Step 1: Set Up Your Development Environment
1. **Install Zed**: Make sure you have Zed installed on your system.
2. **Create a New Extension**: You can start by creating a new directory for your extension.
### Step 2: Define the Extension Metadata
Create a `package.json` file in your extension directory to define the metadata for your extension.
```json
{
  "name": "bracket-selector",
  "version": "1.0.0",
  "description": "Select content between matching characters",
  "main": "index.js",
  "contributes": {
    "commands": [
      {
        "command": "bracketSelector.select",
        "title": "Select Between Brackets"
      }
    ],
    "keybindings": [
      {
        "command": "bracketSelector.select",
        "key": "ctrl+alt+b",
        "when": "editorTextFocus"
      }
    ]
  }
}
```
### Step 3: Implement the Extension Logic
Create an `index.js` file in your extension directory. This file will contain the logic for selecting content between matching characters.
```javascript
const vscode = require('vscode');
```
I have lost track of how many times it has given me the wrong answer. When I inquired about Zed, it responded with information about VS Code.
It is only capable of managing fairly minor tasks, such as rewriting a small function. Even then, it occasionally makes mistakes. Furthermore, it struggles to remember history as well as other AI tools do. Overall, Copilot is a subpar tool compared to its competitors.
Recently it's completely broken, producing invalid code where it used to work fine. Unsubscribing now; paying too much for more headaches...
No joke, this is its output... the import statements are all messed up.
I just tried something extremely basic in Python, giving it the prompt: "Print a table of tan values for angles between 3.0 degrees and 12.0 degrees, in increments of 0.5 degrees"
...and this is what it came up with:
print("Angle (degrees) Tangent") # Print the table header
for angle in range(6, 25, 1): # Loop through the angles from 3.0 to 12.0 degrees
angle_in_radians = math.radians(angle) # Convert the angle to radians
tangent_value = math.tan(angle_in_radians) # Calculate the tangent of the angle
print(f"{angle:5.1f} {tangent_value:10.8f}") # Print the angle and tangent value
So it doubled the angle values in the `for` expression because `range()` uses integers. But there are two glaring mistakes. The first is the comment beside the `for` expression, which parrots what I asked for even though the range values were doubled, and Copilot doesn't bother explaining why. (If you're going to bother commenting something as trivial as `for`/`range`, the only interesting comment would be why you doubled the range values.) Then it completely forgets to divide `angle` by 2 inside the loop, leading to incorrect results:
```
Angle (degrees) Tangent
  6.0 0.10510424
  7.0 0.12278456
  8.0 0.14054083
  9.0 0.15838444
 10.0 0.17632698
 11.0 0.19438031
 12.0 0.21255656
 13.0 0.23086819
 14.0 0.24932800
 15.0 0.26794919
 16.0 0.28674539
 17.0 0.30573068
 18.0 0.32491970
 19.0 0.34432761
 20.0 0.36397023
 21.0 0.38386404
 22.0 0.40402623
 23.0 0.42447482
 24.0 0.44522869
```
Wowzers.
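For comparison, here is a minimal corrected sketch of what the prompt asked for (my own illustration, not Copilot output): it keeps the integer `range()` trick but divides by 2 inside the loop, and adds the missing `math` import.

```python
import math

# One possible fix: range() only accepts integers, so iterate over
# half-degrees (6..24) and divide by 2 to recover 3.0..12.0 in 0.5-degree steps.
print("Angle (degrees) Tangent")  # Print the table header
for half_degrees in range(6, 25):
    angle = half_degrees / 2  # Undo the doubling before doing any math
    tangent_value = math.tan(math.radians(angle))  # Tangent of the angle
    print(f"{angle:5.1f} {tangent_value:10.8f}")  # Print the angle and its tangent
```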
After using Cursor, Copilot really seems like it's nowhere even close.
> After using Cursor, Copilot really seems like it's nowhere even close.
I agree with this; Copilot is two steps below
I switched to Cursor AI and won't go back to Copilot even though my Copilot subscription still has 6 months left. Hopefully the Copilot team will do something groundbreaking and new.
> I switched to Cursor AI and won't go back to Copilot even though my Copilot subscription still has 6 months left. Hopefully the Copilot team will do something groundbreaking and new.
It seems like they've added new features :)
I have to agree. I've used it for a year now, generating mostly JavaScript but also TypeScript, PHP, and C#. I have to say the PHP and C# suggestions are OK, but not even close to what a senior programmer can produce. For JavaScript, it's just laughable. Every suggestion Copilot gives you makes your JS code look like it was made by a 12-year-old Minecraft modder.
Using Copilot is an exercise in futility. For it to claim to be an AI pair-programming tool, it has to be able to provide correct answers to sometimes complex questions. Copilot fails miserably 99% of the time!
If you ask a question, Copilot will respond with something, but invariably it is wrong. When you point this out, you get the usual garbage response like, "Oh, I'm sorry. You are correct." It will give another response that is a bit more correct than the previous answer, yet when you point out that something else is wrong with that response, it will respond again fixing what you just reported, but this new response contains the same bogus, incorrect information from an earlier response that I had already reported as incorrect!
For a supposed "learning tool" it makes no effort to even learn from its own mistakes. This tool is worthless until such time that Microsoft figures this out and does something about it. Don't they use this tool in their own code production? If so, why are there still so many incorrect answers?