cline / cline

Autonomous coding agent right in your IDE, capable of creating/editing files, executing commands, using the browser, and more with your permission every step of the way.
https://marketplace.visualstudio.com/items?itemName=saoudrizwan.claude-dev
Apache License 2.0

Provide Full Updated Code #64

Closed · CiberNin closed this issue 3 months ago

CiberNin commented 3 months ago

I've noticed a common failure case: Claude goes to edit a file but does not provide the full updated code. It leaves placeholders like "to be done" or, even worse, it discards things because it writes "keep all functions below this".

We need stricter prompting to get the full updated code. We might even need to append the instruction to every prompt instead of only putting it in the system or tool instructions.
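A minimal sketch of that idea, assuming a hypothetical `buildUserMessage` helper; the function name and structure are illustrative only, not Cline's actual code:

```typescript
// Hypothetical helper: append the "full code" reminder to every user turn,
// rather than relying only on the system prompt or tool instructions.
const FULL_CODE_REMINDER =
  "Provide the FULL updated file contents. Do not truncate, summarize, " +
  "or replace existing code with placeholders like '// rest unchanged'.";

function buildUserMessage(taskText: string): string {
  // The reminder goes at the end of each prompt, where recent instructions
  // tend to carry more weight than a distant system prompt.
  return `${taskText}\n\n${FULL_CODE_REMINDER}`;
}
```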

mkearl commented 3 months ago

I think this is a limitation of all LLMs. It is like they are lazy, or trained specifically to abridge and give only the changes rather than being verbose and thorough. I struggle with this with every LLM. I would like to find the ultimate prompt for this but have yet to find it.

DeletedByAccident commented 3 months ago

I've had this problem arise constantly today. I even start all my chats with the tool with something like: "When providing your solution, provide a full code solution. Add to the existing code or update it. DO NOT remove existing functionality. Do not delete existing code."

It will still offer to delete 500 lines of code and replace them with one new function.

I tell it no and ask it to provide a full solution. It acknowledges the request, but still refuses to give the full solution.

This definitely has something to do with the length of the file contents, but the model is surprisingly stubborn about not outputting the whole thing. With smaller files, it works most of the time.

saoudrizwan commented 3 months ago

This is almost certainly due to your file being too large. I recommend switching your provider to Anthropic if you haven't already, since it can output a maximum of 8192 tokens while OpenRouter/Bedrock are limited to 4096. Another thing you can try is breaking your file into multiple smaller files. In any case, I'm about to release v1.1.0, which has stricter instructions around not truncating files, so please give these things a try and let me know if you still run into issues.
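For reference, a minimal sketch of what that provider difference looks like as configuration. The provider names mirror the comment above, but the map shape, numbers, and fallback here are illustrative assumptions, not Cline's actual settings:

```typescript
// Illustrative max-output-token ceilings per provider, as described above.
// Check each provider's current documentation before relying on these values.
const MAX_OUTPUT_TOKENS: Record<string, number> = {
  anthropic: 8192,  // direct Anthropic API supported the larger output cap
  openrouter: 4096, // routed providers were capped lower at the time
  bedrock: 4096,
};

function maxTokensFor(provider: string): number {
  return MAX_OUTPUT_TOKENS[provider] ?? 4096; // conservative default
}
```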

DeletedByAccident commented 3 months ago

> This is almost certainly due to your file being too large. I recommend switching your provider to Anthropic if you haven't already, since it can output a maximum of 8192 tokens while OpenRouter/Bedrock are limited to 4096. Another thing you can try is breaking your file into multiple smaller files. In any case, I'm about to release v1.1.0, which has stricter instructions around not truncating files, so please give these things a try and let me know if you still run into issues.

Appreciate it. I will be testing these out today and reporting back.

CiberNin commented 3 months ago

When I'm using normal Claude I get good results from just adding "Provide full updated code" at the end of each prompt.

I think one issue with the agentic approach is that you can't go back in the conversation and re-prompt. LLMs, once they make a mistake a few times in a row, tend to get "stuck" because now they are in a state space where mistakes are part of the potential output.

It might even make sense to have a recovery mechanism where you ask for a concise summary of what went wrong, revert the conversation state to before the issue, and append that summary to the prompt instead of just pushing forward.
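A rough sketch of that recovery loop, with entirely hypothetical helper names (`runTask`, `summarizeFailure`, `looksTruncated`) just to illustrate the idea, not how Cline actually works:

```typescript
// Hypothetical recovery loop: instead of continuing a conversation that has
// already gone off the rails, summarize the failure, rewind to a clean
// checkpoint, and retry once with that summary appended to the prompt.
interface Checkpoint {
  messages: string[]; // conversation state before the failed attempt
}

async function retryWithRecovery(
  prompt: string,
  checkpoint: Checkpoint,
  runTask: (messages: string[]) => Promise<string>,
  summarizeFailure: (badOutput: string) => Promise<string>,
): Promise<string> {
  const firstAttempt = await runTask([...checkpoint.messages, prompt]);
  if (!looksTruncated(firstAttempt)) return firstAttempt;

  // Ask for a concise note on what went wrong, then re-prompt from the
  // checkpoint so the mistake never enters the visible history.
  const note = await summarizeFailure(firstAttempt);
  const amendedPrompt =
    `${prompt}\n\nPrevious attempt failed: ${note}\n` +
    `Provide the FULL updated code this time.`;
  return runTask([...checkpoint.messages, amendedPrompt]);
}

function looksTruncated(output: string): boolean {
  // Crude heuristic for placeholder-style truncation.
  return /rest of (the )?code|remains (the )?same|\.\.\./i.test(output);
}
```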

DeletedByAccident commented 3 months ago

> When I'm using normal Claude I get good results from just adding "Provide full updated code" at the end of each prompt.
>
> I think one issue with the agentic approach is that you can't go back in the conversation and re-prompt. LLMs, once they make a mistake a few times in a row, tend to get "stuck" because now they are in a state space where mistakes are part of the potential output.
>
> It might even make sense to have a recovery mechanism where you ask for a concise summary of what went wrong, revert the conversation state to before the issue, and append that summary to the prompt instead of just pushing forward.

I am getting great results with Anthropic-Claude as well.

saoudrizwan commented 3 months ago

That is a great suggestion @CiberNin! Closing this ticket for now since I want to move all lazy-coding-related discussion to #14; please see this comment.

grabani commented 3 months ago

Hi ya - comparing the behaviour of Claude-dev with both the OpenAI and Anthropic web UIs, I see their approach is that as new lines are added to the text input box, the box grows in height, but after a certain height it shows a scroll bar instead. Do you think that is a plausible feature to add?
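A minimal sketch of that behaviour, assuming a plain textarea in the webview; the cap value and helper name are made up for illustration:

```typescript
// Hypothetical sketch: the input grows with its content up to a cap,
// then switches to an internal scroll bar.
const MAX_INPUT_HEIGHT_PX = 200; // illustrative cap, not a real Cline setting

function autoGrow(textarea: HTMLTextAreaElement): void {
  textarea.style.height = "auto"; // reset so scrollHeight reflects content
  const desired = textarea.scrollHeight;
  textarea.style.height = `${Math.min(desired, MAX_INPUT_HEIGHT_PX)}px`;
  textarea.style.overflowY = desired > MAX_INPUT_HEIGHT_PX ? "auto" : "hidden";
}

// Usage: re-run on every input event.
// textarea.addEventListener("input", () => autoGrow(textarea));
```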

Great work by the way :)