continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.
https://docs.continue.dev/
Apache License 2.0

Input prompts are truncated #2901

Closed: xldistance closed this 2 days ago

xldistance commented 3 days ago


Relevant environment info

- OS: Windows 11
- Continue version: 0.8.55
- IDE version:
- Model: Rombos-Qwen2.5-Coder:32b

Description

Use Ctrl+L to select a longer piece of code, then enter a longer prompt: the entire prompt gets truncated. With a shorter prompt, nothing is truncated.

Longer prompt (translated from Chinese): "Please refactor the following code to improve its conciseness, efficiency, and logic. Follow these steps: 1. Provide the complete refactored code in markdown format. 2. Explain in detail how the refactored code differs from the original, covering structural improvements, logic optimization, efficiency gains, and code simplification. 3. Explain the reason for and benefit of each significant change. 4. Keep the answer concise and avoid repetition or irrelevant content. Please answer in Simplified Chinese. Here is the code to refactor:\n\n{{{ input }}}"

Shorter prompt: "Refactor the code above"

To reproduce

No response

Log output

No response

xldistance commented 3 days ago

I found that when the role is system, the prompt is normal, but when the role is user, the prompt is truncated.

xldistance commented 3 days ago

Previously, with maxTokens set to 12000 and contextLength set to 13000, the prompt was truncated; now, with maxTokens at 12000 and contextLength at 32000, the prompt is normal.
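The numbers reported above line up with how the available prompt budget is typically computed: the tokens left for the prompt are roughly contextLength minus maxTokens. A minimal sketch of that arithmetic (the function name is hypothetical, not from Continue's codebase):

```typescript
// Hypothetical sketch: tokens left for the prompt after reserving
// maxTokens for the model's completion.
function promptBudget(contextLength: number, maxTokens: number): number {
  return contextLength - maxTokens;
}

// With the settings reported above:
console.log(promptBudget(13000, 12000)); // 1000 tokens: long prompts get truncated
console.log(promptBudget(32000, 12000)); // 20000 tokens: the prompt fits
```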

sestinj commented 2 days ago

This is largely intentional behavior: we are forced to truncate something if the context goes over the limit, otherwise the API will just error out. We have to include maxTokens in the context-length calculation because that is how all LLM APIs count against the limit.
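The behavior described above can be sketched as follows. This is an illustrative simplification, not Continue's actual implementation: token counting is approximated by whitespace splitting, and real implementations use a proper tokenizer.

```typescript
// Hypothetical sketch of context-limit truncation. If the prompt exceeds
// contextLength - maxTokens, leading tokens are dropped so the request
// stays under the API's limit.
function truncatePrompt(
  prompt: string,
  contextLength: number,
  maxTokens: number
): string {
  // The API counts completion tokens (maxTokens) against the same limit.
  const budget = contextLength - maxTokens;
  const tokens = prompt.split(/\s+/);
  if (tokens.length <= budget) return prompt;
  // Keep the most recent tokens, dropping the beginning of the prompt.
  return tokens.slice(tokens.length - budget).join(" ");
}
```

With a budget of only 1000 tokens (13000 - 12000), a long refactoring prompt plus selected code easily overflows, which matches the truncation the reporter saw.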

sestinj commented 2 days ago

I made an adjustment here so that we can warn users when this might happen: https://github.com/continuedev/continue/pull/2937/commits/b614c43f8dc791e5a1358dfa77c0de0cc9d845bf