[Closed] xldistance closed this issue 2 days ago
I found that when the role is system, the prompt is normal, but when the role is user, the prompt is truncated.
Before, with maxTokens set to 12000 and contextLength set to 13000, the prompt was truncated; now, with maxTokens set to 12000 and contextLength set to 32000, the prompt is normal.
This is largely intentional behavior: we are forced to truncate something if the context goes over the limit, otherwise the API will simply error out. We have to add maxTokens to the context-length calculation because that is how all LLM APIs compute it.
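The arithmetic behind this can be sketched as follows. This is a minimal illustration of the idea, not Continue's actual implementation; the function names are hypothetical:

```typescript
// Hypothetical sketch: the context window must hold both the prompt
// and the space reserved for the model's completion (maxTokens).
function availablePromptTokens(contextLength: number, maxTokens: number): number {
  return contextLength - maxTokens;
}

// The prompt must be truncated when it exceeds the remaining room.
function mustTruncate(promptTokens: number, contextLength: number, maxTokens: number): boolean {
  return promptTokens > availablePromptTokens(contextLength, maxTokens);
}

// Reporter's first config: 13000 - 12000 leaves only 1000 tokens for
// the prompt, so longer prompts get truncated.
const before = availablePromptTokens(13000, 12000); // 1000

// Second config: 32000 - 12000 leaves 20000 tokens, enough room.
const after = availablePromptTokens(32000, 12000); // 20000

console.log(before, after);
```

This is why raising contextLength (while keeping maxTokens fixed) made the truncation disappear: it is the difference between the two values, not contextLength alone, that bounds the prompt.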
I made an adjustment here so that we can warn users when this might happen: https://github.com/continuedev/continue/pull/2937/commits/b614c43f8dc791e5a1358dfa77c0de0cc9d845bf
Relevant environment info
Description
Use Ctrl+L to select a longer block of code, then enter a longer prompt: the entire prompt gets truncated. With a shorter prompt, nothing is truncated. Longer prompt (translated from Chinese): "Please refactor the following code to improve its conciseness, efficiency, and logic. Follow these steps: 1. Provide the complete refactored code in markdown format. 2. Explain in detail how the refactored code differs from the original, including structural improvements, logic optimization, efficiency gains, and code simplification. 3. Explain the reason for and benefit of each significant change. 4. Keep the answer concise and avoid repetition or irrelevant content. Please answer in Simplified Chinese. Here is the code to refactor:\n\n{{{ input }}}" Shorter prompt (translated): "Refactor the code above"
To reproduce
No response
Log output
No response