AI Native Data App Development framework with AWEL (Agentic Workflow Expression Language) and Agents
[Bug] LLM test using prompt page does not get complete results #2142
Open
adogwangwang opened 1 week ago
Search before asking
Operating system information
Linux
Python version information
DB-GPT version
main
Related scenes
Installation Information
[ ] Installation From Source
[X] Docker Installation
[ ] Docker Compose Installation
[ ] Cluster Installation
[ ] AutoDL Image
[ ] Other
Device information
GPU V100
Models information
LLM: qwen2.5-72b
What happened
When I use a prompt, enter my input, and click "LLM test", I find that the backend logs a complete answer, but on the prompt page I can only see the first sentence, which is strange. Could anyone explain why? The result is shown in the screenshot: the black text is part of my backend output, which is far more than what is shown in LLM OUT.

Also, what does the "output verification" button at the bottom right of the prompt page do? After clicking it, a red message appears in LLM OUT: "No available prompt template found for the current scene, chat_with_db_qa".
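The symptom described above looks like the page renders only the first chunk of a streamed response while the backend logs the whole thing. This is a hypothetical sketch of that failure mode, not DB-GPT's actual code: `stream_chunks` is a stand-in for a streamed LLM reply, and the two variables contrast keeping only the first chunk with accumulating every chunk before rendering.

```python
def stream_chunks():
    """Stand-in generator for a streamed LLM response (not the real DB-GPT API)."""
    yield "The first sentence of the answer."
    yield " A second sentence with more detail."
    yield " The remainder of the backend output."

# Symptom as reported: only the first streamed chunk reaches the UI.
first_only = next(stream_chunks())

# Expected behavior: accumulate every chunk before rendering.
full_answer = "".join(stream_chunks())

print(first_only)
print(full_answer)
```

If the prompt-test view stops consuming the stream after the first event, the UI and the backend log would diverge exactly as in the screenshot.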
What you expected to happen
I would appreciate answers to these two questions.
How to reproduce
Just connect to a database and run the LLM test from the prompt page.
Additional context
No response
Are you willing to submit PR?