This is not a bug report; it is a strange behavior I noticed while using DB-GPT.
During a chat, DB-GPT seems to look up related data from the table "chat_history_message",
because when this table is filled with good or bad chats, the quality of new chat results differs hugely.
Could the author explain the code logic a bit? During a chat, can the data in the table "chat_history_message" be used as knowledge data and combined with the prompt sent to the LLM? And if so, what is the detailed process?
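I'm not sure of the actual code path, so here is a minimal sketch of the generic pattern I suspect is happening: recent rows from the history table are fetched and prepended to the new user input before the combined text is sent to the LLM. The column names (`conv_uid`, `role`, `content`) and the prompt layout are my assumptions for illustration, not DB-GPT's real schema or implementation.

```python
import sqlite3


def build_prompt(conn, conv_uid, user_input, max_turns=5):
    """Sketch: prepend recent chat history to the new user input.

    Assumed columns (illustrative only): id, conv_uid, role, content.
    """
    # Fetch the most recent messages for this conversation, newest first
    rows = conn.execute(
        "SELECT role, content FROM chat_history_message "
        "WHERE conv_uid = ? ORDER BY id DESC LIMIT ?",
        (conv_uid, max_turns * 2),
    ).fetchall()
    # Restore chronological order and flatten into a transcript
    history = "\n".join(f"{role}: {content}" for role, content in reversed(rows))
    if history:
        return f"{history}\nhuman: {user_input}"
    return f"human: {user_input}"
```

If something like this is going on, it would explain my observation: whatever is already in "chat_history_message" ends up inside the prompt, so good or bad prior chats directly shape the new answer.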
Thanks
Search before asking
Operating system information
Linux
Python version information
DB-GPT version
main
Related scenes
Installation Information
[ ] Installation From Source
[ ] Docker Installation
[ ] Docker Compose Installation
[ ] Cluster Installation
[ ] AutoDL Image
[ ] Other
Device information
no problem here
Models information
no problem here
What happened
What you expected to happen
no problem here
How to reproduce
no problem here
Additional context
no problem here
Are you willing to submit PR?