syntex01 opened 1 year ago
I noticed that as well, do you think that this could be solved via a more detailed and specific prompt for writing the KB article?
Yes, for sure. We have to rework the process too. I would propose the following idea: for all KB articles that are related to the summary that was created from the chats in the sliding window, we ask GPT individually what additional information it learned regarding that topic, and we then just add that information to the KB article. Later on we can remove duplicate information or resolve contradictions when we do a reforming event. After that we have to find out which additional topics mentioned in the summary are not yet covered by a KB article, and then we again ask what information can be gained from the summary regarding each of those articles.

Also, we have to improve the summaries by providing better instructions to GPT, and more importantly we have to improve the knowledge extraction from the summary. Both can be achieved by providing multiple exact examples in our prompt and improving our description of the task. I will start implementing tomorrow.
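Roughly, the flow I have in mind, as a sketch (the helpers `ask_gpt`, `find_related_articles` and `find_new_topics` are just placeholders, nothing from the repo yet):

```python
def update_knowledge_base(summary, kb, ask_gpt, find_related_articles, find_new_topics):
    """Sketch of the proposed update flow; all callables are placeholders."""
    # 1) For every existing article related to the summary, ask GPT what the
    #    summary adds about that topic and simply append it. Duplicates and
    #    contradictions get cleaned up later during a reforming event.
    for article in find_related_articles(summary, kb):
        addition = ask_gpt(
            f"Summary:\n{summary}\n\nKB article '{article.title}':\n{article.content}\n\n"
            "What additional information about this topic does the summary contain that "
            "is not yet in the article? Answer with the new facts only, or `<nothing>`."
        )
        if "<nothing>" not in addition:
            article.content += "\n" + addition

    # 2) For topics in the summary that have no article yet, create one article
    #    per topic from the information the summary provides about it.
    for topic in find_new_topics(summary, kb):
        content = ask_gpt(
            f"Summary:\n{summary}\n\nWrite a short, concise KB article containing only "
            f"what the summary says about the topic '{topic}'. Do not invent information."
        )
        kb.add_article(title=topic, content=content)
```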
Yes, few-shot examples really helped in other projects of mine, especially with GPT-3.5.
> for all KB articles that are related to the summary that was created from the chats in the sliding window, we ask GPT individually what additional information it learned regarding that topic, and we then just add that information to the KB article.
Right now, I choose the KB article most similar to the summary. I currently choose just one KB article (a simple max function), because I'm worried it would be slow or use too many tokens to do more than one. I then ask ChatGPT whether the topic of the summary is relevant or not (this prompt could be improved).
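Conceptually the selection is just a max over similarities, roughly like this (not the exact code from the repo; the `.embedding` attribute is an assumption):

```python
import numpy as np

def most_similar_article(summary_embedding, articles):
    """Pick the single KB article whose embedding is closest to the summary.
    Simplified stand-in for the real selection code; assumes each article
    carries an `.embedding` vector."""
    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    return max(articles, key=lambda article: cosine(summary_embedding, article.embedding))
```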
I agree with you that we need a better way to update existing KB articles. Like you said, we could ask ChatGPT in the same classification prompt whether the summary contains any new information that we could add to the KB article.
Some ideas I have right now:
Currently the classification prompt is written like this:
```python
@chat_gpt_prompt
def _llm_classification(self, summary_node, knowledge_node):
    """
    This method is responsible for classifying a summary node as either
    a new knowledge node or an existing knowledge node.
    """
    prompt = (
        f"Given the following summary (X):\n\n{summary_node.content}\n\n"
        f"and the following text (Y):\n\n{knowledge_node.content}\n\n"
        "Please classify whether the summary is similar or distinct to the text. If Y has a title, please compare the summary to the title.\n\n"
        "If the summary has a different/distinct topic to the provided text, please answer with `<no>`\n\n"
        "If the summary is similar to the provided text, please answer with `<yes>`\n\n"
    )
    return prompt
```
It simply responds with `<yes>` or `<no>`. But what if it responded like this instead:
```json
{
    "has_same_topic": "true",
    "new_information": "A new person started helping Bob work on his project called Syntex"
}
```
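On the code side, I'm imagining something roughly like this (just a sketch, not implemented yet; `ask_gpt` stands in for however we actually send the prompt):

```python
import json

def classify_and_extract(summary_text, article_text, ask_gpt):
    """Ask for a JSON answer instead of <yes>/<no>, so we get the topic match and
    any new information in a single call. `ask_gpt` is a placeholder for the
    existing prompt/response machinery."""
    prompt = (
        f"Summary (X):\n\n{summary_text}\n\n"
        f"KB article (Y):\n\n{article_text}\n\n"
        "Answer with JSON only, in the form:\n"
        '{"has_same_topic": true/false, "new_information": "<facts from X missing from Y, or empty>"}'
    )
    raw = ask_gpt(prompt)
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # gpt-3.5 sometimes wraps the JSON in extra text, so fall back to a safe default
        return {"has_same_topic": False, "new_information": ""}
```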
Lemme know what you think
I think your idea is good, but I think it is necessary that we transfer all knowledge that can be gained from the chats to KB articles. Transferring it to only one article leads to unseparated articles or to lost information. It is also important that KB articles remain short and concise so that we don't use too many tokens, though when we only use GPT-3.5 the tokens are not that problematic. Another idea could be that we create as many new KB articles as needed from the summary, and afterwards we investigate whether those articles should be merged with existing ones, by asking GPT whether the nearest KB article is about the same topic. This would have the advantage that we first extract all the knowledge into multiple KB articles and then only have to worry about merging them well into existing articles. That could then be done via the idea you provided (see the sketch below). I have access to GPT-4 btw, if we want to test it with that too. Also, the rolling window has to be the same size as the max message length of the chatbot so that no information is used or summarized twice. In the future I will also change it from message count to tokens used.
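As a sketch of that extract-then-merge step (again, `ask_gpt` and `nearest_article` are placeholders, not real code):

```python
def merge_or_add(new_article, kb, ask_gpt, nearest_article):
    """After extracting fresh articles from the summary, check whether the nearest
    existing article covers the same topic; merge if yes, otherwise add the new
    article as its own entry. All helpers are placeholders."""
    existing = nearest_article(new_article, kb)
    if existing is not None:
        answer = ask_gpt(
            f"Article A:\n{existing.content}\n\nArticle B:\n{new_article.content}\n\n"
            "Are A and B about the same topic? Answer only `<yes>` or `<no>`."
        )
        if "<yes>" in answer:
            existing.content = ask_gpt(
                "Merge the following two articles about the same topic into one short, "
                f"concise article without losing information:\n\nA:\n{existing.content}\n\nB:\n{new_article.content}"
            )
            return
    kb.add_article(new_article)
```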
I think it is important that KB articles remain short so that we do not provide unnecessary information. It might also be necessary to provide multiple KB articles at once during a query. Is the KB article that is relevant to a user prompt currently only readable by the chatbot for one answer, or is it also added to the messages? That would actually be bad, because the same article would be added again and again if the user keeps talking about the same topic.
> Is the KB article that is relevant to a user prompt currently only readable by the chatbot for one answer?
Every time the user sends a message, a KB article gets inserted into the chat context. It's true that if the user is continually talking about the same thing, the same article will keep getting inserted.
A possible fix may be to check whether the same KB article is already in the `agent.messages` list; if it is, don't insert the KB article, otherwise insert it. That way, it won't repeat.
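Something like this (sketch; it assumes `agent.messages` is the usual list of role/content dicts):

```python
def insert_kb_article(agent, article_text):
    """Insert the retrieved KB article into the chat context only if it is not
    already among the messages the agent still remembers, so the same article
    doesn't pile up turn after turn."""
    already_present = any(article_text in message.get("content", "") for message in agent.messages)
    if not already_present:
        agent.messages.append({"role": "system", "content": f"Relevant memory:\n{article_text}"})
```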
As a side note: we now know that Bing uses GPT-4. And I'm sure you know that Bing searches the web by itself (I think it decides to search when necessary). I'm thinking of trying to implement this pattern in this project, letting the agent search its own memories. I had some success augmenting ChatGPT with tools in the past, so it should be possible.
Ok, if extracting multiple articles and then merging them is our best/most robust option in the long run, it might be wise to do that process in parallel or in the background in a new thread (or use asyncio) to keep response times low for users.
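For example, something as simple as (sketch):

```python
import threading

def update_kb_in_background(summary, update_knowledge_base):
    """Run the KB extraction/merging off the request path so the user gets a reply
    immediately. `update_knowledge_base` is whatever function ends up doing the work."""
    worker = threading.Thread(target=update_knowledge_base, args=(summary,), daemon=True)
    worker.start()
    return worker
```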
I thought about that too. My idea was that the chatbot could also think to itself between messages. So when you ask a question, the chatbot actively thinks about what additional information, if any, it would need to answer the question:

user: "what's my name?"
internal thought: "for this I would need to know more about the user"
memory: "[KB article about the user]"
assistant: "your name is: ..."
Then the chatbot could also decide when it needs multiple KB articles or none at all.
And we should for sure implement the KB article generation in a separate thread. Also, way later, we can enable the chatbot to just run with internal thought and give it access to the internet to create new KB articles.
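Roughly, the internal-thought loop I'm picturing (sketch; `ask_gpt` and `retrieve_articles` are made-up names):

```python
def answer_with_internal_thought(user_message, ask_gpt, retrieve_articles):
    """Before answering, the agent first thinks about what extra information it
    would need, fetches those memories, and only then produces the visible reply.
    Both callables are placeholders."""
    thought = ask_gpt(
        f"The user asked: {user_message}\n"
        "What additional information from your memory would you need to answer this? "
        "Reply with a short search query, or `<none>` if you need nothing."
    )
    memories = [] if "<none>" in thought else retrieve_articles(thought)
    context = "\n\n".join(memories)
    return ask_gpt(f"Relevant memories:\n{context}\n\nUser: {user_message}\nAnswer the user.")
```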
Yep, chain of thought reasoning improves performance pretty well, good idea!
You use the `@chat_gpt_prompt` decorator to convert the prompt output of the function into the response to that prompt. How can I set a system message for the chatbot that converts that prompt into an output?
I did a lot of changes to the memory; it should now basically do what we talked about. It has to be tested more, and we have to add examples to the system prompt when using the decorator. We also have to create the internal chat and possibly provide multiple KB articles. We also have to solve the problem of the same article being provided multiple times. That should not be hard, because I also implemented article names now; we could just remember the names of the articles we provided within the window the chatbot still remembers.
Can you merge my two requests then?
It is not working as intended, but the general structure should be there. Can you have a look at it?
Providing examples via system prompts should improve the situation. I also did not add documentation yet.
> You use the `@chat_gpt_prompt` decorator to convert the prompt output of the function into the response to that prompt. How can I set a system message for the chatbot that converts that prompt into an output?
I think we can create a new decorator `@chat_gpt_kshot`, which will require you to return a list of messages instead, like `[{role: content}, …]`.
I’ll implement this after I merge your changes, please read my comments on the pull request :)
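Roughly what I have in mind for it (sketch; `_call_chat_api` is a stand-in for the actual OpenAI call):

```python
import functools

def _call_chat_api(messages):
    """Placeholder - swap in the real chat completion request here."""
    raise NotImplementedError

def chat_gpt_kshot(func):
    """Sketch of the proposed decorator: the wrapped function builds and returns the
    full message list (system prompt, few-shot examples, final user message) and the
    decorator sends it off and returns the model's reply."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        messages = func(*args, **kwargs)  # e.g. [{"role": "system", "content": ...}, ...]
        return _call_chat_api(messages)
    return wrapper
```

That way a function like `_llm_classification` could return the system message and few-shot examples directly.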
I improved a few things; you can look at the code. The pull request should automatically use my newest code, right? From my testing, it works fundamentally now. I think we should work on the internal chat now. I already did the system prompt thing.
Ok, I made a few fixes. I tried using it with GPT-3.5 and in my experience it works well, except that the knowledge base articles are super verbose and it's currently creating more topics than necessary.
In general, good to hear. It would be preferable to use GPT-3.5 as much as possible; GPT-4 was a bit too verbose in some places too. We have to reformulate the instructions and examples a bit. Possibly we should not call it a KB article in front of the agent, but I think this problem is easily manageable. And I think it's good if it creates more articles; that means more specific information. And updating is not more expensive, because we only update the n nearest articles. What changes did you make?
It is very important that we prevent it from hallucinating additional information that was not mentioned. GPT-4 is way more reliable in that respect.
I fixed a few bugs; for example, we didn't save the topic before, and that's why it was creating extra topics. The topic system seems to be working well.
But the knowledge base articles are a bit too verbose and sometimes contain hallucinations instead of strictly relying on the provided information. This was based on my testing with GPT-3.5 (I don't have GPT-4).
Did you already apply for GPT-4 access? It did not take long for me. We can just modify the instructions and examples to solve the verbosity problem. With GPT-4 it will work for sure; with GPT-3.5 we have to try. GPT-4 reacts very well to instructions in my testing.
From my first testing, they are just summaries and don't represent individual ideas or people. They should contain concise information about what was learned about the topic during the chat.