CsabaConsulting / ThruThinkCohereWeaviateChat

Cohere and Weaviate powered ThruThink support chat on Streamlit
https://thruthinksupport.streamlit.app/
MIT License

Follow-up questions trick the fusion #5

Open MrCsabaToth opened 9 months ago

MrCsabaToth commented 9 months ago

Example:

Q: "What do you know about ThruThink Deal Score?"

A: "ThruThink Deal Score is a proprietary feature [1.1] that evaluates multiple factors [1.1] involving historical information, the transaction itself and the projections [1.1], which result in an overall grade or score for the "Deal" [1.1]. It is an optional feature [1.2] that is turned off by default [1.2] within the ThruThink web application. It must be manually turned on by the user [1.2] in the Workflow menu, Step 7 [1.2], by clicking the button for ThruThink Evaluation inputs [1.2] and choosing "Yes" [1.2]. The ThruThink Deal Score provides an "A", "B", "C", "D" or "F" [1.2] rating of the Deal, with "A" [1.2] being the highest rating. The rating evaluates the Deal or comparative benefit to the Owner as presented by the User Inputs. [1.2] Would you like to know more about the ThruThink Evaluation?"

Q: "Yes I would like"

Currently we generate variations for the current question only. The variation generator LLM doesn't have the context of the co.chat conversation, so a context-dependent follow-up such as "Yes I would like" cannot produce useful variations. We'll need to work on this.
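To illustrate the problem, here is a minimal sketch (the function name and prompt wording are hypothetical, not the app's actual code) of a variation step that only sees the latest question. With no conversation attached, a follow-up like "Yes I would like" gives the variation LLM nothing to rephrase:

```python
def build_variation_prompt(question: str, n: int = 3) -> str:
    """Build a prompt asking an LLM for n rephrasings of a single question.

    Note: only the latest question is included -- no chat history -- which is
    exactly why context-dependent follow-ups break this step.
    """
    return (
        f"Generate {n} alternative phrasings of the following question, "
        f"one per line:\n\n{question}"
    )

# The follow-up reaches the variation LLM stripped of all context:
prompt = build_variation_prompt("Yes I would like")
```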

MrCsabaToth commented 9 months ago

Here's how this could be tackled: Cohere's Coral co.chat is able to return the chat history; see return_chat_history=True under "Building the Chat History" in https://txt.cohere.com/chatbot-chat-endpoint/. We could also track the history ourselves, but we can rely on return_chat_history=True. Once we have the history, we have two options:

  1. Supply the history along with the latest question to the variation generation. This could be sub-optimal from a vector embedding perspective: each back-and-forth in the conversation can have its own vector in the latent embedding space, and if we present the whole history it will be hard to come up with a good singular compound vector for the whole chat.
  2. We can insert an extra step before the variation generation. This step would resolve the latest question in case it contains any references to the context. Once resolved, we could use the existing variation generation prompt. The context resolution might be a task somewhat similar to summarization. The resolution prompt could be something like this: "You are presented with a conversation and a follow-up query. The follow-up query might contain references to the conversation. Resolve any references in the follow-up query so that it can stand alone without the context. Don't provide any explanations or other questions; present only the resolved form of the follow-up query."
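Option 2 could be sketched as a pure prompt-building helper (the function name is hypothetical; the turn dicts follow the `{"role": ..., "message": ...}` shape co.chat uses for chat history):

```python
def build_resolution_prompt(chat_history: list, follow_up: str) -> str:
    """Format the conversation plus the follow-up query into a
    reference-resolution prompt, so an LLM can rewrite the follow-up
    as a stand-alone question before variation generation."""
    transcript = "\n".join(
        f"{turn['role']}: {turn['message']}" for turn in chat_history
    )
    return (
        "You are presented with a conversation and a follow-up query. "
        "The follow-up query might contain references to the conversation. "
        "Resolve any references in the follow-up query so that it can stand "
        "alone without the context. Don't provide any explanations or other "
        "questions; present only the resolved form of the follow-up query.\n\n"
        f"Conversation:\n{transcript}\n\nFollow-up query: {follow_up}"
    )

# Sketch of usage (not executed here): send the prompt through co.chat and
# feed the resolved question into the existing variation generation step.
# resolved = co.chat(message=build_resolution_prompt(history, "Yes I would like")).text
```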
MrCsabaToth commented 9 months ago

Also note that we already store the history ourselves right now, so we possibly don't even need co.chat's return_chat_history=True.
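Keeping the history locally is straightforward; a minimal sketch (helper name hypothetical) that accumulates turns in the same shape co.chat's chat_history parameter expects:

```python
# Locally maintained history, in the role/message shape used by co.chat.
chat_history: list = []

def record_turn(user_message: str, bot_reply: str) -> None:
    """Append one user/chatbot exchange to the locally kept history."""
    chat_history.append({"role": "USER", "message": user_message})
    chat_history.append({"role": "CHATBOT", "message": bot_reply})

record_turn(
    "What do you know about ThruThink Deal Score?",
    "ThruThink Deal Score is a proprietary feature...",
)
# chat_history can now be passed to co.chat(..., chat_history=chat_history)
# instead of asking the API to echo it back with return_chat_history=True.
```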

MrCsabaToth commented 8 months ago

I'm trying to insert a resolution step that produces a stand-alone question lacking any contextual references, but the models are not able to handle it reliably yet. https://www.reddit.com/r/PromptEngineering/comments/186aqhg/looking_for_a_prompt_which_can_resolve_any/