snexus / llm-search

Querying local documents, powered by LLM
MIT License

have ability to ask follow-up questions that remember context of previous result #73

Closed Hisma closed 9 months ago

Hisma commented 9 months ago

This would be a very useful feature to have. I have noticed with further testing (at least using GPT-3.5) that when asking follow-up queries, it does not remember the context of the previous query and instead gives an "I don't know".

For example, I ask "what is the GHG footprint of yellow pea protein?" and it will give an answer like "15.5 kg".

Then I ask a follow-up question, "what about green pea protein?"

In regular ChatGPT, and other LLMs, it would normally remember the context of the previous answer it gave and follow up with the GHG footprint of green pea protein. But with this application, it will instead say "I don't know" to this follow-up question.

Is this something that can be easily added? It may have something to do with LangChain and how we query the LLM. Not really sure.

If it's something you can look into, that would be a very useful enhancement.

Thank you!

snexus commented 9 months ago

Thanks for the suggestion.

In a chat interaction, the model's limited context window is filled with fragments of the previous conversation, creating the illusion that the chatbot has memory. In reality, the entire conversation has to be sent back and forth on every turn.
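
For illustration, here is a minimal sketch of what chat "memory" actually is. This is not llm-search code; it assumes the OpenAI Python v1 client, and the helper is hypothetical:

```python
# Chat "memory" is just the client resending the whole message list each turn.
# Hypothetical helper for illustration, not part of llm-search.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a helpful assistant."}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=history,  # the entire conversation goes back every time
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

ask("What is the GHG footprint of yellow pea protein?")
ask("What about green pea protein?")  # only works because the first turn is resent
```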

RAG is different from regular chat. The model's context window is filled with document extracts that ground the answer, rather than with the previous conversation.
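
As a rough sketch of what a single-turn RAG prompt looks like (simplified, not the library's actual prompt; `retrieve` is a hypothetical stand-in for the vector-store lookup):

```python
# The context window is spent on retrieved document extracts,
# not on conversation history.
def build_rag_prompt(question: str, retrieve, max_chunks: int = 5) -> str:
    chunks = retrieve(question, k=max_chunks)  # document extracts, ranked by similarity
    context = "\n\n".join(chunks)
    return (
        "Answer the question using only the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```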

I am afraid it is not feasible with current context window lengths, unless you use expensive models with extremely long context windows. Even then, while the model would remember some of the previous requests, it would have less room for the factual extracts needed to provide the answer.
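
To make the trade-off concrete, here is a back-of-the-envelope calculation with purely illustrative numbers (they are assumptions, not measured from llm-search):

```python
# With a fixed context budget, every token spent on chat history is a token
# not spent on document extracts. Numbers are illustrative only.
CONTEXT_BUDGET = 4096          # e.g. a gpt-3.5-turbo class model
RESERVED_FOR_ANSWER = 512
CHUNK_TOKENS = 400             # assumed size of one document extract

def chunks_that_fit(history_tokens: int) -> int:
    available = CONTEXT_BUDGET - RESERVED_FOR_ANSWER - history_tokens
    return max(available // CHUNK_TOKENS, 0)

print(chunks_that_fit(0))      # 8 extracts with no history
print(chunks_that_fit(2000))   # 3 extracts once history is carried along
```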

It might become possible in the future as the context length of the models increases.

Hisma commented 9 months ago

Thank you for the response. Not much we can do about this at this time, then. It's not a big deal; it's a nice-to-have, and it can be mitigated with prompting. I'll close this for now.