marella / chatdocs

Chat with your documents offline using AI.
MIT License

Feature request: ConversationalRetrieval and definition of chain_type #30

Open 94bb494nd41f opened 1 year ago

94bb494nd41f commented 1 year ago

I think ConversationalRetrieval would be a great feature for this awesome project: https://python.langchain.com/docs/modules/chains/popular/chat_vector_db. Unfortunately I am not good enough to make it work; I don't know how to pass the "memory" around.

Additionally, I think some users would also benefit from a "threads" option for ctransformers, as well as from making the chain_type and the search_type of the retriever configurable.
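To make the "passing the memory around" part concrete, here is a minimal pure-Python sketch (not LangChain; all names here are hypothetical) of the pattern a conversational chain uses: a memory object is read at the start of each turn and written back at the end, so the next question sees the earlier turns.

```python
class BufferMemory:
    """Hypothetical stand-in for something like ConversationBufferMemory:
    it just keeps (question, answer) pairs from earlier turns."""

    def __init__(self):
        self.history = []

    def load(self):
        # what the chain reads in as "chat_history" at the start of a turn
        return list(self.history)

    def save(self, question, answer):
        # what the chain writes back at the end of a turn
        self.history.append((question, answer))


def ask(memory, question, answer_fn):
    chat_history = memory.load()                 # 1. read past turns
    answer = answer_fn(question, chat_history)   # 2. answer using the history
    memory.save(question, answer)                # 3. record the new turn
    return answer


memory = BufferMemory()
a1 = ask(memory, "hi", lambda q, h: f"echo:{q} (turns so far: {len(h)})")
a2 = ask(memory, "again", lambda q, h: f"echo:{q} (turns so far: {len(h)})")
# a1 sees 0 earlier turns, a2 sees 1 -- the memory was passed around
```

The key point is that the memory lives outside any single call: the chain only needs to be handed the same object on every turn.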

marella commented 1 year ago

I will look into this. If the performance is good, I will make it the default; otherwise I will add a config option to enable it. It will also solve #22.

Ananderz commented 1 year ago

Looking forward to this @marella. Is this still on your to-do list?

marella commented 1 year ago

Hi, yes. I was out of station with a slow internet for the past few days, so the progress has slowed down. I will start looking into the pending issues next week.

Ananderz commented 1 year ago

> Hi, yes. I was out of station with a slow internet for the past few days, so the progress has slowed down. I will start looking into the pending issues next week.

Great to have you back @marella

Ananderz commented 1 year ago

@marella I have been trying to implement this function on my own. I think I might almost be there and have it functional. The problem is that so far I can't get it to remember the prompt, only the answer to the prompt. Let me know if you want to see my changes.

marella commented 1 year ago

Great! Please share a link to your repo/branch.

Ananderz commented 1 year ago

I did this @marella

Here is my chains.py:

```python
from typing import Any, Callable, Dict, Optional

from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory

from .llms import get_llm
from .vectorstores import get_vectorstore


def get_retrieval_qa(
    config: Dict[str, Any],
    *,
    callback: Optional[Callable[[str], None]] = None,
) -> ConversationalRetrievalChain:
    db = get_vectorstore(config)
    retriever = db.as_retriever(**config["retriever"])
    llm = get_llm(config, callback=callback)
    memory = ConversationBufferMemory(
        memory_key="chat_history",
        return_messages=True,
        input_key="question",
        output_key="answer",
    )
    return ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=retriever,
        return_source_documents=True,
        memory=memory,
    )
```

In index.html I changed this:

`answer.innerText = res.result;` → `answer.innerText = res.answer;`

In ui.py I changed this:

`res["result"] = data["result"]` → `res["answer"] = data["answer"]`
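The key renames above follow from the output shapes of the two chains: RetrievalQA puts its text under "result", while ConversationalRetrievalChain with `output_key="answer"` puts it under "answer". A small illustrative sketch (the dicts below are assumed shapes, not real chain output) shows how a UI could tolerate either:

```python
# Assumed output shapes, based on the key renames described in this thread:
# RetrievalQA -> "result", ConversationalRetrievalChain -> "answer".
retrieval_qa_res = {"result": "some answer", "source_documents": []}
conversational_res = {"answer": "some answer", "source_documents": []}


def extract_answer(res):
    # Prefer the conversational chain's key, fall back to RetrievalQA's.
    return res.get("answer", res.get("result"))
```

Handling both keys in one place would avoid having to patch index.html and ui.py in lockstep with the chain type.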

Ananderz commented 1 year ago

It repeats the question before it gives an answer, and then the repeated question is removed.
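One likely explanation (an assumption, based on how ConversationalRetrievalChain works): the chain runs two LLM generations per turn. First it condenses the chat history plus the new question into a standalone question, then it answers that standalone question. A streaming callback attached to the same LLM streams both generations, so the rephrased question appears before the real answer. A minimal pure-Python sketch of the two-step flow (all function names here are hypothetical stand-ins):

```python
def condense(question, chat_history):
    # stand-in for the condense-question LLM call
    return question if not chat_history else f"(rephrased) {question}"


def answer(standalone_question, docs):
    # stand-in for the answer LLM call
    return f"answer to {standalone_question!r} using {len(docs)} docs"


def run(question, chat_history, retriever, stream):
    standalone = condense(question, chat_history)
    stream(standalone)              # streamed first: looks like an echoed question
    docs = retriever(standalone)
    result = answer(standalone, docs)
    stream(result)                  # then the actual answer
    return result


tokens = []
run("what is X?", [("hi", "hello")], lambda q: ["doc"], tokens.append)
# tokens now holds the rephrased question followed by the answer
```

If that is the cause, one common workaround in LangChain is to pass a separate, non-streaming LLM for the condense step (e.g. via the `condense_question_llm` argument of `ConversationalRetrievalChain.from_llm`), so only the answer generation reaches the callback. I have not verified that against this project's `get_llm` setup.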