Hi @tbykowsk,
Thanks for raising this issue. I fixed the first issue in PR https://github.com/opea-project/GenAIExamples/pull/88. I'm currently attempting to reproduce the second issue and will keep you updated on my progress.
Hi @tbykowsk,
Currently, our knowledge base creation process lacks an appending strategy. Whenever a file is uploaded, a new knowledge base ID is generated; likewise, pasting an HTML link generates another new knowledge base ID. The frontend then uses the most recent knowledge base ID as the default for chatting. Consequently, if a question pertains to the uploaded file, the system searches only within the knowledge base associated with the most recently pasted link.
To enhance this process, we plan to refine the code and implement a knowledge appending strategy: new knowledge will be appended to the existing knowledge base rather than generating a new ID on each upload.
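Roughly, the difference looks like this (illustrative pseudo-Python only; the names `kb_store` and `ingest` and the ID format are hypothetical, not the actual dataprep code):

```python
# Illustrative sketch -- names and ID format are hypothetical.
import uuid

kb_store = {}  # knowledge base ID -> list of ingested documents

def ingest(doc, kb_id=None):
    # Current behavior: no ID is ever passed back in, so every upload
    # (file or pasted link) takes this branch and gets a fresh ID.
    if kb_id is None or kb_id not in kb_store:
        kb_id = "kb_" + uuid.uuid4().hex[:8]
        kb_store[kb_id] = []
    # Append strategy: reuse the existing ID and extend its contents.
    kb_store[kb_id].append(doc)
    return kb_id

kb_id = ingest("report.pdf")                        # creates the knowledge base
kb_id = ingest("https://example.com", kb_id=kb_id)  # appends to the same one
```

With the append strategy, both the uploaded file and the pasted link end up searchable under the same knowledge base ID the frontend chats against.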
Hi @lvliang-intel, thank you for such a quick response!
I can confirm that the first issue is resolved.
Thank you for the details about the knowledge base implementation. Maybe information about how it functions could be added to the README, so it is clear what to expect.
@lvliang-intel, are you able to say when you plan to implement the knowledge appending strategy?
@lvliang-intel, I am following the steps in the README, and after I successfully build the Docker container and set up my Hugging Face token, I encounter an issue that I don't know how to resolve. Can you guide me?
```
curl -v --noproxy '*' 127.0.0.1:8080/generate -X POST \
  -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":32}}' \
  -H 'Content-Type: application/json'
```

```
Note: Unnecessary use of -X or --request, POST is already inferred.
> POST /generate HTTP/1.1
> Host: 127.0.0.1:8080
> User-Agent: curl/7.68.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 70
```
@tbykowsk Please let us know if this issue is fixed and can be closed
@tbykowsk Since the dataprep microservice now supports the knowledge base append strategy, we will close this issue. Please let us know if there are any other issues.
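For reference, a quick way to verify the append behavior is to ingest a file and a link and check that both are retrievable in the same chat session. A minimal sketch, assuming the dataprep microservice listens on port 6007 at `/v1/dataprep` (check the dataprep README for the exact route and payload):

```python
# Assumed endpoint and port -- verify against the dataprep README.
import requests

DATAPREP_URL = "http://127.0.0.1:6007/v1/dataprep"

# Upload a local file; with the append strategy this should extend the
# existing knowledge base instead of creating a new knowledge base ID.
with open("report.pdf", "rb") as f:
    resp = requests.post(DATAPREP_URL, files={"files": f})
print(resp.status_code, resp.text)

# Paste a remote link; it should land in the same knowledge base.
resp = requests.post(DATAPREP_URL, data={"link_list": '["https://example.com"]'})
print(resp.status_code, resp.text)
```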
Hi, I followed the ChatQnA application README and encountered some problems. I started with code from the master branch.
This error causes the Frontend Service to be unresponsive, because it connects to the `/v1/rag/chat_stream` endpoint, which is broken. When I reverted `app/server.py` to this commit, the streaming endpoint started to work. It would be useful to add to the instructions the commit/release they were validated with.
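For anyone reproducing this, the streaming endpoint can be exercised directly with something like the sketch below; the host, port, and payload shape are my assumptions (only the `/v1/rag/chat_stream` path comes from the app), so adjust them to your deployment:

```python
# Host, port, and payload shape are assumptions -- adjust to your deployment.
import requests

resp = requests.post(
    "http://127.0.0.1:8000/v1/rag/chat_stream",
    json={"query": "What is Deep Learning?"},
    stream=True,  # the endpoint is expected to stream tokens incrementally
)
resp.raise_for_status()
for chunk in resp.iter_content(chunk_size=None, decode_unicode=True):
    print(chunk, end="", flush=True)
```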
The second issue concerns the upload feature:

> Please upload your local file or paste a remote file link, and Chat will respond based on the content of the uploaded file.

In the log, new data sources are ingested correctly. However, new data sources do not seem to be instantly included during response generation. A restart of `app/server.py` is required for the new information to become available, even though in the backend the index changes with each uploaded document and is reloaded with each response. Example from the log:

```
[rag - reload retriever] reload with index: rag-rediskb_147637c0
```
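My reading of that log line is a per-response reload along these lines (illustrative names only, not the actual `app/server.py` code):

```python
# Hypothetical reconstruction of the reload behavior implied by the log.
_current_index = None
_retriever = None

def reload_retriever(index_name, build_retriever):
    """Swap in a retriever for `index_name` whenever the active index changes."""
    global _current_index, _retriever
    if index_name != _current_index:
        print(f"[rag - reload retriever] reload with index: {index_name}")
        _current_index = index_name
        _retriever = build_retriever(index_name)  # hypothetical factory
    return _retriever
```

If something upstream of the retriever (for example, a chain cached at startup) still held the old index, a reload like this alone would not surface new documents, which would match the behavior I see.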
To confirm this problem, I used a working `app/server.py` from the aforementioned commit; the rest of the code was from the master branch.

Please look into those issues. Thanks!