Closed: Randname666 closed this issue 1 month ago
This problem is caused by dirty data generated across multiple reboots. It does not affect operation, so you can ignore it.
But unfortunately, that one problematic task is blocking all other newly dispatched tasks. Does it simply go away if I wait?
I ended up purging all of the Docker volumes used by RAGFlow. That fixed the issue, but it also deleted all the documents, which is definitely not something you want to do if a lot of documents have already been processed.
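For anyone who goes this route, a minimal sketch of the purge, assuming the stack was started from RAGFlow's docker-compose.yml; note again that this is destructive:

```bash
# WARNING: destructive. This removes RAGFlow's containers AND its named
# volumes, i.e. every parsed document and all other stored state.
docker compose down -v
docker compose up -d
```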
I have the same problem.
I finally solved the problem by deleting the data in Redis.
```python
import redis

# Connect to RAGFlow's Redis (credentials from the default docker-compose setup).
r = redis.Redis(host="0.0.0.0", port=6379, password="infini_rag_flow")

# In my case the only key was b"rag_flow_svr_queue".
keys = r.keys("*")

# Drop the stuck task queue.
r.delete("rag_flow_svr_queue")
```
After deleting the data, the parsing process works well.
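If you'd rather inspect the queue before dropping it, a hedged redis-cli sketch follows: the container name `ragflow-redis` is an assumption (check `docker ps` for the real name), and the stream commands assume `rag_flow_svr_queue` is a Redis stream, which the key's role as a worker queue suggests:

```bash
# Attach redis-cli inside the Redis container; "ragflow-redis" is an assumed
# container name, check `docker ps` for the real one on your deployment.
docker exec -it ragflow-redis redis-cli -a infini_rag_flow

# Inside redis-cli:
KEYS *                                  # list all keys
TYPE rag_flow_svr_queue                 # confirm whether it is a stream
XLEN rag_flow_svr_queue                 # number of queued entries (if a stream)
XRANGE rag_flow_svr_queue - + COUNT 10  # peek at the oldest entries
DEL rag_flow_svr_queue                  # same effect as the Python snippet above
```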
Describe your problem
A problematic task (how it was generated is unknown) is blocking all other newly dispatched tasks, including non-PDF ones. The task is nowhere to be found in the WebUI, so it cannot be canceled there. The backend is currently emitting errors like this constantly:
Running `docker compose down` and then `docker compose up` doesn't resolve the issue. Is there a way to manually remove this problematic task? Additionally, is there an internal mechanism for purging or canceling tasks when they error out?