Closed sarhugo closed 11 years ago
Hm. Seems near impossible to me that nobody has ever run into this problem before. But it's definitely plausible.
Can you post a sequence of file system events to reproduce this problem?
What I believe happened is that I modified the filter settings for the server, and then it started transferring all the items again. My mistake was not clearing the database, so maybe that caused the inconsistency. I discovered the bug when I noticed that the arbitrator wasn't processing any files after running for a while. When the pipeline queue is full of these "already synced" files, the arbitrator falls into a kind of infinite loop (it can't add any items because the queue is at capacity, but it doesn't remove any either). I'm not sure whether there is a reason to keep these synced files in the pipeline, or whether it's safe to remove them.
Btw, excellent work
Hm, it's hard to help you if you cannot exactly define which steps you took to cause it in the first place :)
But it sounds like everything has been sorted out? Or do you still have an actual question?
I "solved" it, but I wanted to let you know about the possible problem (even if I'm the only one affected :( ). There were nearly 450 problematic items, so I increased MAX_FILES_IN_PIPELINE to 500, and now it processes 50 items at a time without a problem. If the number of these synced files increases, I'll let you know. Otherwise it seems to be working fine.
Ok. Let me know if you can one day reproduce this :)
Hi, I think there is a problem in process_filter_queue of arbitrator.py. If a file is not processed because it has already been synced to the server, the item is not removed from the pipeline queue. This prevents process_pipeline_queue from adding more items to the pipeline queue.
Increasing MAX_FILES_IN_PIPELINE to a very high value is the only workaround I've found.
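To illustrate what I mean, here is a minimal sketch of the failure mode and the fix. The class and method names mirror process_filter_queue / process_pipeline_queue from arbitrator.py, but the structure is hypothetical and greatly simplified, not the project's actual code:

```python
from collections import deque

class Arbitrator:
    """Toy model of the pipeline described above (hypothetical structure)."""

    def __init__(self, capacity):
        self.capacity = capacity     # stand-in for MAX_FILES_IN_PIPELINE
        self.pipeline = []           # items currently occupying pipeline slots
        self.filter_queue = deque()  # items waiting for the filter pass

    def process_pipeline_queue(self, incoming):
        # Admit new items only while the pipeline has free slots.
        # If slots are never freed, this loop stops admitting anything.
        while incoming and len(self.pipeline) < self.capacity:
            item = incoming.popleft()
            self.pipeline.append(item)
            self.filter_queue.append(item)

    def process_filter_queue(self, already_synced):
        while self.filter_queue:
            item = self.filter_queue.popleft()
            if item in already_synced:
                # The reported bug: the item was skipped here but left in
                # self.pipeline, so its slot was never freed and the
                # capacity check above deadlocked. Removing it is the fix.
                self.pipeline.remove(item)
                continue
            # ... otherwise hand the item to the transfer workers ...
```

With the removal in place, a batch of already-synced files larger than the capacity drains instead of wedging the pipeline:

```python
arb = Arbitrator(capacity=5)
incoming = deque(f"file{i}" for i in range(10))
synced = set(incoming)  # simulate: every file is already on the server
while incoming or arb.filter_queue:
    arb.process_pipeline_queue(incoming)
    arb.process_filter_queue(synced)
# pipeline ends up empty; without the fix it would stay stuck at 5 items
```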