jaju opened 8 years ago
That shouldn't be possible; it looks like a bug. Does this happen often?
Yes, it does happen often.
Here's some detail on how I'm using DQ. All actors work concurrently. Before and after the following block there's only a single thread of execution, which pre-populates the first queue with tasks: [DQ::some-in-q] --multiple consumers--> [DQ::some-out-q] --single consumer drains to a file as tasks arrive.
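For reference, a minimal sketch of that topology with durable-queue (queue names, the directory, the file name, and the `process` placeholder are all made up for illustration; error handling omitted):

```clojure
(require '[durable-queue :refer [queues put! take! complete!]])
(require '[clojure.java.io :as io])

(def q (queues "/tmp/dq" {}))

(defn process [x] x) ; stand-in for the real per-task work

;; Single thread pre-populates the first queue.
(doseq [task (range 100)]
  (put! q :some-in-q task))

;; Multiple consumers move work from :some-in-q to :some-out-q.
(defn worker []
  (loop []
    (when-let [t (take! q :some-in-q 1000 nil)]
      (put! q :some-out-q (process @t))
      (complete! t)
      (recur))))

;; Single consumer drains :some-out-q to a file as tasks arrive.
(defn drainer []
  (with-open [w (io/writer "out.txt")]
    (loop []
      (when-let [t (take! q :some-out-q 1000 nil)]
        (.write w (str @t "\n"))
        (complete! t)
        (recur)))))
```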
Now that you say it's unexpected, I have a question: should put!s be protected with a lock? I've been having some trouble, so I introduced locking (around the interval-task-seq calls) and saw some improvement. I'm sorry, though, that I didn't do a thorough job of investigating and recording. I'm going to try locking around the put!s too and re-run a few times.
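For what it's worth, the locking experiment I have in mind looks like this (`dq-lock` is just a monitor object I introduce myself; it isn't part of the library, which is presumably meant to be thread-safe without it):

```clojure
(def dq-lock (Object.))

;; Serialize concurrent put!s onto the same queue (experiment only).
(locking dq-lock
  (put! q :some-in-q task))
```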
If you'd like me to try any specific steps, do let me know.
A few more observations.
Note: the data I'm processing is retrieved over the net (HTTP calls), and I store the raw responses in a separate file. For the garbled content, I cross-checked the HTTP responses, and the data in raw form appeared to be correct. That's my reference point.
Well, in theory locking shouldn't be necessary on your end. I'll take a look.
Could be related to https://github.com/Factual/durable-queue/issues/16, I observed the same things.
[Question] Here's one I see in my process: "out" {:enqueued 8332, :retried 0, :completed 8333, :in-progress -1}. Note that :completed exceeds :enqueued and :in-progress is negative.
I'm unsure how to handle this, as I depend on these values to decide how to progress (completion marking etc.)
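In case it helps, here's roughly how I consume the stats (a sketch; the `drained?` heuristic is my own, not something the library documents):

```clojure
(require '[durable-queue :refer [stats]])

(defn drained?
  "My heuristic for deciding a queue is done: everything enqueued has
  been completed and nothing is still in flight. With the numbers above
  (:completed 8333 > :enqueued 8332, :in-progress -1) this misfires."
  [qs qname]
  (let [{:keys [enqueued completed in-progress]} (get (stats qs) qname)]
    (and (= enqueued completed)
         (zero? in-progress))))
```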
Thanks!