twitter-archive / kestrel

simple, distributed message queue system (inactive)
http://twitter.github.io/kestrel

Frequent journal rewrites with large numbers of open transactions #114

Open aniketschneider opened 11 years ago

aniketschneider commented 11 years ago

We are experiencing an issue where Kestrel performance starts to degrade when we allow too many open transactions on a queue. The degradation is accompanied by very high disk I/O, and it seems to occur primarily when NOT in read-behind mode. We are running Kestrel 2.2.0.

I believe what is happening is as follows:

  1. A large number of transactions are opened until the queue size hits 0, which triggers a journal rewrite because the journal is larger than defaultJournalSize (16MB in our case).
  2. Because of the large number of open transactions, the rewritten journal file is still larger than defaultJournalSize.
  3. After a single enqueue/dequeue, the journal rewrite is immediately triggered again (sketched below).
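
To make the loop above concrete, here is a minimal sketch of my understanding of the trigger. This is not Kestrel's actual code and every name in it is made up; it just shows why the rewritten journal never drops below the threshold while transactions stay open:

```scala
// Hypothetical model only -- not Kestrel source.
object RewriteLoopSketch {
  val defaultJournalSize: Long = 16L * 1024 * 1024 // 16MB, as in our config

  // Hypothesized trigger: queue drained to zero while the journal is oversized.
  def shouldRewrite(queueSize: Int, journalBytes: Long): Boolean =
    queueSize == 0 && journalBytes > defaultJournalSize

  // The rewritten journal still has to carry every open (unacked) transaction.
  def rewrittenSize(openTransactionBytes: Long): Long = openTransactionBytes

  def main(args: Array[String]): Unit = {
    // e.g. ~40k open transactions at ~1KB each stay pinned in the journal
    val openTransactionBytes = 40000L * 1024
    var journalBytes = rewrittenSize(openTransactionBytes)

    (1 to 3).foreach { cycle =>
      if (shouldRewrite(queueSize = 0, journalBytes = journalBytes)) {
        println(s"cycle $cycle: rewriting journal (${journalBytes >> 20}MB > 16MB)")
        journalBytes = rewrittenSize(openTransactionBytes) // no smaller than before
      }
    }
  }
}
```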

I have read about the bug fixed in 2.4.1 and I don't believe our setup falls under those criteria: our items are on the order of 0.5-1k at minimum, and we have a 2:1 ratio between maxJournalSize and maxMemorySize.
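
For context, the relevant queue settings in a Kestrel Scala config would look roughly like this (assuming the standard 2.x config format; the sizes shown are illustrative, not our exact values):

```scala
import com.twitter.conversions.storage._
import net.lag.kestrel.config._

new KestrelConfig {
  queuePath = "/var/spool/kestrel"

  // journal is rewritten once the queue drains, if it has grown past this
  default.defaultJournalSize = 16.megabytes

  // in-memory portion of a queue; beyond this the queue goes into read-behind
  default.maxMemorySize = 128.megabytes

  // 2:1 ratio against maxMemorySize, as described above
  default.maxJournalSize = 256.megabytes
}
```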

technoweenie commented 11 years ago

We're seeing similar issues. The logs look like this:

INF [20130508-11:42:49.780] kestrel: Rewriting journal file for 'booya' (qsize=0)
INF [20130508-11:42:50.375] kestrel: Rewriting journal file for 'booya' (qsize=0)
INF [20130508-11:42:51.004] kestrel: Rewriting journal file for 'booya' (qsize=0)
INF [20130508-11:42:51.448] kestrel: Rewriting journal file for 'booya' (qsize=0)

(with a lot more entries every second until the event is over)

Here's the open transactions from collectd:

[collectd graph: open transactions on the queue]

We do have 16 workers across 6 nodes. So I wonder if we're getting close to the open transaction limit.
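
If I'm remembering the knob correctly (this is an assumption on my part, not something I've verified against our configs), that limit is the maxOpenTransactions setting at the top level of the server config:

```scala
import net.lag.kestrel.config._

new KestrelConfig {
  // Assumption: maxOpenTransactions caps how many transactions a single client
  // can hold open at once; with 16 workers x 6 nodes = 96 clients each holding
  // reads open, the total pinned in the journal can still be large.
  maxOpenTransactions = 100
}
```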

The collectd graph for expired items is flat, so that doesn't seem to be an issue.

Ideas:

I'm just worried that increasing the journal size will make this problem occur less frequently, but last longer when it does.

EDIT: We're on Kestrel 2.4.1.

technoweenie commented 11 years ago

We fixed this for ourselves with two things: