YahooArchive / oozie

Oozie - workflow engine for Hadoop
http://yahoo.github.com/oozie/
Apache License 2.0

Redesign oozie internal queue #561

Open mislam77-zz opened 13 years ago

mislam77-zz commented 13 years ago

We have had a lot of issues related to the Oozie internal queue, including queue overflow and the re-queuing of the same heavily used commands to avoid starvation. There are other situations too. These problems become very obvious under very high load.

I would like to open up the discussion to find a better long-term architectural design that can handle very high-load situations.

The following proposals, which range from a complete overhaul to adjustments of the current design, are meant to initiate the discussion:

  1. Move the queue into the DB. Pros: persistence; usefulness in a hot-hot or load-balancing setup; a single source of truth; any ordering policy can be expressed as needed through SQL; no need to worry about queue size; no need to rebuild the queue on every restart -- the recovery service might be less busy. (A minimal sketch of this idea appears after this list.)

    Cons: extra DB access overhead.

    A middle approach could be to keep a memory cache with strict conditions. The details could be discussed later.

  2. Redesign the re-queuing of the same command (the mechanism used for throttling). Make sure the command is re-queued in the same place -- not at the end of the queue. I know this breaks the queue semantics, so we might need a different data structure.

    Currently, queuing the same command at the end creates a starvation (livelock-like) situation.

  3. Multiple queues: one dedicated to coordinator input checks, which account for 99% of the usage.
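To make proposal 1 concrete, here is a minimal sketch of what a DB-backed queue could look like. It is only a sketch under assumptions: the OOZIE_QUEUE table, its columns, and the class names are hypothetical, not an existing Oozie schema.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class DbQueueSketch {

    private final Connection conn;

    public DbQueueSketch(Connection conn) {
        this.conn = conn;
    }

    // Enqueue: persistence comes for free and there is no in-memory size limit.
    public void offer(String commandId, String type, int priority) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO OOZIE_QUEUE (COMMAND_ID, CMD_TYPE, PRIORITY, CREATED) "
            + "VALUES (?, ?, ?, CURRENT_TIMESTAMP)");
        try {
            ps.setString(1, commandId);
            ps.setString(2, type);
            ps.setInt(3, priority);
            ps.executeUpdate();
        } finally {
            ps.close();
        }
    }

    // Dequeue: any ordering policy is just an ORDER BY clause. A real
    // implementation would lock the selected row (e.g. SELECT ... FOR UPDATE)
    // and delete it so that concurrent pollers do not pick the same command.
    public String peekNext() throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT COMMAND_ID FROM OOZIE_QUEUE "
            + "ORDER BY PRIORITY DESC, CREATED ASC");
        try {
            ResultSet rs = ps.executeQuery();
            return rs.next() ? rs.getString(1) : null;
        } finally {
            ps.close();
        }
    }
}
```

With this layout, "different levels of ordering" is just a different ORDER BY clause, and restart recovery is a no-op because the queue never leaves the database.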

Comments?

Regards, Mohammad

tucu00 commented 13 years ago

A few points:

My take is that we should fix unique command queuing; that will solve most, if not all, of the issues.

mislam77-zz commented 13 years ago

Queue uniqueness is already implemented. It certainly reduces the occurrence of the problem but, as you mentioned, it didn't eliminate it.

As part of concurrency control, we re-queue the same command at the head of the queue with a 500ms delay. In a highly loaded system, the same commands can be re-queued over and over, causing a livelock-like situation. Consider an example with nearly 10K unique coordinator input checks and a maximum concurrency of 40: after the first 40 start, all the rest keep getting re-queued until one of the running commands finishes. This situation continues for some time.
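To make the livelock mechanism concrete, here is a minimal sketch of the throttling pattern described above (the names and structure are illustrative, not the actual Oozie code). With ~10,000 queued input checks and a cap of 40, the other ~9,960 commands do nothing but cycle through the re-queue branch:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottleSketch {

    static final int MAX_CONCURRENCY = 40; // per-command-type concurrency cap

    private final Queue<String> queue = new ConcurrentLinkedQueue<String>();
    private final AtomicInteger running = new AtomicInteger();

    // Called by the dispatcher for each queued command ID.
    void dispatchOne() {
        String cmd = queue.poll();
        if (cmd == null) {
            return;
        }
        if (running.get() >= MAX_CONCURRENCY) {
            // Throttled: put the command back (in Oozie, with a ~500ms delay).
            // Under high load this branch dominates: the same ~9,960 commands
            // keep circulating until one of the 40 running commands finishes.
            queue.offer(cmd);
        } else {
            running.incrementAndGet();
            // execute cmd asynchronously; call running.decrementAndGet() when done
        }
    }
}
```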

A similar situation has caused big trouble in production.

tucu00 commented 13 years ago

Well, then the solution would be to use a separate queue that exclusively services coordinator input checks. In that case the thread pool would be the only throttling mechanism and no concurrency re-queueing would happen.
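A minimal sketch of that idea, assuming illustrative class names rather than actual Oozie code: coordinator input checks get their own fixed-size pool, so the pool size itself is the throttle and throttled commands simply wait in the pool's work queue instead of being re-queued.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class SplitQueues {

    // Queue 1: coordinator input checks. The pool size *is* the concurrency
    // limit, so no re-queueing is needed to enforce it.
    private final ExecutorService inputCheckPool = Executors.newFixedThreadPool(40);

    // Queue 2: all other commands.
    private final ExecutorService generalPool = Executors.newFixedThreadPool(10);

    public void queue(Runnable command, boolean isCoordInputCheck) {
        // Each fixed pool has its own unbounded work queue; commands beyond
        // the pool size simply wait there instead of being polled and re-queued.
        if (isCoordInputCheck) {
            inputCheckPool.execute(command);
        } else {
            generalPool.execute(command);
        }
    }
}
```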

mislam77-zz commented 13 years ago

So there would be two queues: one for coordinator input checks (queue 1) and another for the rest of the commands (queue 2). In this approach there are some open questions.

Can we also discuss the other approach, using a queue in the DB?

If we want to implement a hot-hot or load-balancing system (a possible future direction), I think the DB approach would help. With the current approach, the same queue would have to be created in both systems (although both might not process the same commands), resulting in the unnecessary overhead of keeping the same elements in both queues.

tucu00 commented 13 years ago

It seems to me that the re-queueing logic is not correct: it should not alter the order, but just ignore the duplicate queueing, leaving the original element in its existing place in the queue.

A default thread pool size of 120 is a bit too high for a default value; that should be a site configuration value. The optimum size of the thread pool is determined by the load on your system and the hardware/OS resources you have.
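For reference, Oozie exposes the CallableQueueService thread pool size as a configuration property, so it can already be overridden per site in oozie-site.xml (the value below is illustrative, not a recommendation):

```xml
<!-- oozie-site.xml: override the internal queue's thread pool size -->
<property>
    <name>oozie.service.CallableQueueService.threads</name>
    <value>40</value>
</property>
```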

IMO, a database would be overkill. I would not replace the existing in-memory solution with a DB solution; rather, I'd leverage the fact that services are pluggable and offer a DB solution as well. Still, I'd suggest you test your current load with a DB solution.

Regarding the comment that the DB approach would be good for a hot-hot solution: load distribution for an in-memory solution could easily be handled by having each instance process only the IDs that satisfy JOBID MOD ${LIVE_OOZIE_INSTANCES} == ${OOZIE_INSTANCE_ID}, where the number of live instances and the instance ID would be dynamically generated/stored in ZooKeeper (which would be needed anyway to provide distributed lock support).
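A sketch of that partitioning rule (class and field names are illustrative; the live-instance count and instance ID would be maintained in ZooKeeper, as described):

```java
public class JobPartitioner {

    private volatile int liveInstances; // ${LIVE_OOZIE_INSTANCES}, kept up to date from ZooKeeper
    private volatile int instanceId;    // ${OOZIE_INSTANCE_ID}, assigned via ZooKeeper

    public JobPartitioner(int liveInstances, int instanceId) {
        this.liveInstances = liveInstances;
        this.instanceId = instanceId;
    }

    // JOBID MOD ${LIVE_OOZIE_INSTANCES} == ${OOZIE_INSTANCE_ID}
    public boolean ownedByThisInstance(long numericJobId) {
        return numericJobId % liveInstances == instanceId;
    }
}
```

Each instance would queue and process only the jobs it owns, so no queue entries are duplicated across instances.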

mislam77-zz commented 13 years ago

How could we ensure the re-queuing will not disturb the ordering?

tucu00 commented 13 years ago

You'd have a UniqueQueue implementation that keeps a Set of element IDs.

The add/offer methods of the UniqueQueue first check whether the element is in the ID set; if it is, the add/offer is a NOP, and if it is not, they add the element to both the queue and the ID set. The poll/take/remove operations have to remove the element from the ID set as well. All of this has to be done with the proper level of synchronization/locking to avoid race conditions.
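A minimal sketch of such a UniqueQueue, assuming illustrative names rather than the actual Oozie implementation:

```java
import java.util.HashSet;
import java.util.LinkedList;
import java.util.Queue;
import java.util.Set;

public class UniqueQueue<E> {

    // How to derive the unique ID of an element (e.g. the command's job ID).
    public interface IdExtractor<E> {
        String idOf(E element);
    }

    private final Queue<E> queue = new LinkedList<E>();
    private final Set<String> ids = new HashSet<String>();
    private final IdExtractor<E> extractor;

    public UniqueQueue(IdExtractor<E> extractor) {
        this.extractor = extractor;
    }

    // A duplicate offer is a NOP, so the original element keeps its place.
    public synchronized boolean offer(E element) {
        String id = extractor.idOf(element);
        if (ids.contains(id)) {
            return false; // already queued: ignore, do not move it
        }
        ids.add(id);
        return queue.offer(element);
    }

    // Removing an element frees its ID so it can be queued again later.
    public synchronized E poll() {
        E element = queue.poll();
        if (element != null) {
            ids.remove(extractor.idOf(element));
        }
        return element;
    }
}
```

Because a duplicate offer is ignored rather than appended, the original element keeps its position and the ordering concern above goes away.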