r-simmer / simmer

Discrete-Event Simulation for R
https://r-simmer.org
GNU General Public License v2.0

Issues with preemption in one resource when the arrival is enqueued in another #206

Closed: Enchufa2 closed this issue 4 years ago

Enchufa2 commented 5 years ago

As @thigger noted in #202, an arrival may end up in two queues (potentially, many more) due to preemption.

For example, this crashes the session: the arrival is dequeued in a resource while enqueued in a previous one.

library(simmer)

lprio <- trajectory() %>%
  seize("one") %>%           # "one" seized
  seize("two") %>%           # enqueued in "two"
  timeout(10) %>%
  release_all()

hprio <- trajectory() %>%
  seize("one") %>%           # preempts lprio in "one"
  set_capacity("two", 1) %>% # dequeues lprio in "two" -> crash
  timeout(100) %>%
  release_all()

simmer(verbose=TRUE) %>%
  add_resource("one", 1, preemptive=TRUE) %>%
  add_resource("two", 0) %>%
  add_generator("lprio", lprio, at(0), priority=0) %>%
  add_generator("hprio", hprio, at(1), priority=1) %>%
  run()
Enchufa2 commented 5 years ago

The complementary issue, another crash: the arrival is preempted and rejected, but only from one queue.

library(simmer)

out <- trajectory() %>%
  log_("rejected")           # still enqueued in "two" -> crash

lprio <- trajectory() %>%
  handle_unfinished(out) %>%
  seize("one") %>%           # "one" seized
  seize("two") %>%           # enqueued in "two"
  timeout(10) %>%
  release_all()

hprio <- trajectory() %>%
  seize("one") %>%           # preempts and rejects lprio from "one"
  timeout(100) %>%
  release_all()

simmer(verbose=TRUE) %>%
  add_resource("one", 1, 0, preemptive=TRUE, queue_size_strict=TRUE) %>%
  add_resource("two", 0) %>%
  add_generator("lprio", lprio, at(0), priority=0) %>%
  add_generator("hprio", hprio, at(1), priority=1) %>%
  run()
Enchufa2 commented 5 years ago

The easiest solution (from the development point of view) would be: queueing somewhere makes the arrival temporarily non-preemptible elsewhere. But is this desirable? Thoughts, @thigger?

thigger commented 5 years ago

It depends on exactly what you mean by non-preemptible. In my use case it's usually reductions in capacity as a ward or bed is closed. I'm not sure it's desirable for an arrival to still have a resource seized after capacity is reduced to zero, for example.

My preference would probably be to use handle_unfinished if pre-empted from one whilst queueing for another, and to arrive there with neither resource queued nor seized.

In the long run, is it possible you might ever legitimately allow an arrival to queue for two resources simultaneously? (I appreciate this would start making trajectories more asynchronous - is this one of the purposes of #121?) My previous hospital model (not using r-simmer) was of an ICU and patients had to have both a physical bed and sufficient nurses to look after them - I coded it using look-ahead, but in r-simmer queuing for both simultaneously would be nice.

Enchufa2 commented 5 years ago

It depends on exactly what you mean by non-preemptible.

I mean that the arrival cannot be kicked out of the server once it has seized the resource, regardless of the mechanism involved (another higher-priority arrival or a capacity reduction).

In my use case it's usually reductions in capacity as a ward or bed is closed. I'm not sure it's desirable for an arrival to still have a resource seized after capacity is reduced to zero, for example.

That depends entirely on whether the resource allows preemption or not.

My preference would probably be to use handle_unfinished if pre-empted from one whilst queueing for another, and to arrive there with neither resource queued nor seized.

That's not possible with simmer's paradigm, because it would mean that the arrival executes two activities at the same time (superposition of states, quantum trajectories? :D). If the arrival jumps to a handler, then it's active and it must leave all queues. In other words, queueing means being stuck in a seize activity: the arrival must not execute any other activity while it holds a place in that queue. That constraint is violated in the examples above, hence the crashes.

In the long run, is it possible you might ever legitimately allow an arrival to queue for two resources simultaneously? (I appreciate this would start making trajectories more asynchronous - is this one of the purposes of #121?)

The primary goal of #121 is to support scenarios that are currently not possible or that require very artificial setups. Mainly, a server with multiple queues, but also different servers sharing a queue.

My previous hospital model (not using r-simmer) was of an ICU and patients had to have both a physical bed and sufficient nurses to look after them - I coded it using look-ahead, but in r-simmer queuing for both simultaneously would be nice.

You can do that already: just clone your patient and queue for everything you need. That's the natural way of executing parallel tasks with simmer. Otherwise, a trajectory is conceptually a "recipe" that executes one activity at a time.
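
For reference, a minimal sketch of that clone/synchronize pattern (the resource names, capacities and times below are purely illustrative, not from this thread). Note that, as discussed further down, each clone releases what it seized before reaching synchronize():

library(simmer)

patient <- trajectory() %>%
  clone(
    n = 2,
    trajectory() %>%             # one clone queues for a bed
      seize("bed") %>%
      timeout(5) %>%
      release("bed"),
    trajectory() %>%             # the other clone queues for a nurse
      seize("nurse") %>%
      timeout(5) %>%
      release("nurse")
  ) %>%
  synchronize(wait = TRUE) %>%   # merge the clones once all have finished
  log_("bed and nurse stages completed")

simmer() %>%
  add_resource("bed", 1) %>%
  add_resource("nurse", 1) %>%
  add_generator("patient", patient, at(0)) %>%
  run()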

thigger commented 5 years ago

That's not possible with simmer's paradigm, because it would mean that the arrival executes two activities at the same time (superposition of states, quantum trajectories? :D). If the arrival jumps to a handler, then it's active and it must leave all queues. In other words, queueing means being stuck in a seize activity: the arrival must not execute any other activity while it holds a place in that queue. That constraint is violated in the examples above, hence the crashes.

I'm suggesting that when it's pre-empted from "one" it should jump to the handler (handle_unfinished) and leave all queues - is this not possible?

So in your examples - the first would result in the lprio simply being dropped at the point hprio seizes "one" (as handle_unfinished isn't set - if it were, lprio would go to the handle_unfinished trajectory with nothing seized and in no queues), and the second would result in lprio arriving in the "out" trajectory with nothing seized and in no queues.

For what it's worth, my cunning plan (hack?) with wards is to define the priority range for queueing so that each ward has a unique priority that allows queueing for that ward. My arrival can then have the priority required to queue for the ward it wants, without accidentally being able to queue in any other when pre-empted.
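
For context, a rough sketch of that priority-range trick via add_resource()'s queue_priority argument (the ward names, capacities and priority levels below are invented for illustration): each ward's queue only admits its own priority level, so an arrival preempted elsewhere cannot end up queueing in the wrong ward.

library(simmer)

ward_a_stay <- trajectory() %>%
  seize("ward_a") %>%
  timeout(10) %>%
  release("ward_a")

simmer() %>%
  # only priority-1 arrivals may queue for ward_a, only priority-2 for ward_b
  add_resource("ward_a", capacity = 2, queue_priority = c(1, 1)) %>%
  add_resource("ward_b", capacity = 2, queue_priority = c(2, 2)) %>%
  add_generator("patient_a", ward_a_stay, at(0, 1, 2), priority = 1) %>%
  run()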

You can do that already: just clone your patient and queue for everything you need. That's the natural way of executing parallel tasks with simmer.

I did have a fiddle with that - is there any way of the clones collecting seized resources together when they synchronise? At the moment the resources seem to disappear - there's no warning generated about leaving without releasing, but they're not available to other arrivals.

Enchufa2 commented 5 years ago

I'm suggesting that when it's pre-empted from "one" it should jump to the handler (handle_unfinished) and leave all queues - is this not possible?

Ah, ok, I read too fast again. Yes, that's the logical solution if preemption is permitted.

So in your examples - the first would result in the lprio simply being dropped at the point hprio seizes "one" (as handle_unfinished isn't set - if it were, lprio would go to the handle_unfinished trajectory with nothing seized and in no queues),

Why should lprio be dropped from one if there's room in one's queue? My line of thought was that lprio should go to one's queue and block two's queue until one is regained. I.e., no arrival should be served from two when hprio increases its capacity.

and the second would result in lprio arriving in the "out" trajectory with nothing seized and in no queues.

Correct.

For what it's worth, my cunning plan (hack?) with wards is to define the priority range for queueing so that each ward has a unique priority that allows queueing for that ward. My arrival can then have the priority required to queue for the ward it wants, without accidentally being able to queue in any other when pre-empted.

Sounds reasonable.

You can do that already: just clone your patient and queue for everything you need. That's the natural way of executing parallel tasks with simmer.

I did have a fiddle with that - is there any way of the clones collecting seized resources together when they synchronise? At the moment the resources seem to disappear - there's no warning generated about leaving without releasing, but they're not available to other arrivals.

I see you saw #207. :) You can take a look at simmer.bricks::do_parallel's code and documentation to see how I managed to ensure that the original arrival is the one that goes through synchronize(). It's tricky, but possible. Still, it means that clones can seize resources, but they have to release them before reaching synchronize(). #207 would relax this requirement to enable more advanced use cases.

thigger commented 5 years ago

I'm suggesting that when it's pre-empted from "one" it should jump to the handler (handle_unfinished) and leave all queues - is this not possible?

Ah, ok, I read too fast again. Yes, that's the logical solution if preemption is permitted.

So in your examples - the first would result in the lprio simply being dropped at the point hprio seizes "one" (as handle_unfinished isn't set - if it were, lprio would go to the handle_unfinished trajectory with nothing seized and in no queues),

Why should lprio be dropped from one if there's room in one's queue? My line of thought was that lprio should go to one's queue and block two's queue until one is regained. I.e., no arrival should be served from two when hprio increases its capacity.

In my own use case I'd have the queue turned off (or the arrival set to a priority that prevents queueing) so I'll end up in handle_unfinished whichever option you choose. However, in general it feels odd to me to allow the serving of resource "two" to even temporarily depend on something going on with resource "one". What if there are other arrivals that only depend on resource "two" that now can't be served until lprio re-seizes "one"? Entering handle_unfinished does seem like a bit of an "anything goes" step, but at least hands control back to the user to determine how they'd like to proceed.

Enchufa2 commented 5 years ago

Entering handle_unfinished means that the arrival is dropped from all the queues. It may be fine for your use case, but it will prevent other use cases. However, holding its position in the queue until it is served again by the previous resource allows these other use cases, and you can achieve yours by tweaking priority levels (if the arrival cannot enter the queue, it will be rejected, and then we have case 1 again, where the arrival enters handle_unfinished).

thigger commented 5 years ago

Entering handle_unfinished means that the arrival is dropped from all the queues. It may be fine for your use case, but it will prevent other use cases.

As I said, for my own case it doesn't matter which you choose - I just don't like the idea of an arrival that only wants resource "two" having to wait until lprio re-seizes "one" before it can be served. Even if hprio had set two's capacity to infinity, then presumably everything waiting for "two" would be blocked until there was space in "one".

I guess I'm trying to say that I think resources should be as independent as possible. Though for my own case it really doesn't matter either way so long as the crash is eliminated.

thigger commented 5 years ago

Is there any way lprio could exit the "two" queue, join the "one" queue, and then, after seizing "one", rejoin the "two" queue? That way "two" could continue to serve arrivals while lprio queues for "one". (This may involve the development of a queue for queues!)

Enchufa2 commented 4 years ago

Released v4.4.0 on CRAN with these fixes.