Open FooBarWidget opened 10 years ago
From honglilai on May 24, 2013 13:14:39
Issue 880 has been merged into this issue.
From honglilai on May 24, 2013 13:15:12
Labels: -Priority-Medium Priority-High Milestone-4.0.4
From thoughtafter on May 24, 2013 13:18:30
I would think that min_instances should count disabled processes. They are not expected to be disabled for very long.
From honglilai on May 24, 2013 13:28:52
The enable/disable mechanism was originally designed for administrators who want to temporarily isolate processes for debugging or inspection. In that case you want the site to continue to operate normally, so it may be desirable to have Phusion Passenger spawn more processes.
On the other hand, a better approach for such a use case would probably be to count disabled processes as well in the min_processes calculation, and to require the administrator to temporarily increase max_pool_size.
In any case we will still want an option that allows us to limit the number of processes that can be disabled.
From thoughtafter on May 24, 2013 13:29:47
Or, probably better, would be to treat max_pool_size as absolute. So:
min_instances 3
max_pool_size 4
If there are 3 instances and 1 is disabled for OOB work, then you can spawn an additional instance to handle incoming requests. Now there are 4 instances. Let's say 1 is disabled: we're fine, because min and max are both satisfied. Now, if we have 2 disabled at the same time, we are below min_instances, but spawning a new instance would violate max_pool_size, so it shouldn't happen. Since max_pool_size is generally set so as not to overwhelm memory, it is more important than satisfying min_instances. Or at least that's what I want for my projects.
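The policy proposed above can be sketched as a spawn decision that treats max_pool_size as a hard cap on the total (enabled plus disabled) process count. This is a hypothetical illustration of the suggested policy, not Passenger's actual code; the function name and signature are made up:

```python
def should_spawn(enabled, disabled, min_instances, max_pool_size):
    """Decide whether to spawn a new process. Disabled processes count
    toward the pool total, but only enabled ones count toward
    min_instances; max_pool_size is an absolute limit."""
    total = enabled + disabled
    if total >= max_pool_size:
        return False  # never exceed max_pool_size, even below min_instances
    return enabled < min_instances

# Walking through the scenario above (min_instances 3, max_pool_size 4):
print(should_spawn(2, 1, 3, 4))  # 1 of 3 disabled -> spawn a 4th: True
print(should_spawn(2, 2, 3, 4))  # 2 of 4 disabled -> pool full: False
```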
From thoughtafter on May 24, 2013 13:34:31
Even if I were disabling a process for debugging, I wouldn't want to exceed max_pool_size. If I have max_pool_size at 4 and disable an instance, I don't want the memory overhead of a 5th instance. But again, that's me; other projects may have different needs. In any case, the behavior needs to be clear so settings can be chosen appropriately.
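For reference, the settings under discussion correspond to these Nginx configuration directives; the values here just mirror the example numbers above, so adjust them to your own deployment:

```nginx
http {
    # Hard cap on the total number of application processes in the pool.
    passenger_max_pool_size 4;
    # Shut down processes that have been idle longer than this many seconds.
    passenger_pool_idle_time 300;

    server {
        # Keep at least this many processes running for this application.
        passenger_min_instances 3;
    }
}
```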
From steve@caring.com on May 24, 2013 13:34:45
Yes, I think it's wise never to go over max_pool_size, since we'd hit physical memory constraints. A min_available_instances config option seems like a fair compromise if we want to ensure that there is one instance available to service requests. (Of course, the available instances could be stuck in GC as well, in which case presumably requests would just queue up, but this is no worse than having all instances disabled.)
From honglilai on May 24, 2013 14:35:19
Agreed w.r.t. never going beyond max_pool_size.
From honglilai on May 25, 2013 13:33:03
Alright, I've thought through the implications of the change. The following things need to be done, in the specified order:
a. min_instances must count disabling and disabled processes too.
b. There should be a queue of processes that have requested OOBW. At any event that may potentially allow one of those processes to be disabled, the queue should be processed as far as possible.
c. A new config option, min_enabled, must be introduced. It defaults to 1. A disable call will fail or be postponed indefinitely (depending on caller arguments) until there are at least min_enabled enabled processes.
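A minimal sketch of how point (c) could interact with point (b): a disable request is queued, and the queue is drained only while granting the next request would still leave at least min_enabled processes enabled. This is illustrative pseudologic under stated assumptions, not the actual Passenger implementation; the class and method names are invented:

```python
from collections import deque

class Pool:
    def __init__(self, enabled, min_enabled=1):
        self.enabled = enabled          # count of enabled processes
        self.oobw_queue = deque()       # processes waiting to be disabled (point b)
        self.min_enabled = min_enabled  # point c, defaults to 1

    def request_disable(self, process):
        """A process asks to be disabled for out-of-band work."""
        self.oobw_queue.append(process)
        return self.process_queue()

    def process_queue(self):
        """Disable queued processes while min_enabled stays satisfied;
        the rest remain queued (postponed)."""
        granted = []
        while self.oobw_queue and self.enabled - 1 >= self.min_enabled:
            granted.append(self.oobw_queue.popleft())
            self.enabled -= 1
        return granted

pool = Pool(enabled=3, min_enabled=1)
pool.request_disable("p1")  # granted: 2 remain enabled
pool.request_disable("p2")  # granted: 1 remains enabled
pool.request_disable("p3")  # postponed: would drop below min_enabled
```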
From honglilai on May 27, 2013 14:18:28
Labels: -Milestone-4.0.4 Milestone-4.0.5
From honglilai on May 29, 2013 07:10:47
Labels: -Milestone-4.0.5 Milestone-4.0.6
From honglilai on June 27, 2013 06:47:09
Labels: -Milestone-4.0.6 Milestone-4.0.7
From honglilai on July 05, 2013 03:23:50
Sorry for repeatedly moving the milestone. There were a number of high-profile bugs that needed to be resolved first. This issue is high on our todo list now.
Labels: -Milestone-4.0.7 Milestone-4.0.8
From bencrouse on August 15, 2013 09:21:08
Is this fix still pending? I'm seeing this problem on 4.0.10, and disabling OOB GC fixes it.
From honglilai on August 15, 2013 09:26:19
Yes it is pending.
Labels: -Milestone-4.0.8 Milestone-4.0.14
From honglilai on August 24, 2013 08:27:36
Current progress: 33%. With commit 6893718, disabling and disabled processes are counted as well when applying MaxPoolSize constraints.
Status: Started
From honglilai on August 24, 2013 08:30:05
Labels: -Milestone-4.0.14 Milestone-4.0.15
From honglilai on August 25, 2013 13:03:40
Current progress: 80%. With commit a91e31c, the number of processes that may concurrently run out-of-band work has been limited to 1. We still need Apache and Nginx config options.
From steve@caring.com on October 23, 2013 11:45:21
Thanks for working on this. Does this still look like it will make it into a near-term release?
From honglilai on October 23, 2013 12:21:22
It has already made it into a release. Limiting OOBW to 1 process at a time was introduced a few releases ago. The ability to allow multiple processes to run OOBW concurrently hasn't been introduced yet, but the current mechanism should already solve your problem.
From honglilai on May 23, 2013 16:45:31
The out-of-band work mechanism could result in suboptimal process spawning. For example, when the user has max_pool_size 3 and min_instances 3, and there are 3 processes, then triggering out-of-band work will result in 1 process being disabled and another being spawned in order to satisfy min_instances 3. However, when the out-of-band work is complete, the system has 4 processes. The 4th process doesn't go away until the pool idle time has passed. This is especially problematic if pool_idle_time is 0.
This problem becomes worse when out-of-band work is triggered by multiple processes at the same time. This can result in an arbitrary number of processes being spawned.
There should be a limit on the number of processes that can be disabled (reserved for out-of-band work) at the same time.
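The over-spawn described above follows directly from counting only enabled processes toward min_instances. A sketch of the arithmetic (purely illustrative, not the actual pool code; it also ignores the max_pool_size cap, since the reported behavior exceeded it anyway):

```python
def pool_size_after_oobw(min_instances, oobw_count, count_disabled):
    """Pool size once OOBW finishes, assuming the spawner tops the pool
    back up to min_instances while OOBW is running and no process has
    idled out yet (e.g. pool_idle_time 0 never shrinks the pool)."""
    enabled = min_instances - oobw_count   # processes still serving requests
    counted = min_instances if count_disabled else enabled
    spawned = max(0, min_instances - counted)
    return min_instances + spawned         # disabled ones re-enable afterwards

# Only enabled processes counted: 1 OOBW process leaves 4 total.
print(pool_size_after_oobw(3, 1, count_disabled=False))  # -> 4
# Two simultaneous OOBW processes make it worse: 5 total.
print(pool_size_after_oobw(3, 2, count_disabled=False))  # -> 5
# Counting disabled processes toward min_instances avoids the extra spawns.
print(pool_size_after_oobw(3, 1, count_disabled=True))   # -> 3
```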
Original issue: http://code.google.com/p/phusion-passenger/issues/detail?id=892