Closed by cgc 3 years ago in 1feab13a784.
Looks like this probably isn't a sufficient fix; in some testing I found that small instances like Hobby / Standard-1x actually report 8 CPU cores, so `threads=auto` spins up 17 threads, per the logic in psiturk. This is only a problem because running 17 threads winds up using a fair amount of RAM, exceeding the ~500MB memory limit of those smaller instances.
The solution I wound up with involved subclassing the experiment server exported by psiturk to replace the use of `multiprocessing.cpu_count()` (which returns 8 on Hobby/1x) with `os.environ['WEB_CONCURRENCY']`, a Heroku-provided env var that is set to 2 for Hobby/1x. Here's the link to my code: https://github.com/cgc/cocosci-optdisco/blob/22f0c4cf63e/bin/herokuapp.py#L24-L31
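The core of that workaround can be sketched roughly as follows. This is an illustrative sketch, not the linked code: the helper name `worker_thread_count` is hypothetical, and the authoritative version lives in the `herokuapp.py` linked above.

```python
import multiprocessing
import os


def worker_thread_count():
    """Prefer Heroku's WEB_CONCURRENCY env var (set to 2 on Hobby/Standard-1x)
    over multiprocessing.cpu_count(), which reports the underlying host's
    8 cores rather than the dyno's actual capacity."""
    concurrency = os.environ.get("WEB_CONCURRENCY")
    if concurrency is not None:
        return int(concurrency)
    # Fallback for environments where WEB_CONCURRENCY is unset (e.g. local runs).
    return multiprocessing.cpu_count()
```

On a Hobby dyno this returns 2, which keeps the derived thread count well under the memory limit.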
Thanks for the update. As a hotfix I've hardcoded 2 threads in f9bc03d, which I think should run fine on any Heroku instance.
Our configuration currently has `threads=1`. Given my experiences running the server on Heroku, I think we should probably set this to `auto`.
Below is a plot of Dyno Load (see definitions for the plotted metrics here; load is defined as: "The load value indicates a runnable task (a process or thread) that is either currently running on a CPU or is waiting for a CPU to run on, but otherwise has all the resources it needs to run. The load value does not include tasks that are waiting on IO.")

On the left, you can see traffic from a small pilot (4 participants) on a Hobby dyno. On the right, you can see traffic from a larger pilot (160 participants) on 5 Standard-2x dynos, and later 9 Standard-1x dynos. Since the 1M load max never exceeds 50% (and in many cases doesn't exceed 33%), I think it's worth increasing the number of threads fairly substantially to make better use of compute resources.
Since `threads=auto` sets the worker count to `2 * CPUs + 1` (code here), this makes it a natural choice for the single- and dual-core cases (which would result in 3 and 5 threads respectively).
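The `auto` formula and the thread counts discussed above can be sketched as follows. The helper name `auto_thread_count` is hypothetical; the actual logic is in psiturk's experiment server code linked above.

```python
import multiprocessing


def auto_thread_count(cpu_count=None):
    """psiturk's threads=auto heuristic: 2 * CPUs + 1 worker threads."""
    if cpu_count is None:
        cpu_count = multiprocessing.cpu_count()
    return 2 * cpu_count + 1


# Single- and dual-core dynos get 3 and 5 threads respectively,
# while an instance reporting 8 cores gets the problematic 17.
print(auto_thread_count(1))  # 3
print(auto_thread_count(2))  # 5
print(auto_thread_count(8))  # 17
```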