On issue #1311, we avoided assigning the larger EC2 instance's full CPU allocation to the app task. The number of workers is calculated from the CPU count, and we wanted increasing the instance size to give us the same number of workers with more memory each, rather than more workers with the same amount of memory each.
Unfortunately, it turns out os.cpus().length returns the number of CPUs assigned to the instance, not the allocation assigned to the task, so increasing the instance size still just increased the worker count, not the per-worker memory. If we want to increase the memory per worker, we'll need to change how NUM_WORKERS is calculated. My thought is that we should switch to using os.totalmem() and define a constant or an environment variable for how much memory to allocate to each worker. Then we could set the worker count to the highest number that still gives each worker at least that much RAM.
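A minimal sketch of what that could look like, assuming a hypothetical WORKER_MEMORY_MB environment variable (the name and the 2048 MB default are placeholders, not anything we've decided on):

```js
const os = require('os');

// Target RAM per worker; WORKER_MEMORY_MB and its default are placeholders.
const workerMemoryBytes =
  Number(process.env.WORKER_MEMORY_MB || 2048) * 1024 * 1024;

// Highest worker count that still leaves each worker at least
// workerMemoryBytes of total RAM, never dropping below one worker.
const NUM_WORKERS = Math.max(1, Math.floor(os.totalmem() / workerMemoryBytes));
```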