ghost opened this issue 4 years ago
It's likely that the process is using too much of the available memory and not leaving enough for other purposes. The node sizes are documented here. If you'd like to switch to bigger nodes, there are directions for updating your subscription plan and creating a new node pool.
Could you confirm that if we upgrade the plan to $50 (which is 15GB), Shippable won't kill the process when we use --max-old-space-size=4096?
Since your jobs continue past that point with a lower --max-old-space-size setting, you can most likely set --max-old-space-size to 4GB on 15GB nodes. However, because we don't know the demands of your project, we can't say for certain that it will work; it may still fail elsewhere if your step requires more resources. While a step is running, you can check the node page to see how much of the available memory is in use, which gives you an idea of how much memory your steps actually need.
Description of your issue:
When we set --max-old-space-size to 3GB-4GB, the process gets killed.
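For reference, the flag is normally passed straight to the node binary, e.g. `node --max-old-space-size=4096 build.js`. A minimal sketch of a wrapper script that does the same thing, with `launch.ts` and `build.js` as illustrative names:

```typescript
// launch.ts (illustrative name) -- re-run the real build entry point with a
// larger V8 old-space limit, equivalent to
// `node --max-old-space-size=4096 build.js` on the command line.
import { spawnSync } from "child_process";

const result = spawnSync(
  process.execPath, // the node binary currently running this script
  ["--max-old-space-size=4096", "build.js"], // build.js is illustrative
  { stdio: "inherit" }
);

// Propagate the child's exit code so the CI step fails when the build fails.
process.exit(result.status ?? 1);
```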