caktus / django-project-template

Django project template for startproject (Requires 2.2+)

Can't deploy to a t2.micro instance #282

Open vkurup opened 7 years ago

vkurup commented 7 years ago

Attempting to deploy to a t2.micro instance fails because of out-of-memory errors during the npm install phase.

EC2 instance type: t2.micro (1 GB RAM)

Setting up the master and the minion works, but 'fab staging deploy' fails:

This can take a long time without output, be patient
[54.144.222.89] sudo: salt -G 'environment:staging' -linfo state.highstate 
[54.144.222.89] out: [ERROR   ] An un-handled exception was caught by salt's global exception handler:
[54.144.222.89] out: OSError: [Errno 12] Cannot allocate memory
[54.144.222.89] out: Traceback (most recent call last):
[54.144.222.89] out:   File "/usr/bin/salt", line 10, in <module>
[54.144.222.89] out:     salt_main()
[54.144.222.89] out:   File "/usr/lib/python2.7/dist-packages/salt/scripts.py", line 455, in salt_main
[54.144.222.89] out:     client.run()
[54.144.222.89] out:   File "/usr/lib/python2.7/dist-packages/salt/cli/salt.py", line 158, in run
[54.144.222.89] out:     for full_ret in cmd_func(**kwargs):
[54.144.222.89] out:   File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 638, in cmd_cli
[54.144.222.89] out:     **kwargs):
[54.144.222.89] out:   File "/usr/lib/python2.7/dist-packages/salt/client/__init__.py", line 1354, in get_cli_event_returns
[54.144.222.89] out:     connected_minions = salt.utils.minions.CkMinions(self.opts).connected_ids()
[54.144.222.89] out:   File "/usr/lib/python2.7/dist-packages/salt/utils/minions.py", line 584, in connected_ids
[54.144.222.89] out:     addrs.update(set(salt.utils.network.ip_addrs(include_loopback=include_localhost)))
[54.144.222.89] out:   File "/usr/lib/python2.7/dist-packages/salt/utils/network.py", line 970, in ip_addrs
[54.144.222.89] out:     return _ip_addrs(interface, include_loopback, interface_data, 'inet')
[54.144.222.89] out:   File "/usr/lib/python2.7/dist-packages/salt/utils/network.py", line 945, in _ip_addrs
[54.144.222.89] out:     else interfaces()
[54.144.222.89] out:   File "/usr/lib/python2.7/dist-packages/salt/utils/network.py", line 754, in interfaces
[54.144.222.89] out:     return linux_interfaces()
[54.144.222.89] out:   File "/usr/lib/python2.7/dist-packages/salt/utils/network.py", line 632, in linux_interfaces
[54.144.222.89] out:     stderr=subprocess.STDOUT).communicate()[0]
[54.144.222.89] out:   File "/usr/lib/python2.7/subprocess.py", line 711, in __init__
[54.144.222.89] out:     errread, errwrite)
[54.144.222.89] out:   File "/usr/lib/python2.7/subprocess.py", line 1235, in _execute_child
[54.144.222.89] out:     self.pid = os.fork()
[54.144.222.89] out: OSError: [Errno 12] Cannot allocate memory
[54.144.222.89] out: 

Warning: sudo() received nonzero return code 1 while executing 'salt -G 'environment:staging' -linfo state.highstate '!

The failure happens during the npm install, but salt is also using a lot of memory at that point:

(screenshot from 2016-12-14 15:44: memory usage at the time of the failure)

I'm not sure how best to approach this.
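A common workaround on memory-constrained instances (not discussed in the thread, so an assumption on my part) is to add a swap file so peak allocations during the build can spill to disk. A minimal sketch, run as root on the instance; the 1 GB size and the /swapfile path are arbitrary choices:

```shell
# Create and enable a 1 GB swap file (root required).
fallocate -l 1G /swapfile        # or: dd if=/dev/zero of=/swapfile bs=1M count=1024
chmod 600 /swapfile              # swap files must not be world-readable
mkswap /swapfile
swapon /swapfile
# Persist across reboots:
echo '/swapfile none swap sw 0 0' >> /etc/fstab
```

Swap will make the install slower when it is actually used, but it turns a hard OOM failure into a slowdown.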

dpoirier commented 7 years ago

I wonder if salt-master can be configured to be smaller, since we really don't need it to scale to thousands of clients in this case.
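One knob that does exist is `worker_threads` in the master config (default 5); lowering it trades request throughput for memory, which seems fine for a single-minion deploy. A sketch, assuming the stock config path and that 3 is enough for this setup:

```shell
# Shrink the Salt master's worker pool (default worker_threads: 5).
# Run as root; the value 3 is an assumption to tune, not a recommendation.
echo 'worker_threads: 3' >> /etc/salt/master
service salt-master restart
```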

vkurup commented 7 years ago

Good thought. I couldn't find any clear options for that in the master configuration docs: https://docs.saltstack.com/en/latest/ref/configuration/master.html

When I watched the processes in real time, npm kept using an increasing amount of resident memory. It does the same thing on my laptop. There are some suggestions in this thread: https://github.com/npm/npm/issues/9884

One suggestion is to have node limit npm's memory usage (I copied the parameters verbatim from that thread, so I don't know whether they're optimal):

node --max_semi_space_size=1 --max_old_space_size=198 --max_executable_size=148 /usr/bin/npm install

This limited npm's resident memory to about 300 MB (versus about 1,500 MB), but it slowed the install down by about a minute (from 1:40 to 2:40).
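If that trade-off is acceptable, the flags could be baked into a small wrapper script so every npm invocation during a deploy picks them up without changing the deploy states. A sketch, with the flag values copied from the command above; the script name and install location are my own choices:

```shell
# Create a wrapper that runs npm under node with V8 heap limits applied.
# Written to the current directory here; move it somewhere on PATH
# (e.g. /usr/local/bin) ahead of the real npm to use it in deploys.
cat > ./npm-lowmem <<'EOF'
#!/bin/sh
# Flag values taken from npm/npm#9884; tune for the instance size.
exec node --max_semi_space_size=1 --max_old_space_size=198 \
     --max_executable_size=148 /usr/bin/npm "$@"
EOF
chmod +x ./npm-lowmem
```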