saltstack / salt

Software to automate the management and configuration of any infrastructure or application at scale. Get access to the Salt software package repository here:
https://repo.saltproject.io/

[freebsd] error: [Errno 55] No buffer space available #23196

Closed: cedwards closed this issue 9 years ago

cedwards commented 9 years ago

I am trying to test the 2015.2.0 release (based on the current GitHub tag for that version). I have met the packaging requirements, but when I try to enable the RAET transport I get the following error:

{minion,master}.d/transport.conf:

transport: raet
Traceback (most recent call last):
  File "/usr/local/bin/salt-call", line 9, in <module>
    load_entry_point('salt==2015.2.0', 'console_scripts', 'salt-call')()
  File "/usr/local/lib/python2.7/site-packages/salt/scripts.py", line 227, in salt_call
    client.run()
  File "/usr/local/lib/python2.7/site-packages/salt/cli/call.py", line 59, in run
    caller = salt.cli.caller.Caller.factory(self.config)
  File "/usr/local/lib/python2.7/site-packages/salt/cli/caller.py", line 71, in factory
    return RAETCaller(opts, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/salt/cli/caller.py", line 282, in __init__
    self.stack = self._setup_caller_stack(opts)
  File "/usr/local/lib/python2.7/site-packages/salt/cli/caller.py", line 355, in _setup_caller_stack
    sockdirpath=sockdirpath)
  File "/usr/local/lib/python2.7/site-packages/raet/lane/stacking.py", line 77, in __init__
    **kwa)
  File "/usr/local/lib/python2.7/site-packages/raet/stacking.py", line 92, in __init__
    if not self.server.reopen():  # open socket
  File "/usr/local/lib/python2.7/site-packages/ioflo/base/nonblocking.py", line 495, in reopen
    return self.open()
  File "/usr/local/lib/python2.7/site-packages/ioflo/base/nonblocking.py", line 454, in open
    self.ss.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, self.bs)
  File "/usr/local/lib/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 55] No buffer space available
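
To put the traceback in context, the failure can be reproduced outside Salt. Below is a minimal Python sketch, assuming a FreeBSD host still at the default kern.ipc.maxsockbuf (about 2MB) and using the 6.5MB figure discussed later in the thread; the exact buffer size RAET requests is an assumption here:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    # Request a ~6.5MB send buffer, well above FreeBSD's default cap.
    # FreeBSD rejects this with ENOBUFS; Linux would silently clamp it.
    s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 6500 * 1024)
except socket.error as err:
    print(err)  # [Errno 55] No buffer space available
finally:
    s.close()
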
jfindlay commented 9 years ago

@cedwards, thanks for the report.

thatch45 commented 9 years ago

Is this on FreeBSD? Can you give me a versions report?

cedwards commented 9 years ago

Yes, this is FreeBSD 10.1. Below is the version report:

emma salt # salt --versions-report
           Salt: 2015.2.0
         Python: 2.7.9 (default, Apr  8 2015, 07:01:13)
         Jinja2: 2.7.3
       M2Crypto: 0.22
 msgpack-python: 0.4.2
   msgpack-pure: Not Installed
       pycrypto: 2.6.1
        libnacl: 1.4.0
         PyYAML: 3.11
          ioflo: 1.2.1
          PyZMQ: 14.5.0
           RAET: 0.6.3
            ZMQ: 4.0.5
           Mako: Not Installed
thatch45 commented 9 years ago

@DmitryKuzmenko can you please take care of this one?

DmitryKuzmenko commented 9 years ago

Got it.

DmitryKuzmenko commented 9 years ago

@cedwards I'll investigate this issue, but it will take some time. In the meantime you can try to work around it by increasing the OS UDP buffer size to 64MB as suggested in #16502. On FreeBSD this can be done with

sysctl -w kern.ipc.maxsockbuf=67108864

for an immediate change, and by adding the following line to /etc/sysctl.conf to make it persistent across reboots:

kern.ipc.maxsockbuf=67108864
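
To verify the new limit took effect, a quick check sketch (this assumes the sysctl(8) binary is on PATH; the snippet is illustrative, not part of Salt):

import subprocess

# Read the current cap; expect 67108864 after applying the workaround above.
value = int(subprocess.check_output(['sysctl', '-n', 'kern.ipc.maxsockbuf']))
print(value)
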
DmitryKuzmenko commented 9 years ago

The issue is the following: by default the maximum socket buffer size in FreeBSD is 2MB, but RAET uses 6.5MB for the LaneStack (UXD socket) in its default configuration. So there are two ways to resolve the issue:

1. Increase the OS maximum socket buffer size (the sysctl workaround above).
2. Decrease the amount of buffer space RAET requests.

For the second solution I've added the ability to change the RAET buffer size in the Salt config (master and minion). The following keys will be available to define it (default values are 100 for lane and 2 for road):

raet_lane_bufcnt: 20
raet_road_bufcnt: 2
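
A back-of-the-envelope sketch of why lowering the lane count helps; the ~64 KiB per-buffer size is inferred from the 6.5MB / 100-buffer figures above, not taken from the RAET source:

BUF_SIZE = 64 * 1024          # assumed per-buffer size (~64 KiB)
MAXSOCKBUF = 2 * 1024 * 1024  # FreeBSD's default kern.ipc.maxsockbuf

print(100 * BUF_SIZE <= MAXSOCKBUF)  # False: the 6.5MB default exceeds the cap
print(20 * BUF_SIZE <= MAXSOCKBUF)   # True: raet_lane_bufcnt: 20 fits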

Risk: decreasing the buffer count could produce more system calls on the sender side under high load, which would decrease performance.

rallytime commented 9 years ago

@cedwards Did the PR above resolve this issue for you?

rallytime commented 9 years ago

This looks resolved to me. Since we haven't heard back, I am going to close this. If the problem remains, certainly let us know and we will be happy to take another look! :)