saltstack / salt

Software to automate the management and configuration of any infrastructure or application at scale. Install Salt from the Salt package repositories here:
https://docs.saltproject.io/salt/install-guide/en/latest/
Apache License 2.0

[BUG] 3007.0 - Vulnerability scanning causes Salt Master denial of service #66519

Open clayoster opened 5 months ago

clayoster commented 5 months ago

Description

I have found that my Salt Master servers running 3007.0 become unresponsive on a weekly basis after our internal vulnerability scans run (Tenable Vulnerability Management). This is very similar to the issue described in #64061, which was fixed in versions 3005.2/3006.2 (CVE-2023-20897).

I took a packet capture while running a scan against the server and noticed that attempts to start TLS sessions on port 4506 are what trigger the errors below in /var/log/salt/master. The number of errors appears to equal the number of worker processes configured on the master. This only seems to occur when the scan probes TCP port 4506.

2024-05-13 16:47:48,935 [salt.transport.zeromq:572 ][ERROR   ][189978] Exception in request handler
Traceback (most recent call last):
  File "/opt/saltstack/salt/lib/python3.10/site-packages/salt/transport/zeromq.py", line 566, in request_handler
    request = await asyncio.wait_for(self._socket.recv(), 0.3)
  File "/opt/saltstack/salt/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
    return fut.result()
  File "/opt/saltstack/salt/lib/python3.10/site-packages/zmq/_future.py", line 598, in _handle_recv
    result = recv(**kwargs)
  File "zmq/backend/cython/socket.pyx", line 805, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 841, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 199, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/socket.pyx", line 194, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/checkrc.pxd", line 22, in zmq.backend.cython.checkrc._check_rc
zmq.error.Again: Resource temporarily unavailable
2024-05-13 16:47:49,963 [salt.transport.zeromq:572 ][ERROR   ][189979] Exception in request handler
Traceback (most recent call last):
  File "/opt/saltstack/salt/lib/python3.10/site-packages/salt/transport/zeromq.py", line 566, in request_handler
    request = await asyncio.wait_for(self._socket.recv(), 0.3)
  File "/opt/saltstack/salt/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
    return fut.result()
  File "/opt/saltstack/salt/lib/python3.10/site-packages/zmq/_future.py", line 598, in _handle_recv
    result = recv(**kwargs)
  File "zmq/backend/cython/socket.pyx", line 805, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 841, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 199, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/socket.pyx", line 194, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/checkrc.pxd", line 22, in zmq.backend.cython.checkrc._check_rc
zmq.error.Again: Resource temporarily unavailable
2024-05-13 16:47:49,963 [salt.transport.zeromq:572 ][ERROR   ][189988] Exception in request handler
Traceback (most recent call last):
  File "/opt/saltstack/salt/lib/python3.10/site-packages/salt/transport/zeromq.py", line 566, in request_handler
    request = await asyncio.wait_for(self._socket.recv(), 0.3)
  File "/opt/saltstack/salt/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
    return fut.result()
  File "/opt/saltstack/salt/lib/python3.10/site-packages/zmq/_future.py", line 598, in _handle_recv
    result = recv(**kwargs)
  File "zmq/backend/cython/socket.pyx", line 805, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 841, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 199, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/socket.pyx", line 194, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/checkrc.pxd", line 22, in zmq.backend.cython.checkrc._check_rc
zmq.error.Again: Resource temporarily unavailable
2024-05-13 16:47:50,474 [salt.transport.zeromq:572 ][ERROR   ][189989] Exception in request handler
Traceback (most recent call last):
  File "/opt/saltstack/salt/lib/python3.10/site-packages/salt/transport/zeromq.py", line 566, in request_handler
    request = await asyncio.wait_for(self._socket.recv(), 0.3)
  File "/opt/saltstack/salt/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
    return fut.result()
  File "/opt/saltstack/salt/lib/python3.10/site-packages/zmq/_future.py", line 598, in _handle_recv
    result = recv(**kwargs)
  File "zmq/backend/cython/socket.pyx", line 805, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 841, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 199, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/socket.pyx", line 194, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/checkrc.pxd", line 22, in zmq.backend.cython.checkrc._check_rc
zmq.error.Again: Resource temporarily unavailable
2024-05-13 16:47:50,730 [salt.transport.zeromq:572 ][ERROR   ][189981] Exception in request handler
Traceback (most recent call last):
  File "/opt/saltstack/salt/lib/python3.10/site-packages/salt/transport/zeromq.py", line 566, in request_handler
    request = await asyncio.wait_for(self._socket.recv(), 0.3)
  File "/opt/saltstack/salt/lib/python3.10/asyncio/tasks.py", line 445, in wait_for
    return fut.result()
  File "/opt/saltstack/salt/lib/python3.10/site-packages/zmq/_future.py", line 598, in _handle_recv
    result = recv(**kwargs)
  File "zmq/backend/cython/socket.pyx", line 805, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 841, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 199, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/socket.pyx", line 194, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/checkrc.pxd", line 22, in zmq.backend.cython.checkrc._check_rc
zmq.error.Again: Resource temporarily unavailable


Once these errors occur, the master service becomes completely unresponsive to minion requests. Attempting to issue commands from the affected Salt Master results in an error that the Master is not responding.

user@salt1:~$ sudo salt '*' test.ping
[ERROR   ] Request client send timedout
Salt request timed out. The master is not responding. You may need to run your command with `--async` in order to bypass the congested event bus. With `--async`, the CLI tool will print the job id (jid) and exit immediately without listening for responses. You can then use `salt-run jobs.lookup_jid` to look up the results of the job in the job cache later.


Restarting the salt-master service resolves the issue.

Setup

I am running a Master Cluster of 4 servers built on Oracle Linux 8.9 with Salt 3007.0.


Steps to Reproduce the behavior

Initiating a scan with Tenable against one of the master servers triggers this issue. Based on the similarity to #64061, I imagine a scan from Rapid7 InsightVM/Nexpose would also trigger the issue.

An easier way to reproduce the issue is to use openssl to open TLS connections to port 4506 in quick succession:

for i in {1..30}; do openssl s_client -connect salt.example.com:4506 -tls1_2 </dev/null; sleep .2; done

A restart of the salt-master service brings it back to life.

Expected behavior

The Salt Master service should not become unresponsive when port 4506 is probed by vulnerability scanners or receives other invalid requests.

Versions Report

salt --versions-report

```yaml
Salt Version:
  Salt: 3007.0

Python Version:
  Python: 3.10.13 (main, Feb 19 2024, 03:31:20) [GCC 11.2.0]

Dependency Versions:
  cffi: 1.16.0
  cherrypy: unknown
  dateutil: 2.8.2
  docker-py: Not Installed
  gitdb: Not Installed
  gitpython: Not Installed
  Jinja2: 3.1.3
  libgit2: 1.7.2
  looseversion: 1.3.0
  M2Crypto: Not Installed
  Mako: Not Installed
  msgpack: 1.0.7
  msgpack-pure: Not Installed
  mysql-python: Not Installed
  packaging: 23.1
  pycparser: 2.21
  pycrypto: Not Installed
  pycryptodome: 3.19.1
  pygit2: 1.14.1
  python-gnupg: 0.5.2
  PyYAML: 6.0.1
  PyZMQ: 25.1.2
  relenv: 0.15.1
  smmap: Not Installed
  timelib: 0.3.0
  Tornado: 6.3.3
  ZMQ: 4.3.4

Salt Package Information:
  Package Type: onedir

System Versions:
  dist: oracle 8.9
  locale: utf-8
  machine: x86_64
  release: 5.15.0-205.149.5.1.el8uek.x86_64
  system: Linux
  version: Oracle Linux Server 8.9
```

Additional context

This only seems to affect version 3007.0. I tested with versions 3005.5 and 3006.8, and they log the following messages when attempting to reproduce the issue but do not become unresponsive.

2024-05-13 19:02:56,152 [salt.payload     :111 ][CRITICAL][1358] Could not deserialize msgpack message. This often happens when trying to read a file not in binary mode. To see message payload, enable debug logging and retry. Exception: unpack(b) received extra data.
tjyang commented 5 months ago

@clayoster, thanks for the one-liner script that brings down the 3007 salt-master. I was able to confirm this on my test salt-master.

tjyang commented 5 months ago

I used the same one-liner script to bring down a 3007.1 salt-master; 3006.8 withstands this test. A salt-master restart restores the service.

for i in {1..30}; do openssl s_client -connect salt.example.com:4506 -tls1_2 </dev/null; sleep .2; done
tjyang commented 5 months ago

The one-liner test can still bring down port 4506 after the OS was upgraded from RockyLinux 8.9 to 8.10 (released a few days ago).

dynek commented 4 months ago

Is there a light at the end of the tunnel? I have the same issue, and it doesn’t take much effort to reproduce it.

tjyang commented 4 months ago

Is there a light at the end of the tunnel? I have the same issue, and it doesn’t take much effort to reproduce it.

Until this issue is fixed by the core dev team, which is short on manpower, I would suggest downgrading to the LTS 3006.8 release.

clayoster commented 4 months ago

A quick band-aid fix is to configure the local firewall on the master server(s) to drop incoming connections to TCP port 4506 from the IP addresses of the vulnerability scanning systems that trigger the issue. If you are running a master cluster behind a load balancer, the same configuration needs to be added to the load balancer as well. This has kept my master cluster stable for over a month now.
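A minimal sketch of such a rule, assuming firewalld (the default on Oracle Linux 8) and a placeholder scanner address of 192.0.2.10:

sudo firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.10" port port="4506" protocol="tcp" drop'
sudo firewall-cmd --reload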

@dwoz - Is there any chance a fix for this issue will be included in the next minor release? It allows a pretty simple denial of service attack, whether intentional or not.

dynek commented 4 months ago

@clayoster the thing is, on my side just a few machines connecting to their master reproduce this bug; there's not much effort required to reproduce it. Maybe I'm hitting a different bug 🤷 I rolled back to 3006.6-r0 (the version available on Alpine v3.19) for the moment.

clayoster commented 4 months ago

@dynek That sounds like a different issue to me. In my environment, normal minion/master communication does not cause any instability or error logs. All minions and masters are on 3007.1.

clayoster commented 2 months ago

I have noticed that the number of errors logged before the master becomes unresponsive seems to match the number of worker threads defined. This makes me think that each time one of these errors occurs, a worker thread becomes unresponsive. Once the number of errors equals the number of worker threads, the Salt Master becomes unresponsive.

I noticed there is a `break` after the logging code that generates the message I have been seeing. If I switch this to `continue` and try to reproduce the issue, errors are still logged, but the master process does not break and minion communication seems to continue successfully.

https://github.com/saltstack/salt/blob/246d0664577ef72da8bd1f0c4dff0d18b4428b23/salt/transport/zeromq.py#L571-L573

The larger issue, though, may be whatever is causing "Resource temporarily unavailable" and the except block to be reached in the first place.
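For illustration, here is a minimal sketch of the loop shape described above and the `break` → `continue` change. This is a paraphrase, not the verbatim Salt source; `socket` and `handle_message` are placeholders.

```python
# Minimal sketch, paraphrasing the request_handler loop referenced above.
# Not the verbatim Salt source; `socket` and `handle_message` are placeholders.
import asyncio
import logging

log = logging.getLogger(__name__)


async def request_handler(socket, handle_message):
    while True:
        try:
            # Garbage on the wire (e.g. an aborted TLS handshake from a scanner)
            # can make recv() raise zmq.error.Again rather than timing out.
            request = await asyncio.wait_for(socket.recv(), 0.3)
        except asyncio.TimeoutError:
            continue  # nothing received within 0.3s; poll again
        except Exception:  # pylint: disable=broad-except
            log.error("Exception in request handler", exc_info=True)
            # 3007.0 exits the loop here with `break`, permanently idling this
            # worker; the change tested above replaces it with `continue` so
            # the worker keeps servicing requests after logging the error.
            continue
        await handle_message(request)
```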

jsansone-pw commented 2 months ago

I believe we may be (or were) experiencing the same issue with 3007 - we also run Tenable vulnerability scanning software. Unfortunately I cannot test the fix now, as we rolled back to 3006.8 since 3007 was essentially broken for us. We are also replacing Tenable with Rapid7, so it would be interesting to determine whether this happens with any vulnerability scanner or is specific to Tenable.

henri9813 commented 2 months ago

In my case, my instance is public because I also manage public servers, so it is possible it has been scanned by a random bot.

clayoster commented 2 months ago

@jsansone-pw - I think there is a high likelihood that you would see the same issue. In a previous environment I managed, Rapid7 was in use and triggered #64061 / CVE-2023-20897, which was present in 3005.1/3006.1 and had the same cause and effect as this issue.