Closed by 0x7fff9, 7 years ago
Looking further, the following is present:
```
Dec 11 15:40:37 0x001 kernel: [ 6.627089] audit: type=1400 audit(1481470828.686:8): apparmor="STATUS" operation="profile_replace" profile="unconfined" name="docker-default" pid=807 comm="apparmor_parser"
Dec 11 15:40:37 0x001 kernel: [ 6.634339] aufs 4.x-rcN-20160111
Dec 11 15:40:37 0x001 kernel: [ 6.672762] random: nonblocking pool is initialized
Dec 11 15:40:37 0x001 kernel: [ 6.709152] bridge: automatic filtering via arp/ip/ip6tables has been deprecated. Update your scripts to load br_netfilter if you need this.
Dec 11 15:40:37 0x001 kernel: [ 6.710393] Bridge firewalling registered
Dec 11 15:40:37 0x001 kernel: [ 6.714215] nf_conntrack version 0.5.0 (65536 buckets, 262144 max)
Dec 11 15:40:37 0x001 kernel: [ 6.724117] ip_tables: (C) 2000-2006 Netfilter Core Team
Dec 11 15:40:37 0x001 kernel: [ 6.748646] Initializing XFRM netlink socket
Dec 11 15:40:37 0x001 kernel: [ 6.769386] IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
Dec 11 15:40:37 0x001 kernel: [ 7.036446] aufs au_opts_verify:1597:dockerd[818]: dirperm1 breaks the protection by the permission bits on the lower branch
Dec 11 15:40:37 0x001 kernel: [ 7.049334] device eno1 entered promiscuous mode
Dec 11 15:40:37 0x001 kernel: [ 7.071782] aufs au_opts_verify:1597:dockerd[818]: dirperm1 breaks the protection by the permission bits on the lower branch
```
and in the same minute:
```
Dec 11 15:40:42 0x001 docker[1318]: 2016-12-11 15:40:42,726 INFO exited: ewsposter (exit status 0; expected)
Dec 11 15:40:43 0x001 docker[1318]: 2016-12-11 15:40:43,728 INFO spawned: 'ewsposter' with pid 39
Dec 11 15:40:44 0x001 docker[1318]: 2016-12-11 15:40:44,730 INFO success: ewsposter entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
```
and it continues...
Only the containers invoking ewsposter show this constant-restart behavior.
```
running honeytrap dtagdevsec/honeytrap:latest1610 - -
running glastopf dtagdevsec/glastopf:latest1610 172.17.0.4 80:80
running emobility dtagdevsec/emobility:latest1610 172.17.0.9 8080:8080
running dionaea dtagdevsec/dionaea:latest1610 172.17.0.6 5061:5061 3306:3306 21:21 11211:11211 5060:5060 1900:1900 1433:1433 8081:80 445:445 135:135 5060:5060 1883:1883 1723:1723 443:443 69:69 42:42
```
All the others keep running really stable. Have you ever experienced this before?
Thanks! Cheers.
Never seen it before. From what you are describing, my best guess is that ewsposter cannot access its config file.
Please post a `sudo ls -al /data/ews -R`
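To rule that out quickly, here is a minimal sketch for checking whether a file is readable by the service user (the tpot user and the /data/ews paths are taken from this thread; adjust for your install):

```shell
#!/bin/bash
# Sketch: report whether a config file is readable. Run it as the service
# user to mimic ewsposter, e.g.: sudo -u tpot bash check_cfg.sh
check_readable() {
  if test -r "$1"; then
    echo "readable: $1"
  else
    echo "NOT readable: $1"
    return 1
  fi
}

# Default path from this thread; pass another path as the first argument.
check_readable "${1:-/data/ews/conf/ews.cfg}" || echo "check ownership with: ls -al /data/ews/conf"
```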
```
[root@0x001:~]# ls -al /data/ews -R
/data/ews:
total 24
drwxrw---- 6 tpot tpot 4096 Dec 8 23:54 .
drwxrw---- 15 tpot tpot 4096 Dec 8 23:55 ..
drwxrw---- 2 tpot tpot 4096 Dec 11 16:52 conf
drwxrw---- 2 tpot tpot 4096 Dec 8 23:54 dionaea
drwxrw---- 2 tpot tpot 4096 Dec 9 23:28 emobility
drwxrw---- 2 tpot tpot 4096 Dec 8 23:54 log

/data/ews/conf:
total 16
drwxrw---- 2 tpot tpot 4096 Dec 11 16:52 .
drwxrw---- 6 tpot tpot 4096 Dec 8 23:54 ..
-rwxrw---- 1 tpot tpot 1469 Dec 11 16:52 ews.cfg
-rw-r--r-- 1 tpot tpot 13 Dec 11 17:32 ews.ip

/data/ews/dionaea:
total 8
drwxrw---- 2 tpot tpot 4096 Dec 8 23:54 .
drwxrw---- 6 tpot tpot 4096 Dec 8 23:54 ..

/data/ews/emobility:
total 8
drwxrw---- 2 tpot tpot 4096 Dec 9 23:28 .
drwxrw---- 6 tpot tpot 4096 Dec 8 23:54 ..

/data/ews/log:
total 8
drwxrw---- 2 tpot tpot 4096 Dec 8 23:54 .
drwxrw---- 6 tpot tpot 4096 Dec 8 23:54 ..
```
LGTM. Did you edit `ews.cfg`?
Yep. After I began experiencing this, I set ews from true to false and rebooted. Same behavior. Set it back to true, rebooted.
```ini
[EWS]
ews = true
```
Can I do something with this process/config? In `ps -efw | grep ewsposter` I see this:
```
root 3693 2249 0 21:04 ? 00:00:00 bash -c sleep 10 && exec /usr/bin/python /opt/ewsposter/ews.py -c /data/ews/conf/ -m kippo -l 60
```
and `/opt/ewsposter/ews.py` is only found under the container directories, like:
```
[root@0x001:~]# find / -name ews.py
/var/lib/docker/aufs/mnt/cf5867863820fce84f62665ca375955886ac5610f8db9fd65d2cb679c7e4f73a/opt/ewsposter/ews.py
/var/lib/docker/aufs/mnt/ef4db90cf45a161a67d312a7b8b111c9813e495a4fc5cb72cf4d7dfe70bd23b4/opt/ewsposter/ews.py
/var/lib/docker/aufs/mnt/85b838bec75b11dfae5a4b76abdfad2e7f3e0587acec8269d2ec5517ce6ecd8b/opt/ewsposter/ews.py
/var/lib/docker/aufs/mnt/87728cb94c3cae50a113561982553f2bd081dcb53601c80a44e5778df56bed92/opt/ewsposter/ews.py
/var/lib/docker/aufs/mnt/580c02f1e1128a4cc29922b9ca6c21797c625a6c9acf133a78c1a6b06252969d/opt/ewsposter/ews.py
/var/lib/docker/aufs/diff/d4a498e5183bc65239bc83371c5cbb97a900039639a146398828fafdd52e58c3/opt/ewsposter/ews.py
/var/lib/docker/aufs/diff/483d75895f5d53a3822a9a6277fb3ec4010e55fc61379a66739662044e2510ec/opt/ewsposter/ews.py
/var/lib/docker/aufs/diff/72a183596bd40ca515b7844e7b6ea9634eacf5c2c3ec54c8ef243e3a527d0830/opt/ewsposter/ews.py
/var/lib/docker/aufs/diff/091ce1fe5e4573bddeae4ba8e22be584d2e7f42619b7a42a14c284bed9bb384d/opt/ewsposter/ews.py
/var/lib/docker/aufs/diff/01767a05a8d26e53f2bd0a4c10c28bc30a90a5d8b95c7ee4ca61bf3121baa672/opt/ewsposter/ews.py
```
Considering the output of ps, do I need a script in /opt/ewsposter? /opt is empty.
```
[root@0x001:~]# ls -lsah /opt
total 8.0K
4.0K drwxr-xr-x 2 root root 4.0K Dec 11 17:33 .
4.0K drwxr-xr-x 23 root root 4.0K Dec 8 23:54 ..
```
Is this expected?
Cheers.
I copied status.sh and:
```
[root@0x001:/home/gg]# ./status.sh
======| System |======
Date: Sun Dec 11 21:11:36 UTC 2016
Uptime: 21:11:36 up 3:39, 1 user, load average: 0.23, 0.24, 0.28
CPU temp: +32.0°C
======| Container: conpot |======
conpot RUNNING pid 9, uptime 3:39:27
======| Container: cowrie |======
cowrie RUNNING pid 9, uptime 3:39:27
ewsposter RUNNING pid 4811, uptime 0:00:06
mysqld RUNNING pid 10, uptime 3:39:27
======| Container: dionaea |======
dionaea RUNNING pid 10, uptime 0:01:28
ewsposter FATAL Exited too quickly (process log may have details)
======| Container: elasticpot |======
elasticpotpy RUNNING pid 9, uptime 3:39:28
======| Container: elk |======
elasticsearch RUNNING pid 9, uptime 3:39:28
kibana RUNNING pid 11, uptime 3:39:28
logstash RUNNING pid 10, uptime 3:39:28
======| Container: emobility |======
centralsystem RUNNING pid 16, uptime 0:01:19
chargepoint1 RUNNING pid 33, uptime 0:01:19
chargepoint10 RUNNING pid 10, uptime 0:01:19
chargepoint11 RUNNING pid 11, uptime 0:01:19
chargepoint2 RUNNING pid 29, uptime 0:01:19
chargepoint3 RUNNING pid 31, uptime 0:01:19
chargepoint4 RUNNING pid 24, uptime 0:01:19
chargepoint5 RUNNING pid 25, uptime 0:01:19
chargepoint6 RUNNING pid 21, uptime 0:01:19
chargepoint7 RUNNING pid 22, uptime 0:01:19
chargepoint8 RUNNING pid 19, uptime 0:01:19
chargepoint9 RUNNING pid 20, uptime 0:01:19
cron RUNNING pid 13, uptime 0:01:19
ewsposter FATAL Exited too quickly (process log may have details)
logmanager RUNNING pid 12, uptime 0:01:19
mysqld RUNNING pid 14, uptime 0:01:19
======| Container: glastopf |======
ewsposter FATAL Exited too quickly (process log may have details)
glastopf RUNNING pid 17, uptime 0:01:11
======| Container: honeytrap |======
ewsposter FATAL Exited too quickly (process log may have details)
honeytrap RUNNING pid 12, uptime 0:01:07
======| Container: suricata |======
p0f RUNNING pid 9, uptime 3:39:28
suricata RUNNING pid 8, uptime 3:39:28
```
cheers.
As soon as you touch stuff within /var/lib/docker you can really break things ...
```
/var/lib/docker/aufs/mnt/cf5867863820fce84f62665ca375955886ac5610f8db9fd65d2cb679c7e4f73a/opt/ewsposter/ews.py
/var/lib/docker/aufs/mnt/ef4db90cf45a161a67d312a7b8b111c9813e495a4fc5cb72cf4d7dfe70bd23b4/opt/ewsposter/ews.py
/var/lib/docker/aufs/mnt/85b838bec75b11dfae5a4b76abdfad2e7f3e0587acec8269d2ec5517ce6ecd8b/opt/ewsposter/ews.py
/var/lib/docker/aufs/mnt/87728cb94c3cae50a113561982553f2bd081dcb53601c80a44e5778df56bed92/opt/ewsposter/ews.py
/var/lib/docker/aufs/mnt/580c02f1e1128a4cc29922b9ca6c21797c625a6c9acf133a78c1a6b06252969d/opt/ewsposter/ews.py
/var/lib/docker/aufs/diff/d4a498e5183bc65239bc83371c5cbb97a900039639a146398828fafdd52e58c3/opt/ewsposter/ews.py
/var/lib/docker/aufs/diff/483d75895f5d53a3822a9a6277fb3ec4010e55fc61379a66739662044e2510ec/opt/ewsposter/ews.py
/var/lib/docker/aufs/diff/72a183596bd40ca515b7844e7b6ea9634eacf5c2c3ec54c8ef243e3a527d0830/opt/ewsposter/ews.py
/var/lib/docker/aufs/diff/091ce1fe5e4573bddeae4ba8e22be584d2e7f42619b7a42a14c284bed9bb384d/opt/ewsposter/ews.py
/var/lib/docker/aufs/diff/01767a05a8d26e53f2bd0a4c10c28bc30a90a5d8b95c7ee4ca61bf3121baa672/opt/ewsposter/ews.py
```
If you really want to find your way around a container, use the Docker-supported way, e.g. `docker exec -it honeytrap bash`, or Portainer.
Please post your ews.cfg and compare it to its original here.
Yes, I remember you told me that, so I didn't touch them. It's just that I see the output of ps pointing to a file, and if I look at that path on the host I see nothing; searching for the file only turns up the container directories. But I didn't touch them. My question is more whether the output of ps is accurate.
```
root 3693 2249 0 21:04 ? 00:00:00 bash -c sleep 10 && exec /usr/bin/python /opt/ewsposter/ews.py -c /data/ews/conf/ -m kippo -l 60
```
That path, `/opt/ewsposter/ews.py`, is empty; is that expected?
ews config (same as the one from the tpotce repo):
```ini
[MAIN]
homedir = /opt/ewsposter/
spooldir = /opt/ewsposter/spool/
logdir = /opt/ewsposter/log/
del_malware_after_send = false
send_malware = true
sendlimit = 400
contact = your_email_address
proxy =
ip =

[EWS]
ews = true
username = community-01-user
token = foth{a5maiCee8fineu7
rhost_first = https://community.sicherheitstacho.eu/ews-0.1/alert/postSimpleMessage
rhost_second = https://community.sicherheitstacho.eu/ews-0.1/alert/postSimpleMessage

[HPFEED]
hpfeed = false
host = 0.0.0.0
port = 0
channels = 0
ident = 0
secret = 0

[EWSJSON]
json = false
jsondir = /data/ews/

[GLASTOPFV3]
glastopfv3 = true
nodeid = glastopfv3-community-01
sqlitedb = /data/glastopf/db/glastopf.db
malwaredir = /data/glastopf/data/files/

[GLASTOPFV2]
glastopfv2 = false
nodeid =
mysqlhost =
mysqldb =
mysqluser =
mysqlpw =
malwaredir =

[KIPPO]
kippo = true
nodeid = kippo-community-01
mysqlhost = localhost
mysqldb = cowrie
mysqluser = cowrie
mysqlpw = s0m3Secr3T!
malwaredir = /data/cowrie/downloads/

[DIONAEA]
dionaea = true
nodeid = dionaea-community-01
malwaredir = /data/dionaea/binaries/
sqlitedb = /data/dionaea/log/dionaea.sqlite

[HONEYTRAP]
honeytrap = true
nodeid = honeytrap-community-01
newversion = true
payloaddir = /data/honeytrap/attacks/
attackerfile = /data/honeytrap/log/attacker.log

[RDPDETECT]
rdpdetect = false
nodeid =
iptableslog =
targetip =

[EMOBILITY]
eMobility = true
nodeid = emobility-community-01
logfile = /data/eMobility/log/centralsystemEWS.log
```
ps is correct. You see the process on the host, but it runs within the container. Within the container you will find the py scripts in `/opt`.
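That split view can be seen directly by comparing the host's /opt with the container's (a sketch; the dionaea container name and paths are taken from this thread, and the docker call is guarded so the snippet is harmless on a box without Docker):

```shell
# Host view: /opt is empty here, as the ls output above showed.
ls -la /opt || true

# Container view: the same path inside the container holds the ewsposter code.
if command -v docker >/dev/null 2>&1; then
  docker exec dionaea ls /opt/ewsposter || echo "container not running"
fi
```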
Disable the container check in `/etc/crontab`: find the line with `check.sh` and comment it out. Wait for the containers to fail...
```
sudo su -
docker exec -it dionaea bash
cd /var/log/supervisor
cat ews*
exit
```
Please post the output as markdown :bowtie:
ah ha! a more obvious error I guess? :)
```
root@1f32c96bcc2a:/var/log/supervisor# cat ews*
EWS Poster v1.8.3b (c) by Markus Schroer <markus.schroer@telekom.de>
=> Create ews.idx counterfile
=> Error IP Address in File /data/ews/conf//ews.ip not set. Abort !
EWS Poster v1.8.3b (c) by Markus Schroer <markus.schroer@telekom.de>
=> Error IP Address in File /data/ews/conf//ews.ip not set. Abort !
EWS Poster v1.8.3b (c) by Markus Schroer <markus.schroer@telekom.de>
=> Error IP Address in File /data/ews/conf//ews.ip not set. Abort !
EWS Poster v1.8.3b (c) by Markus Schroer <markus.schroer@telekom.de>
=> Error IP Address in File /data/ews/conf//ews.ip not set. Abort !
```
```
[root@0x001:/home/gg]# vi ewsip.sh
[root@0x001:/home/gg]# chmod +x ewsip.sh
[root@0x001:/home/gg]# ./ewsip.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    14  100    14    0     0    363      0 --:--:-- --:--:-- --:--:--   368
[MAIN]
ip = x.x.x.x
```
reboot.
FIXED!!! thanks!! YOU RULE!! :D
Hmmm, this should not be necessary, since `ews.ip` is set upon each reboot:
`cat /etc/rc.local`:

```bash
#!/bin/bash
# Let's add the first local ip to the /etc/issue and external ip to ews.ip file
source /etc/environment
myLOCALIP=$(hostname -I | awk '{ print $1 }')
myEXTIP=$(curl -s myexternalip.com/raw)
sed -i "s#IP:.*#IP: $myLOCALIP ($myEXTIP)#" /etc/issue
sed -i "s#SSH:.*#SSH: ssh -l tsec -p 64295 $myLOCALIP#" /etc/issue
sed -i "s#WEB:.*#WEB: https://$myLOCALIP:64297#" /etc/issue
tee /data/ews/conf/ews.ip << EOF
[MAIN]
ip = $myEXTIP
EOF
echo $myLOCALIP > /data/elk/logstash/mylocal.ip
chown tpot:tpot /data/ews/conf/ews.ip
if [ -f /var/run/check.lock ];
then rm /var/run/check.lock
fi
```
Can you please verify why rc.local is not properly executed?
```
sudo su -
cd /etc
./rc.local
```
It outputs the content of the ews.ip file:
```
[root@0x001:/etc]# ./rc.local
[MAIN]
ip = x.x.x.x
```
Every reboot empties that file.
What output do you get here?
```
sudo curl -s myexternalip.com/raw
```
I get the same IP address (my external)
Something strange is going on. `rc.local` is the script that re-creates `ews.ip` upon each restart and writes your external IP into it.
If you have a static IP, you can also skip that part and configure it in `ews.cfg`:
```ini
[MAIN]
homedir = /opt/ewsposter/
spooldir = /opt/ewsposter/spool/
logdir = /opt/ewsposter/log/
del_malware_after_send = false
send_malware = true
sendlimit = 400
contact = your_email_address
proxy =
ip = x.x.x.x
```
Even with the static IP entered in the cfg file, same behavior: always empty on reboot. It does indeed seem to be an rc.local issue, because I have /etc/issue changed so that the folks at the data centre don't see the obvious via the KVMs :)
I have edited rc.local and commented out the sed lines. Still nothing, but if I run it manually it works fine. So it seems rc.local is not getting executed on startup?
Maybe; permissions should look like this:
```
ll rc.local
-rwxr-xr-x 1 root root 592 Aug 22 11:23 rc.local*
```
But given the changes you made I can only speculate at this point 😅
:)
```
[root@0x001:/etc]# ll rc.local
-rwxr-xr-x 1 root root 595 Dec 11 22:52 rc.local*
```
Looks fine; I only commented out the sed's to /etc/issue. The rest remains original.
I will keep an eye on the logs in case it happens again and try to find out why it can't write to /data/ews/conf/ews.ip.
As a quick fix, in case this happens to anyone, and to make it clear in one sentence: write the external IP to /data/ews/conf/ews.ip. Example:
```
[root@0x001:/etc]# vi /tmp/ewsip.sh
```

paste this:
```bash
#!/bin/bash
myLOCALIP=$(hostname -I | awk '{ print $1 }')
myEXTIP=$(curl myexternalip.com/raw)
tee /data/ews/conf/ews.ip << EOF
[MAIN]
ip = $myEXTIP
EOF
# $myuser is a placeholder: set it to the owning user (tpot on a default install)
chown $myuser:$myuser /data/ews/conf/ews.ip
```
then run:

```
chmod +x ewsip.sh
./ewsip.sh
```
Thanks a lot for your help!!!
cheers.
Root cause found!
Apparently the network was taking longer to become available than the execution of /etc/rc.local. Added a `sleep 1m` to its beginning and now, after a reboot, everything is stable!!
CHEERS!!
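For anyone hitting the same race: instead of a fixed `sleep 1m`, rc.local could poll until the external-IP lookup succeeds. A minimal sketch, assuming curl and myexternalip.com as in the original script (the retry count and timeouts are my own guesses):

```shell
#!/bin/bash
# Sketch: wait until the external IP can be fetched, rather than sleeping
# a fixed amount. Prints the IP on stdout, or fails after all retries.
wait_for_extip() {
  local tries="${1:-30}" ip=""
  for _ in $(seq 1 "$tries"); do
    ip=$(curl -s --max-time 5 myexternalip.com/raw)
    if [ -n "$ip" ]; then
      echo "$ip"
      return 0
    fi
    sleep 5
  done
  return 1
}

# Usage at the top of /etc/rc.local (replacing the fixed sleep):
#   myEXTIP=$(wait_for_extip) || exit 1
```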
Hi!
After a reboot the syslog started getting spammed about ewsposter. I haven't touched any script. With this issue T-Pot becomes very unstable and the containers are constantly restarting.
Syslog output:
..... and it never stops.
Basic support information

- What T-Pot version are you currently using? 16.10
- Are you running on an Intel NUC or a VM? It's an Intel-based server (Intel Core i5 2500K 3.30GHz, 16GB DDR3 RAM, 240GB SSD)
- How long has your installation been running? This is the second installation and the second time I face this problem. It was up with no issues after reboot for one week.
- Did you install any upgrades or packages? No.
- Did you modify any scripts? No.
- Have you turned persistence on/off? No.
- How much RAM is available (login via ssh and run `htop`)? 13512MB
- How much stress are the CPUs under (login via ssh and run `htop`)? 0.14, 0.39, 0.28
- How much swap space is being used (login via ssh and run `htop`)? 0
- How much free disk space is available (login via ssh and run `sudo df -h`)? /dev/sda2 222G 7.3G 204G 4% /
- What is the current container status (login via ssh and run `sudo status.sh`)?

Thanks! Cheers.