From the looks of it, an Elasticsearch index went haywire. Edit /opt/tpot/etc/tpot.yml and search for the ELK section. Change the Xms values from 512 to 1024 each and uncomment the mem_limit line. Save and reboot.
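For orientation, the relevant part of the ELK section should then look roughly like this (a sketch only; the exact keys, values and surrounding lines depend on your tpot.yml revision):

elasticsearch:
  environment:
    - ES_JAVA_OPTS=-Xms1024m -Xmx1024m   # Xms/Xmx raised from 512m to 1024m
  mem_limit: 2g                          # previously commented out; the value shown here is illustrative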
Monitor the ES output with docker logs elasticsearch --follow and check for errors.
Did a fresh install of the honeypot-only flavor and cannot reproduce your findings of restarting honeypots.
Hi @t3chn0m4g3,
Additional information: I changed the default Ethernet interface from ens33 to eth0 with a static IP address.
Earlier I had installed the honeypot with ELK, allocating 6 GB of RAM, but it did not work, as it was not possible for me to allocate the 8 GB of RAM recommended in the T-Pot documentation. So I opted for the honeypot-only installation, assigning it 4 GB of RAM, but the issue still persists.
Note: Mailoney seems to be completely down all the time, in both the ELK and the honeypot-only installation.
dps.sh command output:
- dps.sh output 1:
[root@weeklyblackboard:/]# dps.sh
========| System |========
Date: Tue Apr 3 04:47:42 UTC 2018
Uptime: 04:47:42 up 31 min, 2 users, load average: 1.60, 1.44, 1.22
CPU temp: +100.0°C +100.0°C +100.0°C +100.0°C
NAME STATUS PORTS
cowrie Up 2 seconds 0.0.0.0:22->2222/tcp,
0.0.0.0:23->2223/tcp
dionaea Up Less than a second 0.0.0.0:20-21->20-21/tcp,
0.0.0.0:42->42/tcp,
0.0.0.0:135->135/tcp,
0.0.0.0:443->443/tcp,
0.0.0.0:445->445/tcp,
0.0.0.0:1433->1433/tcp,
0.0.0.0:1723->1723/tcp,
0.0.0.0:1883->1883/tcp,
0.0.0.0:3306->3306/tcp,
0.0.0.0:69->69/udp,
0.0.0.0:5060-5061->5060-5061/tcp,
0.0.0.0:27017->27017/tcp,
0.0.0.0:5060->5060/udp,
0.0.0.0:8081->80/tcp
elasticpot Up 2 seconds 0.0.0.0:9200->9200/tcp
ewsposter Up 1 second
glastopf Up Less than a second 0.0.0.0:80->80/tcp
honeytrap Up 2 seconds
mailoney DOWN
rdpy Up 2 seconds 0.0.0.0:3389->3389/tcp
vnclowpot Up 2 seconds 0.0.0.0:5900->5900/tcp
[root@weeklyblackboard:/]#
- dps.sh output 2:
[root@weeklyblackboard:/]# dps.sh
========| System |========
Date: Tue Apr 3 04:48:14 UTC 2018
Uptime: 04:48:14 up 32 min, 2 users, load average: 1.28, 1.37, 1.21
CPU temp: +100.0°C +100.0°C +100.0°C +100.0°C
NAME STATUS PORTS
cowrie Exited (0) 5 seconds ago
dionaea Up 6 seconds 0.0.0.0:20-21->20-21/tcp,
0.0.0.0:42->42/tcp,
0.0.0.0:135->135/tcp,
0.0.0.0:443->443/tcp,
0.0.0.0:445->445/tcp,
0.0.0.0:1433->1433/tcp,
0.0.0.0:1723->1723/tcp,
0.0.0.0:1883->1883/tcp,
0.0.0.0:3306->3306/tcp,
0.0.0.0:69->69/udp,
0.0.0.0:5060-5061->5060-5061/tcp,
0.0.0.0:27017->27017/tcp,
0.0.0.0:5060->5060/udp,
0.0.0.0:8081->80/tcp
elasticpot Up 7 seconds 0.0.0.0:9200->9200/tcp
ewsposter Up 6 seconds
glastopf Up 7 seconds 0.0.0.0:80->80/tcp
honeytrap Exited (1) 5 seconds ago
mailoney DOWN
rdpy Up 6 seconds 0.0.0.0:3389->3389/tcp
vnclowpot Up 7 seconds 0.0.0.0:5900->5900/tcp
- dps.sh output 3:
[root@weeklyblackboard:/]# dps.sh
========| System |========
Date: Tue Apr 3 04:52:53 UTC 2018
Uptime: 04:52:53 up 36 min, 2 users, load average: 1.89, 1.53, 1.31
CPU temp: +100.0°C +100.0°C +100.0°C +100.0°C
NAME STATUS PORTS
cowrie DOWN
dionaea DOWN
elasticpot DOWN
ewsposter DOWN
glastopf DOWN
honeytrap DOWN
mailoney DOWN
rdpy DOWN
vnclowpot DOWN
[root@weeklyblackboard:/]#
As you can see in the dps.sh outputs above, the honeypots flip between UP and DOWN from one dps.sh run to the next.
Output for the T-Pot service:
[root@weeklyblackboard:/]# systemctl status tpot.service
● tpot.service - tpot
Loaded: loaded (/etc/systemd/system/tpot.service; enabled; vendor preset: enabled)
Active: active (running) since Tue 2018-04-03 04:44:42 UTC; 4s ago
Process: 12208 ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -j NFQUEUE (code=exited, status=0/SUCCESS)
Process: 12204 ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 1025,50100,8080,8081,9200 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 12201 ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 3306,3389,5060,5061,5601,5900,27017 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 12197 ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 20:23,25,42,69,80,135,443,445,1433,1723,1883,1900 -j ACCEPT (code=exited, status=0
Process: 12189 ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 64295:64303,7634 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 12188 ExecStopPost=/sbin/iptables -w -D INPUT -d 127.0.0.1 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 12185 ExecStopPost=/sbin/iptables -w -D INPUT -s 127.0.0.1 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 11405 ExecStop=/usr/local/bin/docker-compose -f /opt/tpot/etc/tpot.yml down -v (code=exited, status=0/SUCCESS)
Process: 12406 ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -j NFQUEUE (code=exited, status=0/SUCCESS)
Process: 12402 ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 1025,50100,8080,8081,9200 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 12398 ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 3306,3389,5060,5061,5601,5900,27017 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 12394 ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 20:23,25,42,69,80,135,443,445,1433,1723,1883,1900 -j ACCEPT (code=exited, status=0
Process: 12390 ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 64295:64303,7634 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 12386 ExecStartPre=/sbin/iptables -w -A INPUT -d 127.0.0.1 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 12381 ExecStartPre=/sbin/iptables -w -A INPUT -s 127.0.0.1 -j ACCEPT (code=exited, status=0/SUCCESS)
Process: 12376 ExecStartPre=/bin/chmod 666 /var/run/docker.sock (code=exited, status=0/SUCCESS)
Process: 12369 ExecStartPre=/bin/bash -c /sbin/ip link set $(/sbin/ip address | grep "^2: " | awk '{ print $2 }' | tr -d [:punct:]) promisc on (code=exited, status=0/
Process: 12360 ExecStartPre=/bin/bash -c /sbin/ethtool -K $(/sbin/ip address | grep "^2: " | awk '{ print $2 }' | tr -d [:punct:]) gso off gro off (code=exited, statu
Process: 12350 ExecStartPre=/bin/bash -c /sbin/ethtool --offload $(/sbin/ip address | grep "^2: " | awk '{ print $2 }' | tr -d [:punct:]) rx off tx off (code=exited,
Process: 12334 ExecStartPre=/bin/bash -c docker rmi $(docker images | grep "<none>" | awk '{print $3}') (code=exited, status=1/FAILURE)
Process: 12320 ExecStartPre=/bin/bash -c docker rm -v $(docker ps -aq) (code=exited, status=1/FAILURE)
Process: 12305 ExecStartPre=/bin/bash -c docker volume rm $(docker volume ls -q) (code=exited, status=1/FAILURE)
Process: 12294 ExecStartPre=/usr/local/bin/docker-compose -f /opt/tpot/etc/tpot.yml rm -v (code=exited, status=0/SUCCESS)
Process: 12281 ExecStartPre=/usr/local/bin/docker-compose -f /opt/tpot/etc/tpot.yml down -v (code=exited, status=0/SUCCESS)
Process: 12237 ExecStartPre=/bin/bash -c /opt/tpot/bin/clean.sh on (code=exited, status=0/SUCCESS)
Process: 12218 ExecStartPre=/opt/tpot/bin/updateip.sh (code=exited, status=0/SUCCESS)
Main PID: 12410 (docker-compose)
Tasks: 7
Memory: 20.6M
CPU: 2.017s
CGroup: /system.slice/tpot.service
└─12410 /usr/bin/python /usr/local/bin/docker-compose -f /opt/tpot/etc/tpot.yml up --no-color
Apr 03 04:44:44 weeklyblackboard docker-compose[12410]: Creating dionaea ...
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating vnclowpot ...
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating rdpy
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating elasticpot
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating vnclowpot
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating dionaea
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating honeytrap
[root@weeklyblackboard:/]# systemctl is-active tpot
deactivating
journalctl output:
[root@weeklyblackboard:/]# journalctl -xe
Apr 03 05:13:19 weeklyblackboard kernel: br-323e64736b36: port 1(veth5d2c5a3) entered forwarding state
Apr 03 05:13:19 weeklyblackboard kernel: br-323e64736b36: port 1(veth5d2c5a3) entered forwarding state
Apr 03 05:13:19 weeklyblackboard kernel: eth0: renamed from veth38e626b
Apr 03 05:13:19 weeklyblackboard kernel: br-0cf2ae5937e2: port 1(veth7d07f11) entered forwarding state
Apr 03 05:13:19 weeklyblackboard kernel: br-0cf2ae5937e2: port 1(veth7d07f11) entered forwarding state
Apr 03 05:13:19 weeklyblackboard kernel: eth0: renamed from veth4740172
Apr 03 05:13:19 weeklyblackboard kernel: br-25c6f4639e40: port 1(vethe6f0b3e) entered forwarding state
Apr 03 05:13:19 weeklyblackboard kernel: br-25c6f4639e40: port 1(vethe6f0b3e) entered forwarding state
Apr 03 05:13:19 weeklyblackboard docker-compose[21433]: [318B blob data]
Apr 03 05:13:19 weeklyblackboard docker-compose[21433]: ERROR: for mailoney Cannot start service mailoney: driver failed programming external connectivity on endpoint
Apr 03 05:13:19 weeklyblackboard docker-compose[21433]: Encountered errors while bringing up the project.
Apr 03 05:13:19 weeklyblackboard systemd[1]: tpot.service: Main process exited, code=exited, status=1/FAILURE
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping dionaea ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping vnclowpot ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping honeytrap ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping elasticpot ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping glastopf ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping cowrie ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping ewsposter ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping rdpy ...
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.369122207Z" level=error msg="attach: stdout: write unix /var/run/docker.sock->@: write: broken
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.371408491Z" level=error msg="attach: stdout: write unix /var/run/docker.sock->@: write: broken
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.384675510Z" level=error msg="attach: stdout: write unix /var/run/docker.sock->@: write: broken
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.420974686Z" level=error msg="attach failed with error: write unix /var/run/docker.sock->@: writ
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.443137321Z" level=error msg="attach failed with error: write unix /var/run/docker.sock->@: writ
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.477315903Z" level=error msg="attach failed with error: write unix /var/run/docker.sock->@: writ
Apr 03 05:13:20 weeklyblackboard kernel: br-0f3e77966484: port 1(veth6ce7e9f) entered disabled state
Apr 03 05:13:20 weeklyblackboard kernel: veth2b46083: renamed from eth0
Apr 03 05:13:20 weeklyblackboard kernel: br-0f3e77966484: port 1(veth6ce7e9f) entered disabled state
Apr 03 05:13:20 weeklyblackboard kernel: device veth6ce7e9f left promiscuous mode
Apr 03 05:13:20 weeklyblackboard kernel: br-0f3e77966484: port 1(veth6ce7e9f) entered disabled state
Apr 03 05:13:21 weeklyblackboard kernel: br-19a28582a1c9: port 1(vethb47117c) entered disabled state
Apr 03 05:13:21 weeklyblackboard kernel: vethaabaeec: renamed from eth0
Apr 03 05:13:21 weeklyblackboard kernel: br-19a28582a1c9: port 1(vethb47117c) entered disabled state
Apr 03 05:13:21 weeklyblackboard kernel: device vethb47117c left promiscuous mode
Apr 03 05:13:21 weeklyblackboard kernel: br-19a28582a1c9: port 1(vethb47117c) entered disabled state
Apr 03 05:13:22 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:22.526018487Z" level=error msg="attach: stderr: write unix /var/run/docker.sock->@: write: broken
Apr 03 05:13:23 weeklyblackboard ntpd[911]: Listen normally on 820 br-46f7aa700343 172.18.0.1:123
Apr 03 05:13:23 weeklyblackboard ntpd[911]: Listen normally on 821 br-323e64736b36 172.19.0.1:123
Apr 03 05:13:23 weeklyblackboard ntpd[911]: Listen normally on 822 br-0cf2ae5937e2 172.22.0.1:123
Apr 03 05:13:23 weeklyblackboard ntpd[911]: Listen normally on 823 br-fd4d58d390ef 172.24.0.1:123
Apr 03 05:13:23 weeklyblackboard ntpd[911]: Listen normally on 824 br-25c6f4639e40 172.25.0.1:123
Apr 03 05:13:23 weeklyblackboard ntpd[911]: new interface(s) found: waking up resolver
Need your help in resolving the issue.
Best Regards,
ERROR: for mailoney Cannot start service mailoney: driver failed programming external connectivity on endpoint
which is a hint that TCP port 25 is already in use.
level=error msg="attach failed with error: write unix /var/run/docker.sock
which seems to be the main culprit. What is the installed Docker version (docker -v)?
Did a fresh install in ESXi and unfortunately I cannot reproduce the described errors.
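Also, please check what is listening on TCP port 25 on the host, e.g. (any equivalent netstat/ss invocation will do):

netstat -tulpen | grep ':25 '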
Yes, I am using bridged mode for T-Pot.
[root@weeklyblackboard:/]# netstat -tuplen | grep 25
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 0 9016 980/sendmail: MTA:
udp6 0 0 :::5060 :::* 0 391036 19225/docker-proxy
Docker version:
[root@weeklyblackboard:/]# docker -v
Docker version 1.13.1, build 092cba3
Where does sendmail come from? This is not part of the T-Pot installation.
Hi @t3chn0m4g3,
sendmail was installed by me to send emails from T-Pot (honeypot-only installation). After removing the sendmail utility, all honeypots are up again and have been running flawlessly for 2 days.
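For anyone hitting the same conflict, this is roughly what I did to free TCP port 25 again (a sketch; it assumes a Debian/Ubuntu-based host and that sendmail was installed via apt):

# stop the MTA and keep it from coming back on boot
systemctl stop sendmail
systemctl disable sendmail
# remove the packages entirely
apt-get purge sendmail sendmail-bin
# confirm nothing is listening on port 25 any more
netstat -tulpen | grep ':25 '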
The only thing I would like to confirm with you: why do the honeypots show an uptime of 2 or 3 hours rather than 24 hours or 2 days when they have been up for 2 days?
Now I have to test the T-Pot installation with ELK. Was ELK, as mentioned earlier, failing to come up because of the RAM issue or because of the sendmail port conflict?
Yes, the sendmail port conflict. Containers are restarted on a daily basis, which is why you see uptimes of only a few hours.
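If you want to verify the schedule yourself, the restart job can be inspected on the host, e.g. (assuming the stock setup drives it via cron):

cat /etc/crontab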
Closing this now, since T-Pot is working according to specs after uninstalling sendmail.
Hi,
Basic support information
Output of htop on the VMware ESXi server: [screenshot]
I have used the pre-built ISO image of T-Pot 17.10 and have been running it for 1 month. I am facing a similar issue to the one mentioned by @c0nel in issue 142, where I am only able to see the magenta bar on top but nothing more. I followed the steps mentioned by @t3chn0m4g3 to resolve the issue, but it still persists. When I run the dps.sh command on T-Pot, all honeypots including ELK seem to be down, but after several minutes or a restart the status changes to UP for the honeypots, while ELK still seems to be down. The same issue occurred on both VMware ESXi and Workstation.
In addition to that, I installed T-Pot with the honeypots-only option, and the honeypots still go down. I troubleshot the issue using the commands systemctl status tpot and systemctl is-active tpot (see below); it seems the tpot service is deactivating automatically. Can you please guide me in resolving the issue?
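I can collect more detail if it helps, e.g. (assuming the systemd journal and Docker state are the right places to look):

[root@weeklyblackboard:/]# journalctl -u tpot -n 50 --no-pager
[root@weeklyblackboard:/]# docker ps -a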
Best Regards,