telekom-security / tpotce

🍯 T-Pot - The All In One Multi Honeypot Platform 🐝
GNU General Public License v3.0

Honeypot and ELK down with reference to issue no. 142 #192

Closed: MxResearch closed this issue 6 years ago

MxResearch commented 6 years ago

Hi,

Basic support information

What T-Pot version are you currently using? -- 17.10
Are you running on an Intel NUC or a VM? -> Two T-Pot instances: 1. a VMware ESXi server and 2. VMware Workstation.
How long has your installation been running? One month.
Did you install any upgrades or packages? apt-get update, apt-get upgrade, and update.sh -y
Did you modify any scripts? No.
Have you turned persistence on/off? No (default).
How much RAM available (login via ssh and run htop)? VMware ESXi: 6 GB; VMware Workstation: 4 GB.
How much stress are the CPUs under (login via ssh and run htop)? 100% on both VMware Workstation and the ESXi server when the honeypots are port-forwarded.
How much free disk space is available (login via ssh and run sudo df -h)?
Output:
Filesystem      Size  Used Avail Use% Mounted on
udev            2.5G     0  2.5G   0% /dev
tmpfs           496M   36M  461M   8% /run
/dev/sda2       485G   28G  433G   6% /
tmpfs           2.5G     0  2.5G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           2.5G     0  2.5G   0% /sys/fs/cgroup
What is the current container status (login via ssh and run sudo start.sh)?
How much swap space is being used (login via ssh and run htop)? 

Output of htop on the VMware ESXi server:

  CPU[||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||100.0%]   Tasks: 89, 301 thr; 8 running
  Mem[||||||||||||||||||||||||||||||||||||||||                       1.07G/4.84G]   Load average: 5.59 7.10 7.04
  Swp[                                                                  0K/7.63G]   Uptime: 01:48:59

  PID USER      PRI  NI  VIRT   RES   SHR S CPU% MEM%   TIME+  Command
 1018 root       20   0  809M 85960 33184 S  2.6  1.7  5:00.43 /usr/bin/dockerd -H fd://
 2605 root       20   0  809M 85960 33184 S  0.0  1.7  0:08.74 /usr/bin/dockerd -H fd://
 9282 root       20   0  614M 29148  7532 S  8.6  0.6  0:01.00 /usr/bin/python /usr/local/bin/docker-compose -f /opt/tpot/etc/tpot.yml up --no-color
 4362 root       20   0  809M 85960 33184 S  0.0  1.7  0:05.75 /usr/bin/dockerd -H fd://
    1 root       20   0 37884  5964  4048 S  0.0  0.1  0:23.73 /sbin/init
23101 root       20   0  560M 22856  9320 S  0.0  0.5  0:11.53 containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-ti
 2913 root       20   0  809M 85960 33184 S  0.0  1.7  0:04.37 /usr/bin/dockerd -H fd://
  247 root       20   0 42032  7212  2752 S  0.0  0.1  0:04.68 /lib/systemd/systemd-journald
 1105 root       20   0  809M 85960 33184 S  0.0  1.7  0:26.92 /usr/bin/dockerd -H fd://
 2602 root       20   0  809M 85960 33184 S  0.0  1.7  0:04.55 /usr/bin/dockerd -H fd://
 2787 root       20   0  809M 85960 33184 S  0.7  1.7  0:06.63 /usr/bin/dockerd -H fd://
 2884 root       20   0  809M 85960 33184 S  0.0  1.7  0:06.60 /usr/bin/dockerd -H fd://
 3391 root       20   0  809M 85960 33184 S  0.0  1.7  0:06.82 /usr/bin/dockerd -H fd://
10407 root       20   0  1532     4     0 S  0.0  0.0  0:00.02 /bin/sh -c /bin/bash -c "exec /opt/p0f/p0f -u p0f -j -o /var/log/p0f/p0f.json -i $(/sbin/ip address | gre
10415 root       20   0  1532     4     0 S  0.0  0.0  0:00.02 /bin/sh -c update.sh && suricata -v -F /etc/suricata/capture-filter.bpf -i $(/sbin/ip address | grep '^2:
10496 root       20   0 44892  3300  1968 S  0.0  0.1  0:00.02 /opt/honeytrap/sbin/honeytrap -D -C /opt/honeytrap/etc/honeytrap/honeytrap.conf -t 5 -u honeytrap -g hone
 1108 root       20   0  809M 85960 33184 S  0.0  1.7  0:08.39 /usr/bin/dockerd -H fd://
 1204 root       20   0  809M 85960 33184 S  0.0  1.7  0:07.04 /usr/bin/dockerd -H fd://
 2694 root       20   0  809M 85960 33184 S  0.0  1.7  0:06.13 /usr/bin/dockerd -H fd://
 2704 root       20   0  809M 85960 33184 S  0.0  1.7  0:05.07 /usr/bin/dockerd -H fd://
 3200 root       20   0  809M 85960 33184 S  0.0  1.7  0:07.53 /usr/bin/dockerd -H fd://
 3238 root       20   0  809M 85960 33184 S  0.0  1.7  0:05.55 /usr/bin/dockerd -H fd://
 3298 root       20   0  809M 85960 33184 S  0.0  1.7  0:07.36 /usr/bin/dockerd -H fd://
 4452 root       20   0  809M 85960 33184 S  0.0  1.7  0:00.87 /usr/bin/dockerd -H fd://
28610 root       20   0  809M 85960 33184 S  0.0  1.7  0:04.04 /usr/bin/dockerd -H fd://
10165 tsec       20   0 26680  4572  3260 R  1.3  0.1  0:00.09 htop
23943 root       20   0  560M 22856  9320 S  0.7  0.5  0:00.41 containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-ti
32538 root       20   0  560M 22856  9320 S  0.0  0.5  0:00.33 containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-ti
  314 root       20   0 45452  5032  3104 S  0.0  0.1  0:03.92 /lib/systemd/systemd-udevd
  902 root       20   0 97296 10276  8188 S  0.0  0.2  0:03.22 /usr/sbin/vmtoolsd
  933 root       20   0 65828 13920 11164 S  0.0  0.3  0:00.03 /usr/lib/vmware-vgauth/VGAuthService -s
  950 root       20   0 28980  2924  2684 S  0.0  0.1  0:00.06 /usr/sbin/cron -f
  954 root       20   0 20228  2720  2424 S  0.0  0.1  0:01.13 /lib/systemd/systemd-logind
  960 root       20   0 65508  5448  4764 S  0.0  0.1  0:00.09 /usr/sbin/sshd -D
  998 root       20   0  269M  6280  5584 S  0.0  0.1  0:00.68 /usr/lib/accountsservice/accounts-daemon
 1000 root       20   0  269M  6280  5584 S  0.0  0.1  0:00.00 /usr/lib/accountsservice/accounts-daemon
  971 root       20   0  269M  6280  5584 S  0.0  0.1  0:00.80 /usr/lib/accountsservice/accounts-daemon

I used the pre-built ISO image of T-Pot 17.10 and have been running it for one month. I am facing an issue similar to the one @c0nel reported in issue no. 142: I can only see the magenta bar at the top and nothing more. I followed the steps @t3chn0m4g3 suggested there to resolve it, but the issue persists. When I run the dps.sh command on T-Pot, all honeypots including ELK appear to be down; after several minutes or a restart, the honeypots change to UP, but ELK still appears to be down. The same issue occurs on both VMware ESXi and VMware Workstation.

In addition, I installed T-Pot with the honeypots-only option, but the honeypots still go down. I troubleshot the issue using the commands systemctl status tpot and systemctl is-active tpot. It seems the tpot service is deactivating itself. Can you please guide me in resolving the issue?

Best Regards,

t3chn0m4g3 commented 6 years ago

From the looks of it, an Elasticsearch index went haywire. Edit /opt/tpot/etc/tpot.yml and search for the ELK section. Change the Xms and Xmx values from 512 to 1024 each and uncomment the mem_limit line. Save and reboot.

Monitor the ES output with docker logs elasticsearch --follow to check for errors.
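From memory, the relevant Elasticsearch part of tpot.yml looks roughly like this after the change (a sketch, not a verbatim copy of the shipped file; the exact keys and the mem_limit value may differ between releases):

```yaml
# Sketch of the elasticsearch service in /opt/tpot/etc/tpot.yml after the edit.
elasticsearch:
  # ... image, ports, volumes etc. unchanged ...
  environment:
    # Heap raised from 512m to 1024m (both minimum and maximum):
    - ES_JAVA_OPTS=-Xms1024m -Xmx1024m
  # Previously commented out; gives the container a hard memory ceiling:
  mem_limit: 2g
```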

I did a fresh honeypot-only install and cannot reproduce the honeypot restarts you are seeing.

MxResearch commented 6 years ago

Hi @t3chn0m4g3 ,

Additional information: I changed the default Ethernet interface from ens33 to eth0 and set a static IP address.

Earlier I had installed the honeypot option with ELK, allocating 6 GB of RAM, but it did not work, as it is not possible for me to allocate the 8 GB of RAM recommended in the T-Pot documentation. So I opted for the honeypot-only installation, assigning it 4 GB of RAM. But the issue still persists.

Note: Mailoney seems to be down at all times in both the with-ELK and honeypots-only installations.

dps.sh command output:

- dps.sh output 1:

[root@weeklyblackboard:/]# dps.sh
========| System |========
    Date:  Tue Apr  3 04:47:42 UTC 2018
  Uptime:  04:47:42 up 31 min,  2 users,  load average: 1.60, 1.44, 1.22
CPU temp:  +100.0°C +100.0°C +100.0°C +100.0°C

NAME                STATUS                               PORTS
cowrie              Up 2 seconds                         0.0.0.0:22->2222/tcp,
                                                         0.0.0.0:23->2223/tcp
dionaea             Up Less than a second                0.0.0.0:20-21->20-21/tcp,
                                                         0.0.0.0:42->42/tcp,
                                                         0.0.0.0:135->135/tcp,
                                                         0.0.0.0:443->443/tcp,
                                                         0.0.0.0:445->445/tcp,
                                                         0.0.0.0:1433->1433/tcp,
                                                         0.0.0.0:1723->1723/tcp,
                                                         0.0.0.0:1883->1883/tcp,
                                                         0.0.0.0:3306->3306/tcp,
                                                         0.0.0.0:69->69/udp,
                                                         0.0.0.0:5060-5061->5060-5061/tcp,
                                                         0.0.0.0:27017->27017/tcp,
                                                         0.0.0.0:5060->5060/udp,
                                                         0.0.0.0:8081->80/tcp
elasticpot          Up 2 seconds                         0.0.0.0:9200->9200/tcp
ewsposter           Up 1 second
glastopf            Up Less than a second                0.0.0.0:80->80/tcp
honeytrap           Up 2 seconds
mailoney            DOWN
rdpy                Up 2 seconds                         0.0.0.0:3389->3389/tcp
vnclowpot           Up 2 seconds                         0.0.0.0:5900->5900/tcp
[root@weeklyblackboard:/]#
- dps.sh output 2:

[root@weeklyblackboard:/]# dps.sh
========| System |========
    Date:  Tue Apr  3 04:48:14 UTC 2018
  Uptime:  04:48:14 up 32 min,  2 users,  load average: 1.28, 1.37, 1.21
CPU temp:  +100.0°C +100.0°C +100.0°C +100.0°C

NAME                STATUS                               PORTS
cowrie              Exited (0) 5 seconds ago
dionaea             Up 6 seconds                         0.0.0.0:20-21->20-21/tcp,
                                                         0.0.0.0:42->42/tcp,
                                                         0.0.0.0:135->135/tcp,
                                                         0.0.0.0:443->443/tcp,
                                                         0.0.0.0:445->445/tcp,
                                                         0.0.0.0:1433->1433/tcp,
                                                         0.0.0.0:1723->1723/tcp,
                                                         0.0.0.0:1883->1883/tcp,
                                                         0.0.0.0:3306->3306/tcp,
                                                         0.0.0.0:69->69/udp,
                                                         0.0.0.0:5060-5061->5060-5061/tcp,
                                                         0.0.0.0:27017->27017/tcp,
                                                         0.0.0.0:5060->5060/udp,
                                                         0.0.0.0:8081->80/tcp
elasticpot          Up 7 seconds                         0.0.0.0:9200->9200/tcp
ewsposter           Up 6 seconds
glastopf            Up 7 seconds                         0.0.0.0:80->80/tcp
honeytrap           Exited (1) 5 seconds ago
mailoney            DOWN
rdpy                Up 6 seconds                         0.0.0.0:3389->3389/tcp
vnclowpot           Up 7 seconds                         0.0.0.0:5900->5900/tcp

- dps.sh output 3:

[root@weeklyblackboard:/]# dps.sh
========| System |========
    Date:  Tue Apr  3 04:52:53 UTC 2018
  Uptime:  04:52:53 up 36 min,  2 users,  load average: 1.89, 1.53, 1.31
CPU temp:  +100.0°C +100.0°C +100.0°C +100.0°C

NAME                STATUS                               PORTS
cowrie              DOWN
dionaea             DOWN
elasticpot          DOWN
ewsposter           DOWN
glastopf            DOWN
honeytrap           DOWN
mailoney            DOWN
rdpy                DOWN
vnclowpot           DOWN
[root@weeklyblackboard:/]#

As you can see from the status outputs above, the honeypots flip between UP and DOWN with each run of the dps.sh command.

Output for T-Pot service:

[root@weeklyblackboard:/]# systemctl status tpot.service
● tpot.service - tpot
   Loaded: loaded (/etc/systemd/system/tpot.service; enabled; vendor preset: enabled)
   Active: active (running) since Tue 2018-04-03 04:44:42 UTC; 4s ago
  Process: 12208 ExecStopPost=/sbin/iptables -w -D INPUT -p tcp --syn -m state --state NEW -j NFQUEUE (code=exited, status=0/SUCCESS)
  Process: 12204 ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 1025,50100,8080,8081,9200 -j ACCEPT (code=exited, status=0/SUCCESS)
  Process: 12201 ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 3306,3389,5060,5061,5601,5900,27017 -j ACCEPT (code=exited, status=0/SUCCESS)
  Process: 12197 ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 20:23,25,42,69,80,135,443,445,1433,1723,1883,1900 -j ACCEPT (code=exited, status=0
  Process: 12189 ExecStopPost=/sbin/iptables -w -D INPUT -p tcp -m multiport --dports 64295:64303,7634 -j ACCEPT (code=exited, status=0/SUCCESS)
  Process: 12188 ExecStopPost=/sbin/iptables -w -D INPUT -d 127.0.0.1 -j ACCEPT (code=exited, status=0/SUCCESS)
  Process: 12185 ExecStopPost=/sbin/iptables -w -D INPUT -s 127.0.0.1 -j ACCEPT (code=exited, status=0/SUCCESS)
  Process: 11405 ExecStop=/usr/local/bin/docker-compose -f /opt/tpot/etc/tpot.yml down -v (code=exited, status=0/SUCCESS)
  Process: 12406 ExecStartPre=/sbin/iptables -w -A INPUT -p tcp --syn -m state --state NEW -j NFQUEUE (code=exited, status=0/SUCCESS)
  Process: 12402 ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 1025,50100,8080,8081,9200 -j ACCEPT (code=exited, status=0/SUCCESS)
  Process: 12398 ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 3306,3389,5060,5061,5601,5900,27017 -j ACCEPT (code=exited, status=0/SUCCESS)
  Process: 12394 ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 20:23,25,42,69,80,135,443,445,1433,1723,1883,1900 -j ACCEPT (code=exited, status=0
  Process: 12390 ExecStartPre=/sbin/iptables -w -A INPUT -p tcp -m multiport --dports 64295:64303,7634 -j ACCEPT (code=exited, status=0/SUCCESS)
  Process: 12386 ExecStartPre=/sbin/iptables -w -A INPUT -d 127.0.0.1 -j ACCEPT (code=exited, status=0/SUCCESS)
  Process: 12381 ExecStartPre=/sbin/iptables -w -A INPUT -s 127.0.0.1 -j ACCEPT (code=exited, status=0/SUCCESS)
  Process: 12376 ExecStartPre=/bin/chmod 666 /var/run/docker.sock (code=exited, status=0/SUCCESS)
  Process: 12369 ExecStartPre=/bin/bash -c /sbin/ip link set $(/sbin/ip address | grep "^2: " | awk '{ print $2 }' | tr -d [:punct:]) promisc on (code=exited, status=0/
  Process: 12360 ExecStartPre=/bin/bash -c /sbin/ethtool -K $(/sbin/ip address | grep "^2: " | awk '{ print $2 }' | tr -d [:punct:]) gso off gro off (code=exited, statu
  Process: 12350 ExecStartPre=/bin/bash -c /sbin/ethtool --offload $(/sbin/ip address | grep "^2: " | awk '{ print $2 }' | tr -d [:punct:]) rx off tx off (code=exited,
  Process: 12334 ExecStartPre=/bin/bash -c docker rmi $(docker images | grep "<none>" | awk '{print $3}') (code=exited, status=1/FAILURE)
  Process: 12320 ExecStartPre=/bin/bash -c docker rm -v $(docker ps -aq) (code=exited, status=1/FAILURE)
  Process: 12305 ExecStartPre=/bin/bash -c docker volume rm $(docker volume ls -q) (code=exited, status=1/FAILURE)
  Process: 12294 ExecStartPre=/usr/local/bin/docker-compose -f /opt/tpot/etc/tpot.yml rm -v (code=exited, status=0/SUCCESS)
  Process: 12281 ExecStartPre=/usr/local/bin/docker-compose -f /opt/tpot/etc/tpot.yml down -v (code=exited, status=0/SUCCESS)
  Process: 12237 ExecStartPre=/bin/bash -c /opt/tpot/bin/clean.sh on (code=exited, status=0/SUCCESS)
  Process: 12218 ExecStartPre=/opt/tpot/bin/updateip.sh (code=exited, status=0/SUCCESS)
 Main PID: 12410 (docker-compose)
    Tasks: 7
   Memory: 20.6M
      CPU: 2.017s
   CGroup: /system.slice/tpot.service
           └─12410 /usr/bin/python /usr/local/bin/docker-compose -f /opt/tpot/etc/tpot.yml up --no-color

Apr 03 04:44:44 weeklyblackboard docker-compose[12410]: Creating dionaea ...
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating vnclowpot ...
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating rdpy
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating elasticpot
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating vnclowpot
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating dionaea
Apr 03 04:44:45 weeklyblackboard docker-compose[12410]: Creating honeytrap

[root@weeklyblackboard:/]# systemctl is-active tpot
deactivating
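A side note on the ExecStartPre lines in the status output above: T-Pot derives the capture interface at runtime with an ip/grep/awk/tr pipeline rather than hard-coding a name, which is why a rename from ens33 to eth0 is generally tolerated. A minimal reproduction of that pipeline against sample `ip address` output (the sample lines are illustrative, not taken from this host):

```shell
# Sample output in the shape `ip address` prints; only the "2:" line matters.
sample='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500'

# Same pipeline as in the tpot.service ExecStartPre lines:
# keep the second interface line, take its name field, strip punctuation.
iface=$(printf '%s\n' "$sample" | grep "^2: " | awk '{ print $2 }' | tr -d '[:punct:]')
echo "$iface"   # prints: eth0
```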

journalctl output:

[root@weeklyblackboard:/]# journalctl -xe
Apr 03 05:13:19 weeklyblackboard kernel: br-323e64736b36: port 1(veth5d2c5a3) entered forwarding state
Apr 03 05:13:19 weeklyblackboard kernel: br-323e64736b36: port 1(veth5d2c5a3) entered forwarding state
Apr 03 05:13:19 weeklyblackboard kernel: eth0: renamed from veth38e626b
Apr 03 05:13:19 weeklyblackboard kernel: br-0cf2ae5937e2: port 1(veth7d07f11) entered forwarding state
Apr 03 05:13:19 weeklyblackboard kernel: br-0cf2ae5937e2: port 1(veth7d07f11) entered forwarding state
Apr 03 05:13:19 weeklyblackboard kernel: eth0: renamed from veth4740172
Apr 03 05:13:19 weeklyblackboard kernel: br-25c6f4639e40: port 1(vethe6f0b3e) entered forwarding state
Apr 03 05:13:19 weeklyblackboard kernel: br-25c6f4639e40: port 1(vethe6f0b3e) entered forwarding state
Apr 03 05:13:19 weeklyblackboard docker-compose[21433]: [318B blob data]
Apr 03 05:13:19 weeklyblackboard docker-compose[21433]: ERROR: for mailoney Cannot start service mailoney: driver failed programming external connectivity on endpoint
Apr 03 05:13:19 weeklyblackboard docker-compose[21433]: Encountered errors while bringing up the project.
Apr 03 05:13:19 weeklyblackboard systemd[1]: tpot.service: Main process exited, code=exited, status=1/FAILURE
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping dionaea ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping vnclowpot ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping honeytrap ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping elasticpot ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping glastopf ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping cowrie ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping ewsposter ...
Apr 03 05:13:20 weeklyblackboard docker-compose[23009]: Stopping rdpy ...
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.369122207Z" level=error msg="attach: stdout: write unix /var/run/docker.sock->@: write: broken
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.371408491Z" level=error msg="attach: stdout: write unix /var/run/docker.sock->@: write: broken
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.384675510Z" level=error msg="attach: stdout: write unix /var/run/docker.sock->@: write: broken
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.420974686Z" level=error msg="attach failed with error: write unix /var/run/docker.sock->@: writ
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.443137321Z" level=error msg="attach failed with error: write unix /var/run/docker.sock->@: writ
Apr 03 05:13:20 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:20.477315903Z" level=error msg="attach failed with error: write unix /var/run/docker.sock->@: writ
Apr 03 05:13:20 weeklyblackboard kernel: br-0f3e77966484: port 1(veth6ce7e9f) entered disabled state
Apr 03 05:13:20 weeklyblackboard kernel: veth2b46083: renamed from eth0
Apr 03 05:13:20 weeklyblackboard kernel: br-0f3e77966484: port 1(veth6ce7e9f) entered disabled state
Apr 03 05:13:20 weeklyblackboard kernel: device veth6ce7e9f left promiscuous mode
Apr 03 05:13:20 weeklyblackboard kernel: br-0f3e77966484: port 1(veth6ce7e9f) entered disabled state
Apr 03 05:13:21 weeklyblackboard kernel: br-19a28582a1c9: port 1(vethb47117c) entered disabled state
Apr 03 05:13:21 weeklyblackboard kernel: vethaabaeec: renamed from eth0
Apr 03 05:13:21 weeklyblackboard kernel: br-19a28582a1c9: port 1(vethb47117c) entered disabled state
Apr 03 05:13:21 weeklyblackboard kernel: device vethb47117c left promiscuous mode
Apr 03 05:13:21 weeklyblackboard kernel: br-19a28582a1c9: port 1(vethb47117c) entered disabled state
Apr 03 05:13:22 weeklyblackboard dockerd[862]: time="2018-04-03T05:13:22.526018487Z" level=error msg="attach: stderr: write unix /var/run/docker.sock->@: write: broken
Apr 03 05:13:23 weeklyblackboard ntpd[911]: Listen normally on 820 br-46f7aa700343 172.18.0.1:123
Apr 03 05:13:23 weeklyblackboard ntpd[911]: Listen normally on 821 br-323e64736b36 172.19.0.1:123
Apr 03 05:13:23 weeklyblackboard ntpd[911]: Listen normally on 822 br-0cf2ae5937e2 172.22.0.1:123
Apr 03 05:13:23 weeklyblackboard ntpd[911]: Listen normally on 823 br-fd4d58d390ef 172.24.0.1:123
Apr 03 05:13:23 weeklyblackboard ntpd[911]: Listen normally on 824 br-25c6f4639e40 172.25.0.1:123
Apr 03 05:13:23 weeklyblackboard ntpd[911]: new interface(s) found: waking up resolver

I need your help in resolving this issue.

Best Regards,

t3chn0m4g3 commented 6 years ago
  1. Is the VM bridged?
  2. The Mailoney error ERROR: for mailoney Cannot start service mailoney: driver failed programming external connectivity on endpoint hints that TCP port 25 is already in use.
  3. For some reason there is a Docker error level=error msg="attach failed with error: write unix /var/run/docker.sock, which seems to be the main culprit. What is the installed Docker version (docker -v)?
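To check point 2 directly, you can look up what is already bound to port 25 on the host. The commands below are standard Linux tooling (netstat/ss); since real output is host-specific, the parsing demo runs on a canned netstat line of the same shape:

```shell
# On the T-Pot host one would run, for example:
#   sudo netstat -tulpen | grep ':25 '
#   sudo ss -ltnp 'sport = :25'
# Here we parse a canned line of the same shape to pull out PID and program.
line='tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      0          9016        980/sendmail: MTA:'
pid_prog=$(printf '%s\n' "$line" | awk '{ print $9 }')   # PID/Program column, e.g. 980/sendmail:
pid=${pid_prog%%/*}                                      # PID before the slash
prog=${pid_prog#*/}; prog=${prog%:}                      # program name after it
echo "$pid $prog"
```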

I did a fresh install on ESXi and unfortunately cannot reproduce the described errors.

MxResearch commented 6 years ago
  1. Yes, I am using bridged mode for T-Pot.

  2. [root@weeklyblackboard:/]# netstat -tuplen | grep 25

    tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      0          9016        980/sendmail: MTA:
    udp6       0      0 :::5060                 :::*                                0          391036      19225/docker-proxy
  3. Docker version:

     [root@weeklyblackboard:/]# docker -v
     Docker version 1.13.1, build 092cba3

t3chn0m4g3 commented 6 years ago

Where does sendmail come from? This is not part of the T-Pot installation.

MxResearch commented 6 years ago

Hi @t3chn0m4g3 ,

I installed sendmail myself to send emails from T-Pot (honeypot-only installation). After removing the sendmail utility, all honeypots are up again and have been running flawlessly for two days.

The only thing to confirm with you: why do the honeypots show their status as 2 or 3 hours running, and not 24 hours or 2 days, when they have been up for two days?

Now I have to test the T-Pot honeypot installation with ELK. As mentioned earlier, was ELK failing to come up because of the RAM issue or the sendmail port conflict?

t3chn0m4g3 commented 6 years ago

Yes, the sendmail port conflict. Containers are restarted on a daily basis, which is why the reported uptime resets.
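For instance, a quick way to see this on the host is the Status column of docker ps, which counts from the last (re)start rather than from boot. The snippet below parses a canned sample of that column (illustrative values, not taken from this issue):

```shell
# On the host one would run: sudo docker ps --format '{{.Names}}\t{{.Status}}'
# After the nightly recreate, Status reads "Up N hours" even on a host
# that has been up for days. Canned sample in that shape:
sample='cowrie  Up 3 hours
dionaea  Up 3 hours
elasticpot  Up 2 hours'
# Count containers whose status is measured in hours (i.e. restarted recently):
restarted_today=$(printf '%s\n' "$sample" | grep -c 'Up [0-9]* hours')
echo "$restarted_today"   # prints: 3
```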

Closing this now, since T-Pot is working according to specs after uninstalling sendmail.