mushorg / conpot

ICS/SCADA honeypot
GNU General Public License v2.0

NMAP scan causes a freeze #384

Closed vladalexgit closed 6 years ago

vladalexgit commented 6 years ago

Hi,

I have been struggling recently with something that seems to be a bug.

Running nmap --script s7-info.nse -p 102 172.17.0.2 -n -T5 against conpot works perfectly and gives this output:

Starting Nmap 7.01 ( https://nmap.org ) at 2018-06-30 19:04 EEST
Nmap scan report for 172.17.0.2
Host is up (0.00013s latency).
PORT    STATE SERVICE
102/tcp open  iso-tsap
| s7-info: 
|   Version: 0.0
|   System Name: Technodrome
|   Module Type: Siemens, SIMATIC, S7-200
|   Serial Number: 88111222
|   Plant Identification: Mouser Factory
|_  Copyright: Original Siemens Equipment
Service Info: Device: specialized

Nmap done: 1 IP address (1 host up) scanned in 0.20 seconds

But if I first run a normal scan like nmap -sV -p 502 172.17.0.2 -n -T5 against port 502 (which is assigned to the Modbus service) and afterwards run nmap --script s7-info.nse -p 102 172.17.0.2 -n -T5 again, it no longer works and takes very long, giving this output:

nmap --script s7-info.nse -p 102 172.17.0.2 -n -T5

Starting Nmap 7.01 ( https://nmap.org ) at 2018-06-30 19:07 EEST
Nmap scan report for 172.17.0.2
Host is up (0.00012s latency).
PORT    STATE SERVICE
102/tcp open  iso-tsap

Nmap done: 1 IP address (1 host up) scanned in 60.18 seconds

Also, conpot's console output stops at some point during the scan of port 502, so I think the app freezes.

A wget request sent afterwards to the web server hangs after the following output:

HTTP request sent, awaiting response... 

I have attached the log file and the console output that conpot generates after the steps above:

stdout.txt conpot.log

Is this expected behaviour or a bug?

Am I missing something? I have tried running the image from Docker Hub and have also built conpot from source following the instructions in the README.md on GitHub, with the same results in both cases.

Do you have any suggestions?

Vingaard commented 6 years ago

Hello @vladalexgit, "interesting" finding. Just to narrow it down (or rule out one possibility): what happens if you run the scan without the -T timing flag? Is the behavior the same, or is the outcome different? Kind regards, Mikael Vingaard

vladalexgit commented 6 years ago

@Vingaard I have also tried this without the -T flag and the behavior is the same. I used T5 because I thought things would go faster, since conpot was running in a local Docker container.

Vingaard commented 6 years ago

Thanks for the update; I was just wondering if T5 (the fastest option) was the problem. Reviewing the fingerprint in the log:

2018-06-30 16:04:56,752 Exception occurred in ModbusServer.handle() at sock.recv(): [Errno 104]
2018-06-30 16:05:01,820 Exception occurred in ModbusServer.handle() at sock.recv(): timed out

(Errno 104 = Connection reset by peer.) It seems that Modbus is, perhaps, part of the issue?
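The two log lines above point at socket errors escaping the Modbus request handler. As a minimal sketch (hypothetical function and names, not conpot's actual handler code), a recv-based handler can treat a connection reset or a read timeout as a normal end of session rather than letting the exception propagate:

```python
import errno
import socket

def handle_request(sock, bufsize=1024):
    """Read one request from a client socket.

    Returns the received bytes, or None when the session should end:
    the client timed out, reset the connection ([Errno 104], as seen
    in the log, e.g. from an aborted nmap probe), or closed cleanly.
    Hypothetical sketch; not conpot's actual ModbusServer.handle().
    """
    try:
        data = sock.recv(bufsize)
    except socket.timeout:
        return None                      # client went quiet
    except OSError as e:
        if e.errno == errno.ECONNRESET:  # [Errno 104] from the log
            return None                  # client aborted mid-session
        raise                            # anything else is a real bug
    return data or None                  # b'' means peer closed cleanly
```

The point is that scanner-induced resets and timeouts are expected events for a honeypot, so they should end the session, not wedge the server.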

creolis commented 6 years ago

to answer one particular question first: no, this is not expected behaviour. We either handle errors or crash horribly (well, let's say: die gracefully). A situation where conpot is still running but frozen is definitely not on our agenda :)

Thanks for your report!

xandfury commented 6 years ago

This line from your stack trace is the root cause:

error: unpack requires a string argument of length 6

the struct.unpack call in the server handler requires the packet to be at least 6 bytes. This should be fixed and handled in an upcoming release. Stay tuned! :-)
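To illustrate the failure mode: struct.unpack raises when the buffer does not match the format's exact size, so a short probe packet (such as nmap's -sV service probes) blows up an unguarded parser. A minimal sketch of the length guard, with a hypothetical 6-byte '>HHH' header format chosen only to match the "length 6" in the error message (not necessarily conpot's real layout):

```python
import struct

# Hypothetical 6-byte big-endian header: three unsigned 16-bit fields.
HEADER = struct.Struct('>HHH')

def parse_header(data):
    """Return the unpacked header fields, or None for short or empty
    input instead of raising struct.error on scanner garbage."""
    if len(data) < HEADER.size:          # guard: unpack needs exactly 6 bytes
        return None
    return HEADER.unpack(data[:HEADER.size])
```

With the guard in place, a truncated probe yields None and the handler can simply drop the request instead of freezing the server.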

Note to conpot team: this should be covered by the Modbus tests. This is cool partly because tests make these checks part of CI, but mainly because that is how we like to roll.