nmap / npcap

Nmap Project's Windows packet capture and transmission library
https://npcap.com

BUG Npcap: icmp[0]==3 causes pcap readers to not pick up icmp replies. #373

Closed: NicholasKChoi closed this issue 3 years ago

NicholasKChoi commented 5 years ago

Information

I am running Npcap version 0.99-r7 on Windows Server 2016 Datacenter in Amazon (AWS).

Expected Behavior

When running Wireshark to capture on the main interface with the capture filter not (icmp[0]=3), I expect to capture both the ICMP request and reply traffic.
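
A minimal libpcap sketch of compiling and applying the same capture filter (illustrative, not part of the original report; "eth0" is a placeholder device name - on Windows it would be a \Device\NPF_{GUID} name):

```c
#include <pcap.h>
#include <stdio.h>

int main(void) {
    char errbuf[PCAP_ERRBUF_SIZE];
    pcap_t *p = pcap_open_live("eth0", 65535, 1, 1000, errbuf);
    if (p == NULL) {
        fprintf(stderr, "pcap_open_live: %s\n", errbuf);
        return 1;
    }

    struct bpf_program prog;
    /* icmp[0] is the ICMP type byte; type 3 is Destination Unreachable. */
    if (pcap_compile(p, &prog, "not (icmp[0] = 3)", 1,
                     PCAP_NETMASK_UNKNOWN) == -1 ||
        pcap_setfilter(p, &prog) == -1) {
        fprintf(stderr, "filter error: %s\n", pcap_geterr(p));
        return 1;
    }
    pcap_freecode(&prog);
    /* ... read packets with pcap_loop() or pcap_next_ex() ... */
    pcap_close(p);
    return 0;
}
```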

Current Behavior

I only see the request traffic; the reply ICMP traffic does not show up at all. I have confirmed this both with Wireshark (which uses Npcap) and with a custom program I wrote that uses the Npcap driver.

Steps to Reproduce

  1. Launch a Windows Datacenter 2016 in Amazon us-west-1 region.
  2. Install Npcap and Wireshark using the browser of your choice.
  3. Run Wireshark to capture on the Main interface with the filter not (icmp[0]=3).
  4. Generate ICMP traffic (I used the powershell command: ping -l 100 8.8.8.8 -n 10000).
  5. Add the Display Filter to Wireshark: icmp.
  6. You will only see the ICMP request traffic.

dmiller-nmap commented 5 years ago

Thanks for this very detailed bug report. I've done a test on my own system here and was unable to reproduce, but of course it's a very different system. Still, the capture filter compilation and matching should be the same for both, so I'm guessing that the problem is not related to the capture filter. To confirm and narrow down the problem, please provide the following information:

  1. Does the ping utility indicate that a response is being received?
  2. Do you see the response in Wireshark if you do not set a capture filter? If so, please provide hex of the packet from the beginning of the IP header through the ICMP header.
  3. Do you see the response in Wireshark if you set a capture filter of icmp?
  4. Is the ping utility definitely using IPv4? There is no chance that ICMPv6 could be affecting things?
  5. Please provide output of DiagReport on your system.

Just to clarify, I understand that this packet filter is intended to filter out ICMP Destination Unreachable messages, is that correct?

guyharris commented 5 years ago
  1. Does this work if you do the capturing on some flavor of UN*X (Linux, macOS, Solaris, *BSD, etc.)?
NicholasKChoi commented 5 years ago
  1. The ping utility indicates the response is being received. And using tshark with only an -f icmp filter shows both the reply & request.
  2. I have the pcap available now, I can provide the hex later on in the day.
  3. I do.
  4. the ping is definitely using ipv4.
  5. I'll provide this as soon as I can.
  6. This same code and the same tools work on flavors of Unix etc., with no issues there.
guyharris commented 5 years ago
  1. Do the ICMP replies not being captured include VLAN headers?
guyharris commented 5 years ago

So the packets not being seen are packets being received by the host running Wireshark, not being sent by that host, right? That might have the same cause as nmap/npcap#68, but that one appears to be causing problems with packets being sent by the host running the capture program.

dmiller-nmap commented 5 years ago

I think I've found the cause of the problem, but it's going to take some time to work out a solution.

Under NDIS 5, which WinPcap used and which the code was originally written for, packets were delivered as a header plus a lookahead buffer. If these were contiguous, WinPcap would call bpf_filter on the header buffer directly, and all offsets would Just Work. If they were separate, a different function, bpf_filter_with_2_buffers, was called to take care of translating all the offsets.

Under NDIS 6, which is what Npcap uses, packet data is delivered as a chain of memory descriptor lists (MDLs). It looks like different network layers can each allocate a buffer for their own data/header and attach it at the appropriate place in the chain. Npcap checks whether the data link layer header is delivered by itself in the first MDL; in that case it allocates a buffer and has the whole packet copied contiguously into it before calling bpf_filter. If the first MDL is not exactly the data link layer header length, Npcap assumes the first MDL contains enough of the packet to proceed and passes that MDL's buffer directly to bpf_filter.

The bug, it seems, is that in this case the first MDL contains the data link header and the IP header but not the ICMP header and data. The capture filter therefore cannot match, because the buffer passed to it does not contain the ICMP bytes the filter needs to inspect.
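
A schematic of that decision in plain C (illustrative types and helper names; this is not the actual Npcap driver source):

```c
#include <string.h>

/* Stand-ins for NDIS MDLs and the driver's filter entry point. */
struct mdl {
    const unsigned char *buf;   /* this MDL's data                */
    size_t               len;   /* bytes in this MDL              */
    const struct mdl    *next;  /* next MDL in the chain, or NULL */
};

/* Stand-in for running the compiled BPF program over a contiguous
 * buffer; a load past caplen fails and the packet does not match. */
static int bpf_match(const unsigned char *pkt, size_t caplen)
{
    (void)pkt; (void)caplen;
    return 1;
}

static int tap_filter(const struct mdl *first, size_t link_hdr_len,
                      unsigned char *scratch, size_t scratch_len)
{
    if (first->len == link_hdr_len) {
        /* Link-layer header delivered alone: copy the whole chain
         * into one contiguous buffer before filtering. */
        size_t off = 0;
        for (const struct mdl *m = first; m != NULL; m = m->next) {
            if (off + m->len > scratch_len)
                break;                      /* truncate at snaplen */
            memcpy(scratch + off, m->buf, m->len);
            off += m->len;
        }
        return bpf_match(scratch, off);
    }

    /* The buggy assumption: the first MDL holds enough of the packet.
     * When it ends right after the IP header, the ICMP type byte lives
     * in a later MDL, so a filter like "icmp[0] = 3" can never match
     * and the reply is dropped by the capture filter. */
    return bpf_match(first->buf, first->len);
}
```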

A simple fix would be to always copy the packet data into a contiguous buffer before calling bpf_filter, but that seems like a lot of extra work that could slow down capturing unnecessarily in some cases. It would be great if we could know beforehand how much data the packet filter needs in order to test, but I don't think that's something simple to add. We could use the "snaplen" concept, which is not very well supported in Npcap (e.g. nmap/nmap#339). We would add a new IoCtl and extend PacketSetSnapLen to use it. Currently, PacketSetSnapLen only works on DAG cards, and only when DAG API support is built in, which we currently don't do.
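
One way to know beforehand how much data a filter needs is a static scan of the compiled BPF program for the largest offset it can load from (a sketch only; this helper is hypothetical, not an existing Npcap or libpcap API). Indirect loads defeat the bound, since their offsets depend on the X register at runtime:

```c
#include <pcap/bpf.h>   /* struct bpf_insn, BPF_CLASS(), BPF_ABS, ... */

/* Hypothetical helper: upper bound on the packet bytes a compiled BPF
 * program can touch, found by scanning its load instructions. */
static unsigned int bpf_bytes_needed(const struct bpf_insn *prog, int n,
                                     unsigned int snaplen)
{
    unsigned int need = 0;
    for (int i = 0; i < n; i++) {
        const struct bpf_insn *ins = &prog[i];
        int cls = BPF_CLASS(ins->code);
        if (cls != BPF_LD && cls != BPF_LDX)
            continue;
        int mode = BPF_MODE(ins->code);
        if (mode == BPF_IND)
            return snaplen;     /* offset depends on X; no static bound */
        if (mode == BPF_ABS || mode == BPF_MSH) {
            unsigned int width = 1;     /* BPF_MSH reads one byte at k */
            if (mode == BPF_ABS)
                width = BPF_SIZE(ins->code) == BPF_W ? 4 :
                        BPF_SIZE(ins->code) == BPF_H ? 2 : 1;
            if (ins->k + width > need)
                need = ins->k + width;
        }
    }
    return need;
}
```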

I'm going to think about this for a bit, refactor the NPF_TapExForEachOpen function to better fit the current data model, and then choose a way ahead. I think this is something we can take care of in the next release.

NicholasKChoi commented 5 years ago

Thank you Daniel!

That's an interesting problem to have to solve... I'll keep this in mind and try to find alternative workarounds for BPF filtering by index. I do have another question: do you know why this would not work on an AWS instance but would work on an equivalent Azure instance? If you have theories, I can look into confirming them.

Best, Nicholas Choi

guyharris commented 5 years ago

> Under NDIS 6, which is what Npcap uses, packet data is delivered as a chain of memory descriptor lists (MDLs).

In the kernel of *BSD-flavored operating systems, packet data is delivered as a chain of mbufs (`struct mbuf`).

> A simple fix would be to always copy the packet data into a contiguous buffer before calling bpf_filter, but that seems like a lot of extra work that could slow down capturing unnecessarily in some cases.

In the kernel of *BSD-flavored operating systems, the bpf_filter() code has routines - named m_xword() and m_xhalf() in the FreeBSD kernel - that take, as arguments, a pointer into an mbuf chain, a byte offset into the data, and a pointer to an int. They walk through the mbuf chain to find the mbuf that contains the byte at the offset, check whether there are 3 more bytes in that mbuf in the "fetch a 4-byte word" routine or 1 more byte in the "fetch a 2-byte halfword" routine, and:

  1. if all of the bytes are in that mbuf, fetch the value directly from its buffer;
  2. if not, fetch the bytes that are in that mbuf, fetch the remaining bytes from the next mbuf in the chain, and combine them into the value.

In both of those cases, it sets the pointed-to int to 0, meaning "no error".

If it doesn't find the mbuf with the first byte, or if not all the bytes are in the mbuf it found and there is no next mbuf or there aren't enough bytes in the next mbuf, it fails, setting the pointed-to int to 1, meaning "error".

Here's the code in question:
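
A condensed sketch of that logic, with a generic chained-buffer struct standing in for struct mbuf (simplified; not the verbatim FreeBSD source):

```c
#include <stddef.h>
#include <stdint.h>

struct seg {                      /* stand-in for struct mbuf */
    const unsigned char *data;
    size_t               len;
    const struct seg    *next;
};

/* Fetch the 4-byte word at byte offset k from a segment chain, as
 * m_xword() does for mbufs.  Sets *err to 0 on success, 1 if the
 * chain runs out of bytes. */
static uint32_t seg_xword(const struct seg *m, size_t k, int *err)
{
    /* Walk the chain to the segment containing offset k. */
    while (m != NULL && k >= m->len) {
        k -= m->len;
        m = m->next;
    }
    if (m == NULL) {
        *err = 1;
        return 0;
    }
    if (m->len - k >= 4) {
        /* All four bytes are in this segment. */
        const unsigned char *p = m->data + k;
        *err = 0;
        return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
    }
    /* The word straddles a boundary: take the bytes that are here,
     * then keep going in the following segment(s). */
    uint32_t v = 0;
    size_t need = 4;
    while (need > 0) {
        if (m == NULL) {          /* chain ended early: error */
            *err = 1;
            return 0;
        }
        size_t avail = m->len - k;
        size_t take = avail < need ? avail : need;
        for (size_t i = 0; i < take; i++)
            v = (v << 8) | m->data[k + i];
        need -= take;
        m = m->next;
        k = 0;
    }
    *err = 0;
    return v;
}
```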

dmiller-nmap commented 5 years ago

@guyharris Thanks, that's really helpful! I'm often hesitant to change certain parts of code (like the BPF stuff) and so I look for solutions at layers that I feel more comfortable with. But this certainly seems like the right way to do it, and ought to make things faster, too.

dmiller-nmap commented 5 years ago

This issue ought to be fixed in Npcap 0.991. @NicholasKChoi We would be very grateful if you could confirm the fix, since we were unable to reproduce the issue in our lab.

dmiller-nmap commented 4 years ago

Since there has been no further comment on this bug, we will assume the fix applied in 0.991 and corrected in 0.992 was sufficient to resolve it. If you experience further issues with Npcap 0.9983 or later, please open a new issue and reference this one if applicable.