tailscale / tailscale

The easiest, most secure way to use WireGuard and 2FA.
https://tailscale.com
BSD 3-Clause "New" or "Revised" License

NAT-PMP: Unable to allocate port for mapping (Mikrotik RouterOS) #11403

Open dragon2611 opened 6 months ago

dragon2611 commented 6 months ago

What is the issue?

Enabling NAT-PMP support on RouterOS causes the router log to fill with messages like "unable to allocate port for mapping *:0 -> 10.52.0.12:41641, timeout 7200s".

I'm not sure if this is a bug in Tailscale's implementation of NAT-PMP or a problem on the Mikrotik side. Enabling UPnP instead does work (but you also have to manually create a firewall rule to allow udp/41641, as established/related isn't enough to allow direct connections, it seems).

I upgraded the RouterOS side to 7.14.1, which I believe is the latest stable release.

Steps to reproduce

Enable NAT-PMP on a Mikrotik router (the trial version of a CHR should work if you need one for testing).
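For anyone reproducing this without a full Tailscale node: the failing request can be approximated with a short script. This is a hedged sketch, not Tailscale's actual portmapper code; the field values just mirror the RouterOS log line (suggested external port 0, internal port 41641, lifetime 7200 s) per RFC 6886.

```python
import struct

def build_udp_map_request(internal_port=41641,
                          suggested_external_port=0,
                          lifetime=7200):
    """Build a NAT-PMP "map UDP" request (RFC 6886, opcode 1).

    A suggested external port of 0 asks the gateway to pick any free
    port -- the case RouterOS appears to reject in the log above.
    """
    return struct.pack(
        "!BBHHHI",
        0,                        # version: always 0 for NAT-PMP
        1,                        # opcode 1: map a UDP port
        0,                        # reserved, must be zero
        internal_port,            # the client's listening port
        suggested_external_port,  # 0 = "gateway's discretion"
        lifetime,                 # requested mapping lifetime in seconds
    )

if __name__ == "__main__":
    # Would be sent as a UDP datagram to port 5351 on the default
    # gateway (e.g. the Mikrotik router) to trigger the log message.
    pkt = build_udp_map_request()
    print(pkt.hex())
```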

Are there any recent changes that introduced the issue?

No response

OS

No response

OS version

Nixos 23.11

Tailscale version

1.58.2

Other software

No response

Bug report

No response

NetHorror commented 4 months ago

Try adding this rule to the firewall, above (before) the drop/deny forward rule:

/ip firewall filter add chain=forward connection-nat-state=dstnat connection-state=related

dragon2611 commented 4 months ago

Whilst that would probably help with the filter dropping the traffic, I don't think it would solve the NAT-PMP issue, which is that it's unable to create a NAT rule at all.

I think there have been new RouterOS and Tailscale releases since I posted this, so I may retry enabling NAT-PMP and see if it's still a problem.

NetHorror commented 4 months ago

It helped me with the same problem.

dafky2000 commented 4 months ago

I'm also having this issue, with the same log messages. Here are some notes in case they help with debugging it further.

My original issue is that services running inside Docker with network_mode: service:tailscale don't seem to communicate over the LAN; instead they make a "direct" round-trip connection via my external WAN IP (about 10 ms latency, compared to 0.5-1 ms over the LAN). They did at one point establish a direct connection through the LAN, but I'm fairly sure this is related to Mikrotik RouterOS, as it only started happening once I switched to this router. I tried @NetHorror's suggestion, but without success.

Here are the results of running tailscale ping between the hosts to demonstrate the issue; in the last two cases I would expect the Docker or LAN IP to be used.

  1. host_1 ---> host_2 (LAN IP)
  2. host_1 ---> host_1_docker_1 (DOCKER IP)
  3. host_1 ---> host_2_docker_1 (LAN IP)
  4. host_1_docker_1 ---> host_1 (LAN IP)
  5. host_1_docker_1 ---> host_2 (LAN IP)
  6. host_1_docker_1 ---> host_1_docker_2 (WAN IP)
  7. host_1_docker_1 ---> host_2_docker_1 (WAN IP)

Additionally, when I run netcheck on the host machines I receive PortMapping: NAT-PMP, but on the dockerized clients I just receive an empty PortMapping: line.

Edit: I was able to work around the symptom of my issue by enabling userspace networking and making the Tailscale sidecar container network_mode: host. Each container now gets a NAT-PMP response from the router and is routed through the local network appropriately. This doesn't seem ideal, but it works in the meantime.

matshch commented 2 weeks ago

It looks like RouterOS doesn't support mapping external port 0 (which should let the gateway allocate a port at its own discretion). Unfortunately, RouterOS also doesn't support choosing a different external port when the port a client suggested is already allocated to another client, so the only way to fix this on Tailscale's side is to suggest some random external port itself.
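A client-side sketch of that workaround, assuming the RFC 6886 response layout (this is illustrative, not Tailscale's actual code): try external port 0 first, and if the gateway refuses, retry with a randomly chosen high port.

```python
import random
import struct

# A non-zero NAT-PMP result code means the mapping was refused
# (RFC 6886 section 3.5; e.g. 4 = "Out of resources").
RESULT_SUCCESS = 0

def choose_suggested_port(attempt):
    """First try external port 0 (let the gateway pick); on a retry,
    pick a random high port ourselves -- the workaround described
    above for gateways like RouterOS that reject port 0."""
    if attempt == 0:
        return 0
    return random.randint(1024, 65535)

def parse_map_response(data):
    """Parse a 16-byte NAT-PMP mapping response (RFC 6886 sec. 3.3):
    version, opcode, result code, seconds-since-epoch, internal port,
    mapped external port, lifetime."""
    _vers, _op, result, _epoch, internal, external, lifetime = struct.unpack(
        "!BBHIHHI", data)
    return result, internal, external, lifetime
```

A client loop would call choose_suggested_port(0), and on a non-success result (or a timeout, if the gateway logs the error without replying) retry with choose_suggested_port(1) and so on, up to some attempt limit.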