veqryn opened this issue 5 years ago
Solution in https://github.com/googlevr/gvr-unity-sdk/issues/1002 works for me but not ideal
That workaround does not work for me, unfortunately, despite having admin rights.
@veqryn the workaround worked for me, the steps are:
Disable Hyper-V (which will require a couple of restarts)
dism.exe /Online /Disable-Feature:Microsoft-Hyper-V
When you finish all the required restarts, reserve the port you want so hyper-v doesn't reserve it back
netsh int ipv4 add excludedportrange protocol=tcp startport=50051 numberofports=1
Re-enable Hyper-V (which will require a couple of restarts)
dism.exe /Online /Enable-Feature:Microsoft-Hyper-V /All
when your system is back, you will be able to bind to that port successfully.
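The symptom here is distinctive: the bind fails with "access forbidden" rather than the usual "address already in use". A quick way to probe whether a port is currently bindable is a short script like the following (a sketch of my own; the helper name `can_bind` is made up, not from this thread):

```python
import socket

def can_bind(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if a TCP socket can be bound to host:port right now."""
    try:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.bind((host, port))
        return True
    except OSError:
        # On Windows, a port inside an exclusion range fails with
        # WinError 10013 ("An attempt was made to access a socket in a
        # way forbidden by its access permissions"), not the usual
        # WinError 10048 ("address already in use").
        return False
```

Running something like `can_bind(50051)` before and after the workaround is a quick way to verify the exclusion is actually gone.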
Issues go stale after 90d of inactivity.
Mark the issue as fresh with a /remove-lifecycle stale comment.
Stale issues will be closed after an additional 30d of inactivity.
Prevent issues from auto-closing with a /lifecycle frozen comment.
If this issue is safe to close now please do so.
Send feedback to Docker Community Slack channels #docker-for-mac or #docker-for-windows. /lifecycle stale
/remove-lifecycle stale
What's the status for this?
Today I had 100s of port reservations, which caused Skype for Business to stop working since it couldn't find any available ports. Uninstalling docker/hyper-v/containers removed these reservations and Skype for Business had ports to work with again.
This is a critical error that should be prioritized, since it reserves so many ports that aren't in use. I can't uninstall docker/hyper-v/containers or apply similar workarounds every time Skype for Business breaks before a meeting.
This makes hyper-v unusable for me and most of my company.
@enashed Does your solution (disabling and re-enabling HyperV) have any side effects? Will my virtual switches and virtual machines still be there after applying your solution?
Answering myself: This actually does have side effects. It deletes your virtual switches. You should keep that in mind when applying this solution.
this issue is still present /remove-lifecycle stale
Hi guys, I have the same problem. This is really blocking me.
@enashed's answer worked for me perfectly! Thanks!
IntelliJ IDEA Community Edition doesn't start because it tries to bind the first available port in the range 6942-6991.
And the following command shows this port range is reserved/blocked by hyper-v/docker-for-windows. Frankly I don't know whether it's because of docker or some other app.
netsh interface ipv4 show excludedportrange protocol=tcp
Protocol tcp Port Exclusion Ranges
Start Port End Port
1583 1682
1683 1782
2480 2579
4492 4591
5357 5357
5614 5713
5834 5933
5940 6039
6045 6144
6276 6375
6491 6590
6897 6996
7003 7102
28385 28385
50000 50059 *
The "hns" service is very... greedy.
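To check programmatically whether a given port falls inside one of these ranges, the netsh table can be parsed. A minimal sketch (the function names `parse_excluded_ranges` and `is_excluded` are my own, not part of any tool mentioned here):

```python
import re

def parse_excluded_ranges(netsh_output: str) -> list[tuple[int, int]]:
    """Parse the table printed by
    'netsh interface ipv4 show excludedportrange protocol=tcp'
    into a list of (start_port, end_port) tuples."""
    ranges = []
    for line in netsh_output.splitlines():
        # Data rows are two integers, optionally followed by '*'
        # (an exclusion added via netsh itself).
        m = re.match(r"\s*(\d+)\s+(\d+)\s*\*?\s*$", line)
        if m:
            ranges.append((int(m.group(1)), int(m.group(2))))
    return ranges

def is_excluded(port: int, ranges: list[tuple[int, int]]) -> bool:
    """True if port falls inside any (start, end) exclusion range."""
    return any(start <= port <= end for start, end in ranges)
```

Fed the output above, it would report, for example, that 6942 (IntelliJ's first choice) sits inside the 6897-6996 exclusion.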
This is /obviously/ not docker's problem (as best I can tell), it's probably not even hyperv's. Commenting here as this seems to be a frustrating and common end of journey for googlers. What follows is at least "one" of the resolutions/explanations.
On one of my machines the dynamic port range was not updated to the "new" start port, and I guess a since-resolved bug in Windows has now "exposed" this as a serious problem (e.g.: I couldn't even bind to port 3000 for node dev -- access denied is, I think, a valid response, but it's not the typical "port in use" root cause).
Current dynamic port config:
> netsh int ipv[46] show dynamicport tcp
Unless you know you've mucked with these settings: if it doesn't specify 49152 as "Start Port" and is instead set to 1025, it's not "current". I don't know if there is some kind of migration bug from when they were patching in this new value or what. The dynamic start port for udp was set correctly, for example.
To set it to the current config:
> netsh int ipv[46] set dynamic tcp start=49152 num=16384
(Likely a reboot of your host is required.)
It's bizarre that I only just ran into this issue less than 4 hours ago -- I've been doing docker/node/go dev for the last few months straight, using docker edge, etc. This appears to have resolved my port exclusion issues (I have no large ranges of reserved ports below 50000 now; previously I had 1000 port-range exclusions all over the place.)
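The arithmetic behind those netsh values is worth spelling out: `start=49152 num=16384` covers exactly the range 49152-65535, which matches the modern Windows default, while the old pre-Vista default started at 1025. A quick sanity check:

```python
# Modern default dynamic (ephemeral) range: start=49152, num=16384
start, num = 49152, 16384
end = start + num - 1
assert end == 65535  # 49152..65535 inclusive

# Pre-Vista default, which some machines in this thread were reset to
# (range 1025..5000 -- hence conflicts with common dev ports):
old_start, old_num = 1025, 3976
assert old_start + old_num - 1 == 5000
```

This is why a dynamic range starting at 1024/1025 tramples ports like 3000, 5432, or 6942 that developers expect to be free.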
@cpietrzykowski this is it, wow, thanks for finding that. I can't remember how many hours I've spent debugging this... I've just tried that, rebooted, winnfsd starts, nodejs starts...
Where did that "invalid" range start (1024) come from? Something must have updated it back to 1024, and the only thing the 4 systems I've encountered this problem on have in common is the moment of enabling Hyper-V for "Docker for Win" purposes.
It was not "invalid", it was the previous system version default (the above link has all the authoritative information I have on it). Hyper-v is just doing what it's supposed to do, ensuring it has network ports available for its own management. The problem is (there's another docker issue that explains this part), that Microsoft had a "bug" in port binding, and that's since been fixed. Which is why applications adding port exclusions have turned into a headache for a few of us caught in the middle.
I don't think any of the above is misinformation, that's as much as I know of this issue.
For anyone still having issues with the reserved port-range I found this blog post very useful: https://dandini.wordpress.com/tag/administered-port-exclusions/
The main thing is to prevent the Windows Container Host Networking Service (HNS) from reserving the port-range by adding an undocumented registry key that disables HNS from excluding the ports.
reg add HKLM\SYSTEM\CurrentControlSet\Services\hns\State /v EnableExcludedPortRange /d 0 /f
This worked for me, although I did not test if this had any Docker/Hyper-V related side-effects.
A combination of @cpietrzykowski and @cameron-pascal's solutions worked for me. Steps I followed:
- Ran netsh int ipv4 show dynamicport tcp and verified that the range trampled commonly-used ports (postgresql's :5432 in my case)
- Ran netsh int ipv4 set dynamic tcp start=49152 num=1638 to reset it to something sane
- Also ran reg add HKLM\SYSTEM\CurrentControlSet\Services\hns\State /v EnableExcludedPortRange /d 0 /f for good measure, as netsh int ipv4 show excludedportrange protocol=tcp showed a lot of excluded ports
- Reboot

Edit: So I guess it's confirmed that this is not Docker-for-win's fault; however, the error message could be more helpful in identifying the actual issue (port exclusions).
I've submitted feedback via the Windows Feedback hub, upvote here: https://aka.ms/AA6z7h1
Just for anyone else running into this and none of the above helped to release ports 80 and 81, I had IIS running on my computer when I was testing something, forgot to turn it off. Ports 80 blocked by IIS
So when you say reboot in step 4, you mean reboot the computer, I guess?
I have the same error
I think you meant num=16384?
I started facing this issue about a month ago. Some software update (maybe Docker for Windows, maybe another app, maybe Windows itself) had changed the default dynamic port range to start from 1024 (so back to where we were in '03) instead of 49152. I almost uninstalled every possible piece of software until I found @Sidneys1's solution. Thank you guys for this thread.
Thank you for this thread, same issue as @miqm had here. Exclusion range was set to start at 1024, making most development ports unavailable
Same here, fixed using netsh
per @cpietrzykowski's post above. If you tail the logfile while this is happening, you can see docker just restarting itself every time it can't bind to the port. Docker folks: perhaps you could detect this and give an error rather than just looping forever?
Confirmed this issue on Windows 10 build 18363 (1909), and it seems like the workaround suggested above deleted my docker-desktop distro in WSL for some reason XD
Thank you. I had the same problem: after updating Windows to Windows 10 Pro 20H2 Build 19042.572 yesterday, my WSL 1 wasn't able to connect to Docker on port 2375 anymore.
Instructions from above solved my issue:
Disable Hyper-V (which will require a couple of restarts)
dism.exe /Online /Disable-Feature:Microsoft-Hyper-V
When you finish all the required restarts, reserve the port you want so hyper-v doesn't reserve it back
netsh int ipv4 add excludedportrange protocol=tcp startport=2375 numberofports=1
Re-enable Hyper-V (which will require a couple of restarts)
dism.exe /Online /Enable-Feature:Microsoft-Hyper-V /All
It seems to me that somehow a vSwitch got stuck, held a port, and never released it. For me this command:
netcfg -d
works. Besides, it seems to have no side effects, whereas disabling and re-enabling Hyper-V does.
https://github.com/docker/for-win/issues/1804#issuecomment-371555979
I just tripped over this issue as well. None of the fixes here seem very reasonable.
/remove-lifecycle stale
Can we at least set the range of ports that are being reserved? For me it reserves the most-used development ports, around 8000.
I just tripped over this issue as well. None of the fixes here seem very reasonable.
Have you tried netcfg -d ? Because that works for me.
I ran into this issue after upgrading to Windows 10 version 20H2. The tcp start port was set to 1024.
It appears that the range should have started at 49152 ever since Windows Vista.
So no idea why it was set this way, or why it suddenly started causing issues.
Hi everyone!
I suddenly faced the same problem today with the RabbitMQ port (5672). I switched Docker for Windows to WSL 2 some months ago. After reading some articles I tried disabling Hyper-V, since it seems that Docker for Windows based on WSL 2 no longer depends on Hyper-V. So I ran:
netsh interface ipv4 show excludedportrange protocol=tcp
- output is:
PS C:\Users\Valeriy Abakumov> netsh interface ipv4 show excludedportrange protocol=tcp
Protocol tcp Port Exclusion Ranges
Start Port End Port
5357 5357
49879 49978
50000 50059 *
50060 50159
50160 50259
50260 50359
50360 50459
50460 50559
50816 50915
So... this works for me! After that I was able to run RabbitMQ again on port 5672. And Docker for Windows based on WSL 2 really doesn't depend on Hyper-V: after restarting the PC, all my containers came up successfully.
Hope this helps.
This is often caused by the Windows NAT Driver (winnat), stopping and restarting that service may resolve the issue.
net stop winnat
docker start ...
net start winnat
Solution originally posted at http://www.herlitz.nu/2020/12/01/docker-error-ports-are-not-available-on-windows-10/
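The stop/start pair can be scripted so the NAT driver is always restarted after the port-sensitive step. A sketch of my own (`with_winnat_stopped` is a made-up helper; it only builds the command sequence so it can be reviewed before running each entry, e.g. with subprocess.run from an elevated Windows prompt):

```python
def with_winnat_stopped(commands: list[list[str]]) -> list[list[str]]:
    """Build the full sequence: stop winnat, run the given commands,
    then start winnat again. Running VMs/containers lose host
    networking while winnat is stopped, so keep the middle short."""
    return [
        ["net", "stop", "winnat"],
        *commands,
        ["net", "start", "winnat"],
    ]
```

For example, `with_winnat_stopped([["docker", "start", "mydb"]])` (with "mydb" a hypothetical container name) yields the exact three-command sequence from the comment above.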
I just solved a very severe case of this and found the culprit (in my case at least): "Container-Manager-Service" (might be the german version of the name) or "CmService.dll". In my case, almost all the dynamic ranges were excluded, which can be seen here: excludedportrange.txt. I tried a lot of different things (all of which are mentioned in this thread) and nothing worked. With clues from here and around the internet I started disabling services: "Hyper-V-Hostserverdienst / Hyper-V Host Compute Service" (vmcompute.exe) and "Hostnetzwerkdienst / Host Network Service, HNS" (hostnetsvc.dll) and lastly the aforementioned "Container-Manager-Service" because it was in the dependency hierarchy. I restarted between each attempt because I suspected a bug and the ranges not being released after merely stopping the service in services.msc. This is how I ultimately found the culprit.
The whole odyssey started because, after updating to Docker for Desktop 3.0 (might be coincidence), I randomly couldn't reach my dev containers (blazor) which were started through VS2019. The container started successfully, but the port couldn't be reached through the browser and netstat showed the port was not actually bound. Manually starting test containers resulted in the infamous message "An attempt was made to access a socket in a way forbidden by its access permissions.", which brings me to my question: How does docker actually decide which dynamic ports to use if I don't specify it? For example by running docker run -d -p 80 docker/getting-started
on my machine, it decided to use 55004, which was excluded before. How does docker not know that this port is not available, and can this behaviour be improved? Firefox, for example, was just using ports high up in the 60000 range. I'm not blaming docker for excluding a lot of ports, just for trying to run containers in excluded port ranges.
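On the question of how a free dynamic port is normally obtained: the standard mechanism is to bind port 0 and let the OS pick from the dynamic range, which is exactly why a trampled or exclusion-riddled dynamic range breaks so many applications at once. A minimal illustration of the mechanism (not Docker's actual code):

```python
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(("127.0.0.1", 0))   # port 0: ask the OS for an ephemeral port
    port = s.getsockname()[1]  # the port the OS actually assigned

# On a healthy modern Windows box the OS picks from 49152-65535;
# 55004 (the port Docker got above) falls inside that range.
print(port)
```

An app using this mechanism never sees the exclusion list; it just trusts whatever port the OS hands back, which can then fail later if an exclusion range was added on top of it.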
@Krimsar I totally agree. Perhaps Hyper-V is behaving like an ass here, reserving port ranges that are not only seemingly random, but that also change over time, messing with applications.
But that is no reason for docker not to complain loudly about it. I absolutely expect docker run
to fail and the log or error message to show the reason prominently and what to do about it!
Also, for me this also happened in connection with the Docker 3.0 upgrade.
Thanks, @EricHerlitz. Came here because I remembered the issue, and it happens with other programs, not just Docker. Just now it was preventing PhpStorm from starting. Sometimes it's the Protonmail bridge, sometimes it's the KeePass RPC port… Stopping the Hyper-V service did nothing, but these commands seem to have freed the ports. I still need to investigate more to make sure that works every time.
Edit: Confirmed. Just stopping and starting winnat makes the impacted ports available again.
Came seeking a fix for a truly egregious bug and can also confirm net stop winnat > net start winnat fixes it, thanks!
I just did net stop winnat and it worked. I have access to all ports without a hassle. I don't know what side effects that may have, though.
Winnat handles host networking, so any started VMs or containers will lose their networking during the operation. As long as you run net start winnat when you're done starting your containers, you should be fine.
Wasn't this issue resolved in a Windows build released months ago already? Why are people still discussing this? The real fix is to upgrade to a more recent build... unsubscribes from thread
No it was not resolved. I have the latest build.
Ran into the same trouble with the reserved port range 50000-50059, marked with a *. Quoting this article:
"*" refers to port exclusions that were added by using the Netshell command prompt utility
So it does look like a 3rd-party addition to me, as I would expect an MS daemon not to spawn an external program but to use actual syscalls (just speculation). Understanding WHAT created this reservation is beyond my knowledge.
For what it's worth, the netsh interface ipv4 delete excludedportrange way did the job for me.
Just experienced the same issue, but with port 3148 for MySQL. Using netsh interface ipv4 delete excludedportrange protocol=tcp startport=3068 numberofports=100 gave me "Access is denied.", but the ports were removed nevertheless and my MySQL container started working again.
@0speedy : Did you run the command as an Administrator?
Yes I did. It still showed the error but the ports were removed anyway.
BB0297BB-C287-4F0B-A007-72B5F2D7BD72/20190102235413
Expected behavior
Be able to bind specific ports that I have always used. Be able to specify which ports docker/hyperv exclude or use, and/or I expect that docker/hyper-v actually use the ports that it is excluding and that they show up in netstat -ano as being used or listened on.

Actual behavior
If I start a service that binds on port 50051 (it is a grpc service, and that is the traditional port used by grpc), it says:
listen tcp :50051: bind: An attempt was made to access a socket in a way forbidden by its access permissions.
Information
Steps to reproduce the behavior
My own investigation:
I was extremely confused by this problem, because I was able to bind other ports, such as 8080 or 60000, yet it did not appear that 50051 was in use by anything on my system.
Running netstat -ano shows nothing using 50051.
Running Get-NetTCPConnection in powershell with admin privileges shows nothing using 50051.
Even if I disconnect from the internet and disable both windows firewall and my antivirus, and run everything as admin, I still get the errors.
After hours of google searching, I found a command that showed what happened to 50051:
It seems that 50051 is excluded (whatever that means?!), even though it isn't in use by anything.
After lots of trial and error, I discovered that Docker for Windows and Hyper-V are responsible for all of those excluded port ranges above.
It also seems like all those port ranges change or increase by 1 every time I reboot, so I suppose 450 reboots from now my problem will go away, maybe...
I have never had this problem, despite using docker for years now.
I run lots of containers and setups that other people at my company work on and rely on, so it is not feasible for me to be changing the ports around on them to work around this issue. (Other people use the kube templates and docker-compose, and some of them connect with other docker-compose networks, etc, and expect things on certain ports.)
When I try to delete that excluded port range, I get this, despite running the command as administrator: