avin3sh opened this issue 4 months ago (status: Open)
+@jsturtevant who saw this in https://github.com/kubernetes/test-infra/pull/33042
@grcusanz / @kestratt have either of you seen this Issue popping up for you?
I'm having issues with my AKS testing with the latest images. It looks to be related.
Folks over at Calico pointed back to this issue, suggesting the problem isn't at Calico's end (https://github.com/projectcalico/calico/issues/9019#issuecomment-2248938484).
Currently our workers can't get up-to-date security patches because of this. I noticed the ADO label, so I am hoping we will have some update soon 🤞
We are having this exact same issue with our Windows deployments, which use mcr.microsoft.com/dotnet/framework/aspnet:4.8.1-windowsservercore-ltsc2022.
Any update or suggestion on a fix would be greatly appreciated.
@kysu1313 Do you have the Windows patch KB5040437 installed too?
I have the same issue. After uninstalling KB5040437, network connectivity is restored.
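In case it helps anyone confirm whether the offending patch is present before attempting removal, here is a minimal PowerShell sketch (it assumes only the built-in Get-HotFix cmdlet and wusa.exe on the node):

# Check whether the July CU is installed on this node
Get-HotFix -Id KB5040437
# Attempt removal; this can fail if Windows considers the update required
wusa /uninstall /kb:5040437 /norestart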
@ntrappe-msft Can you confirm whether we can expect a fix in the August CUs? We have been holding off upgrading to the July CU, but leaving our cluster unpatched for two or more months raises security concerns.
@avin3sh We're getting this assigned to an engineer right now. Once we do that, they can inform everyone of what the timeline looks like.
Three weeks have passed, and we not only lack a fix for a bug that makes Windows containers unusable, we don't even have a timeline. That looks very strange.
@Nova-Logic Sorry for the delay, we know this is a big blocker. We've switched it to a new engineer and should have an update to provide next week.
@avin3sh how are you uninstalling KB5040437? I received this error when attempting to uninstall via wusa /uninstall /kb:5040437 /norestart:
"Security Update for Microsoft Windows (KB5040437) is required by your computer and cannot be uninstalled."
I also have the "General failure" ping errors on a fully updated Windows Server 2019 as well. Calico seems completely broken for Windows in general right now.
No mention of this issue in today's patches. I am guessing this was not addressed?
How is this still not fixed? We can't update any of our Windows nodes, as the patch can't even be uninstalled.
I just tried and can confirm the August patch (KB5041160) does not fix the issue. The patch contains Important CVEs, which leaves our cluster potentially vulnerable if not patched. @ntrappe-msft I appreciate an engineer is already assigned to this issue, but is it possible for us to get some update on the fix?
We are coming to the end of another week; can we please have the update we were promised?
"We've switched it to a new engineer and should have an update to provide next week."
Unfortunately, I don't have news to share yet of a fix. We're waiting on a response from the engineer assigned. We'll bump this Issue up in priority.
Any update? At least a rough estimate or schedule? Currently Kubernetes Windows container networking is simply broken and not usable. We will soon be forced to terminate all our Windows nodes, as we can't patch them anymore due to this issue.
We are a large customer of Windows Containers and are deeply concerned that this issue remains unresolved.
Neither the July nor August security updates even acknowledge this issue under the "Known issues in this update" section.
We are curious what criteria a Containers issue must meet to warrant expedited support and official mention in monthly updates. Does "everything about container networking is broken after July" not meet these criteria?
The support on this problem so far has raised several internal questions about the stability of Windows Containers as a platform. The way Microsoft handles this problem will dictate how seriously we can take Windows Containers for any initiatives going forward.
It's really sad, but I believe we should admit this: 1) Since a fix is still not available, it seems Microsoft doesn't have sufficient resources to support Windows containers and continue their development. 2) Windows containers are not, and will not be, a production-grade solution. The release of CUs that broke container networking is clear evidence that Microsoft simply had not tested those CUs with Windows containers (or had not tested them properly, relying on the fact that if a container starts, all is OK). 3) Given point 2, those who relied on them should migrate to PowerShell DSC, Terraform, or both.
It's hard to ruin a product's reputation more than Microsoft did: release a CU that broke container networking and then just ghost the customers for more than a month. MS didn't even bother (or possibly still isn't fully aware of the problem) to mention the issue in the known-issues lists.
We (I mean the community) can try to test whether Microsoft cares about this product by spreading this insane story everywhere across dev/devops/tech bloggers and watching MS's reaction.
As we head into another week, do we have any new update? As we inch closer to next month's patches, the growing uncertainty about the fix means we will have to force the hosts to update anyway and look at some alternative for hosting the workloads; we can't leave the Windows workers unpatched for three months in a row.
All of this tedious, extra work could be avoided, or at least planned better, if there were some transparency on how the Windows Containers team is planning to tackle this issue.
If this issue is affecting even the official sig-windows Kubernetes e2e tests, not prioritizing this problem paints a very bad picture of Windows Containers as a product, for both existing and future potential customers.
I tried some experimentation with Docker Swarm with overlay networking but couldn't reproduce this specific scenario, which seems to suggest the issue might be specific to the encapsulation mode or the ACLs on HNS Endpoints. But again, my guess is as good as anyone else's, and without some insight into the issue from the product team, it is difficult to even think of a workaround.
27 August, still no fix
I apologize for my ignorance, but I'd really appreciate if someone here in the community can clarify the nature and scope of this issue for me.
My understanding from the thread above is that Microsoft's July update for Windows Server 2022 has somehow borked networking for Windows pods/containers deployed to Kubernetes nodes running that version of Windows Server. However, do we know the extent to which the various local/cloud flavours of Kubernetes environment(s) might be affected? For example, has anyone observed this same behaviour when using the latest versions of the Amazon "Kubernetes optimized AMIs" in EKS, or similar counterparts in AKS?
As for what might be causing the issue, I wonder if there is a potential for some underlying dependency issue with the [versions of the] tools used to build the Windows container images themselves? For example, the version/patching of the Windows base image that the container is built from?
Regardless, the apparent lack of any cogent response from Microsoft is definitely... disquieting.
@jwilsonCX yes, we're using the AWS-optimized EKS images; same issue. Although we're not using the Amazon CNI but rather Calico, which uses the Windows HNS features.
@jwilsonCX I have bare Kubernetes deployed in Hyper-V VMs with nested virtualization, using Calico + VXLAN. The cluster contains 3 master, 3 Linux worker, and 2 Windows worker nodes. On one of the nodes (seemingly at random), containers have no network (ping transmit: general failure). It seems it should somehow be related to HNS. I also tried using both pre- and post-patch build servers, and older/newer images, but that didn't help.
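For anyone trying to confirm the same symptom, a quick check from an affected pod; the pod and namespace names below are placeholders:

# Run ping from inside a Windows pod on the suspect node
kubectl exec -n <namespace> <windows-pod> -- ping microsoft.com
# On an affected node this returns: PING: transmit failed. General failure.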
Thanks for those replies, @davidgiga1993 and @Nova-Logic. We're running Windows containers in EKS, but are using the Amazon CNI. I've been holding off making any changes/updates since this ticket was opened because I'm afraid of downing our working (quasi-production) cluster. Was really hoping for more clarity from MS as to what the heck is going on before submitting ourselves as guinea pigs.
Hi All, we are aware of this issue and are actively working to track down the root cause. I'll report back on this thread before the end of this week, or sooner if I get actionable information to share.
Hi @grcusanz, are you in a position to better describe the exact nature and scope of the problem as you understand it at this time? For example, is it limited to HNS implementations as some have posited above, or is CNI impacted too?
Hi everyone, please follow these steps and comment to let me know if it resolves the issue with the July or August update installed.
Add or update the following value under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\hns\State (full steps are repeated below):
Name: FwPerfImprovementChange
Type: DWORD
Value: 0
CAUTION! Network connectivity will be lost to all containers on the node during an HNS restart! Container networking should automatically recover. Please report back if you have a different experience.
@JamesKehr at this moment looks like it helped, would continue testing on this weekend and post follow-up on Monday
Thank you for the confirmation, @Nova-Logic! Please let me know if the status changes.
Thanks James for identifying and sharing the workaround! The initial fix that caused this was implemented to resolve a customer issue with Calico network policy at scale. It shipped in April, disabled by default in Windows, but was enabled by default for AKS nodes. There were no issues that we were aware of with this fix in AKS. Following our standard process, this then became enabled in Windows by default in July. James's workaround is the first step; we're now investigating the root cause of why this fix broke networking in July and will report back here when we have more info, and again when we have a permanent fix available.
Thanks for sharing the background @grcusanz.
"There were no issues that we were aware of with this fix in AKS. Following our standard process, this then became enabled in Windows by default in July."
This seems to suggest there are gaps somewhere in the test/release process. Given the scale of impact a simple change like this had, would the team be open to covering the various common configurations mentioned throughout this issue? These seem to be popular with Windows Containers customers beyond the standard AKS setup with Azure CNI; it looks like networking tests covering Calico VXLAN/overlay might have identified this problem early and prevented the change from going into the monthly patches.
Hi everyone, please follow these steps and comment to let me know if it resolves the issue with the July or August update installed.
1. Open regedit (Registry Editor).
2. Go to: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\hns\State
3. Add or update the following value under the State key:
Name: FwPerfImprovementChange
Type: DWORD
Value: 0
4. Reboot [required].
5. Test.
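For anyone scripting this across several nodes, the same change can be made in one command; this is a sketch using the stock reg.exe from an elevated prompt:

# Create or overwrite the value in one step; /f suppresses the overwrite prompt
reg add "HKLM\SYSTEM\CurrentControlSet\Services\hns\State" /v FwPerfImprovementChange /t REG_DWORD /d 0 /f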
Can I do this before updating, or will this be overwritten by the update?
As the value does not exist after updating, I strongly suspect it will not be overwritten. So yes, I guess.
But to be sure, just check after updating that it is still 0 🤷
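A quick way to do that check in PowerShell (a sketch; it assumes the value was created as described above, and it errors if the value is missing):

# Read the override back to confirm it survived the update
Get-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\hns\State' -Name 'FwPerfImprovementChange'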
@doctorpangloss you can safely add the registry value prior to updating. The default value applies only when the registry value is not present. A present reg value will always take precedence over the default value.
@wech71 Spot on!
I updated the steps to include a no-reboot option. The registry value is read during the start of the HNS service. Restarting the HNS service causes the changed value to be read, and container networking will be rebuilt.
CAUTION! Network connectivity will be lost to all containers on the node during the HNS restart! Container networking should automatically recover. Please report back if you have a different experience.
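In PowerShell terms, the no-reboot path looks roughly like this; a sketch only, and as warned above, restarting HNS briefly drops container networking:

# Make sure the State key exists, then apply the override
if (-not (Test-Path 'HKLM:\SYSTEM\CurrentControlSet\Services\hns\State')) {
    New-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\hns\State' | Out-Null
}
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\hns\State' -Name 'FwPerfImprovementChange' -Value 0 -Type DWord
# Restart HNS so it re-reads the value; container networking is rebuilt
Restart-Service -Name hns -Force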
Thank you @JamesKehr for the workaround. If a PowerShell equivalent is helpful to others, here it is (note that the first command may fail if the key already exists, but with no harm). Of course, the third command will forcibly reboot the computer.
# Create the State key (may error harmlessly if it already exists)
New-Item -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\hns\State'
# Add the workaround value (0 disables the FwPerfImprovementChange behavior)
New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\hns\State' -Name 'FwPerfImprovementChange' -Value 0 -PropertyType DWord
# Reboot so HNS picks up the new value
Restart-Computer -Force
@aaabdallah Thank you for confirmation and the PowerShell commands!
Hello, I've applied your registry fix and can confirm it worked, until the worker nodes crashed (due to technical problems, not Kubernetes related) and all running pods lost network access again; I had to drain the node and reboot to fix it. I can confirm that after a dirty reboot of a worker node (without draining first), networking breaks for the pods running on that node (unless they are terminated and started on another node by the cluster). Drain + reboot + uncordon fixes it. The registry value stays set.
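For reference, the drain + reboot + uncordon sequence described above looks roughly like this; the node name is a placeholder, and the reboot runs on the node itself:

# From a management machine: evict pods before rebooting the node
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
# On the node itself: reboot
Restart-Computer -Force
# Back on the management machine: allow scheduling on the node again
kubectl uncordon <node-name>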
Hello, affected too. The workaround helped; looking forward to a definitive fix.
October CU confirms that this is fixed:
[Containers (known issue)] Fixed: Container networking on Kubernetes might not work as you expect. Containers fail to reach external networks or communicate between pods. It might affect you when you use Calico to set up container networking on development or production instances. If affected, containers will not connect to the internet. The host’s firewall also blocks network traffic. When you ping external addresses, like ‘microsoft.com,’ you might get a general failure error message.
Is it safe to not proactively apply the workaround when adding new worker nodes or rebuilding existing ones?
Hi, thanks for asking a follow-up question. We're currently waiting on a response from the responsible team.
Describe the bug
Pod networking breaks after installing the July CU on Windows Server 2022. For example, ping microsoft.com from within the container returns General failure. The pod is not reachable from other pods or through a Service. Uninstalling KB5040437 fixes the issue.

To Reproduce

Expected behavior
The pod should be able to reach the external network and should be reachable from other pods.

Configuration:

/label Windows on Kubernetes