Closed by BigBrather 1 month ago
@BigBrather Without going into too much detail, I can only speak from experience: we used hcloud networks three years ago, when we started our managed Kubernetes on Hetzner, but we ran into so many problems that we stopped using them. Since then I have tried them from time to time, and I still hit the same kinds of problems.
There are many great solutions out there built on the zero-trust principle, and with that approach everything in your infrastructure becomes much more secure, because you always have to think about it from a security perspective. For example, we do not use the Hetzner firewall at all; we only use the Cilium host firewall, which makes management much easier: a single pane of declarative configuration, and no risk of misconfiguration or external issues. For internal traffic we use things like mTLS where appropriate, which with the right tools also gives you workload attestation and a lot more visibility.
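To illustrate (this is a minimal sketch, not our actual setup — the policy name, node label, and port are assumptions), a Cilium host firewall rule is expressed as a `CiliumClusterwideNetworkPolicy` with a `nodeSelector`, so the node-level firewall lives in the same declarative config as the rest of the cluster:

```yaml
# Sketch: restrict control-plane nodes to cluster-internal
# traffic plus the API server port. Labels/ports are examples.
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: host-fw-control-plane
spec:
  description: "Allow cluster-internal traffic and the API server port on control-plane nodes"
  nodeSelector:
    matchLabels:
      node-role.kubernetes.io/control-plane: ""
  ingress:
    # Traffic from inside the cluster (pods, other nodes)
    - fromEntities:
        - cluster
        - remote-node
    # External access only to the Kubernetes API server
    - fromEntities:
        - world
      toPorts:
        - ports:
            - port: "6443"
              protocol: TCP
```

With `hostFirewall.enabled=true` in Cilium, policies like this replace per-server cloud firewall rules with one declarative source of truth.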
@batistein Thanks for your reply.
We have abandoned the local network and will observe how CAPI behaves in this version. Hopefully this solves our network problem.
The firewall settings on the Hetzner Cloud side are also clear to us now.
OK, then I will close this issue. ;)
/kind bug
What steps did you take and what happened:
I have the following configuration for the control-plane node pool:
The network of my K8s cluster, which was created using CAPI, is 10.0.0.0/16, but all the resources that I need to reach from pods are on another network, 10.81.0.0/16.
So I settled on the following configuration for the node pool workloads-1:
After deploying the node pool workloads-1, I attach an additional network, 10.81.0.0/16, which contains all the services the pods need access to. Everything works correctly in this configuration, but sometimes the 10.81.0.0/16 network drops off the workloads-1 node pool, and the only way to restore it is to delete the server and re-create it using CAPI. Previously these network failures happened rarely, but now they are common.
What did you expect to happen:
I expected that after these manipulations I would have stable access to the 10.81.0.0/16 network, but in practice the network constantly drops for various reasons: ping stops working, or the interface goes down.
Anything else you would like to add:
I would also like to understand whether the manipulations I described above — attaching an additional network to workloads-1 — are supported with these configurations.
I'm also interested in whether I can use the following configuration for the control plane:
That is, in this configuration I specified two parameters,
cidrBlock: "10.81.0.0/16"
and
subnetCidrBlock: "10.81.0.0/24"
which already exist in Hetzner Cloud. Is it possible to add an existing network to a K8s cluster deployed with CAPI this way, or is it necessary to specify a resource ID?
Perhaps such a configuration with the addition of an existing network would help solve my problem.
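For reference, the network block I am asking about sits under `spec.hcloudNetwork` of the HetznerCluster resource. As I understand the CAPH API (the resource name and CIDR values below are illustrative, and I do not know whether this version can adopt an existing network rather than create one), it looks roughly like:

```yaml
# Sketch of a HetznerCluster with a managed hcloud network.
# Values are examples; whether an existing network can be
# referenced here instead of created is exactly the question.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: HetznerCluster
metadata:
  name: my-cluster
spec:
  hcloudNetwork:
    enabled: true
    cidrBlock: "10.81.0.0/16"       # network CIDR
    subnetCidrBlock: "10.81.0.0/24" # subnet for the machines
    networkZone: eu-central
```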
Environment:
- Kubernetes version (from `kubectl version`): v1.28.7
- OS (from `/etc/os-release`): ubuntu-22.04