garutilorenzo / k3s-oci-cluster

Deploy a Kubernetes cluster for free, using k3s and Oracle always free resources
https://garutilorenzo.github.io/deploy-kubernetes-for-free-oracle-cloud
GNU General Public License v3.0

Cluster should have external IP of public load balancer? #23

Closed: lucacalcaterra closed this issue 1 year ago

lucacalcaterra commented 1 year ago

@garutilorenzo services exposed with a LoadBalancer show the private IPs and not the external IP (of the public load balancer). I suppose k3s should be initialized with --node-external-ip and so on...

Am I wrong?

[image: screenshot of the kubectl get svc -A output]

garutilorenzo commented 1 year ago

@lucacalcaterra this is the output of what? kubectl get nodes -o wide? I think --node-external-ip is meant for a different use case (k3s behind NAT, or with a public network interface); I found an example here.
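Just for reference, a rough sketch of how that flag could be passed at install time; the IP is only a placeholder and this is not something this module configures today:

# hypothetical: advertise a public IP for the node at install time
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --node-external-ip 203.0.113.10" sh -

# or pass it directly to an already installed binary
k3s server --node-external-ip 203.0.113.10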

For the services exposed by k3s, k3s takes a different approach than vanilla k8s. You can find more info here, in the Service Load Balancer chapter.
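As a quick illustration of the ServiceLB behaviour (the deployment name is hypothetical):

# with the built-in k3s ServiceLB (Klipper), a LoadBalancer service is simply
# published on the node addresses, so EXTERNAL-IP shows the instances' private IPs
kubectl expose deployment demo --port=80 --type=LoadBalancer
kubectl get svc demo -o wide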

And if you want to integrate k3s services with the OCI Load Balancer you have to install the OCI CCM. With the OCI CCM installed, if you expose a service of type LoadBalancer you get a public IP address for your service (the OCI CCM will create a Load Balancer for you).
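For contrast, a minimal sketch of the same thing once a CCM is in place (name, selector and ports are placeholders); in that case the cloud controller provisions an OCI Load Balancer and publishes its public IP as the service EXTERNAL-IP:

# hypothetical LoadBalancer service, handled by the OCI CCM instead of ServiceLB
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: demo
spec:
  type: LoadBalancer
  selector:
    app: demo
  ports:
  - port: 80
    targetPort: 8080
EOF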

The OCI CCM integration is a work in progress task, see PR #16

lucacalcaterra commented 1 year ago

@garutilorenzo this is the output of kubectl get svc -A. I noticed this behaviour because I'm trying to use skupper.io to link a site, and the private cluster link points to a private IP which is not reachable remotely.

So probably there is nothing wrong with this repo and I should use the OCI CCM as you suggest.

Thanks !

lucacalcaterra commented 1 year ago

Anyway, I think you should see the load balancer's public IP as the external address, and not the backends' IPs.

garutilorenzo commented 1 year ago

Dear @lucacalcaterra, the answer is no: with this module you can't see the LB public IPs (not at the moment). If you want to see LB public IPs you have to use a managed K8s (OKE for Oracle Cloud Infrastructure, EKS for AWS, GKE for Google). The managed Kubernetes offerings ship with the respective CCM (OCI, AWS, Google) installed by default, and the CCM does the "magic".

This module installs k3s as if it were an on-prem installation, with no CCM support, so you can't see LB public IPs. All the traffic (HTTP, HTTPS) from the internet is redirected by the public LB (a layer 4 LB) to the k3s workers, where the ingress controller is listening. All the services exposed by k3s are reachable through this address:

output "public_lb_ip" {
  value = module.k3s_cluster.public_lb_ip
}
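Once terraform apply has run, the address can be read back with:

terraform output public_lb_ip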

You can't see this public IP with kubectl, since there is no CCM installed. So if you want to use skupper.io with this module, you have to expose the skupper service (I haven't read the docs, but I think there is a svc for this application) with the nginx ingress controller. Since skupper.io seems to be an L7 service you are all set: install skupper.io and expose it with the ingress controller. The public IP address of skupper.io will be the "public_lb_ip" from terraform.
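A rough sketch of what that could look like; I haven't checked the skupper docs, so the namespace, service name, port and host below are placeholders:

# hypothetical Ingress routing a "skupper" service through the nginx ingress controller
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: skupper
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: skupper.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: skupper   # placeholder service name
            port:
              number: 8080  # placeholder port
EOF

The DNS record for skupper.example.com would then point at the public_lb_ip value from terraform.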

I hope it is clearer now.

lucacalcaterra commented 1 year ago

Meanwhile I'll go through your reply... it clarifies everything. Thanks!