kubernetes-sigs / cluster-api-provider-openstack

Cluster API implementation for OpenStack
https://cluster-api-openstack.sigs.k8s.io/
Apache License 2.0

Does CAPI Openstack support bypassing Octavia LB #584

Closed · ratnopamc closed this issue 3 years ago

ratnopamc commented 4 years ago

Hi, is it possible not to use Octavia as the load-balancing solution for CAPI OpenStack? In cluster-template.yaml, the following section mentions the load balancer IP, network, and port details:

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackCluster
metadata:
  name: tgtcluster
  namespace: default
spec:
  apiServerLoadBalancerAdditionalPorts:

However, there might be scenarios where users who have access to an external load balancer do not want to use Octavia or any other OpenStack-specific LB service. In those scenarios, is it possible to bypass Octavia and use another load balancer? If yes, how does the manifest need to be modified? Please confirm.

jichenjc commented 4 years ago

Yes, it's allowed; I actually created an env without an LB last week.

jichenjc commented 4 years ago

FYI, here is the config I used:

apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackCluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  controlPlaneEndpoint:
    host: 172.24.4.2
    port: 6443
  cloudName: openstack
  cloudsSecret:
    name: capi-quickstart-cloud-config
    namespace: default
  disablePortSecurity: false
  disableServerTags: true
  dnsNameservers:
  - 9.20.136.11
  externalNetworkId: 3a916660-0559-4624-8854-9f2504b314cb
  managedAPIServerLoadBalancer: false
  managedSecurityGroups: true
  nodeCidr: 10.6.0.0/24
  useOctavia: false
---
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  template:
    spec:
      floatingIP: 172.24.4.2
      cloudName: openstack
      cloudsSecret:
        name: capi-quickstart-cloud-config
        namespace: default
      flavor: m1.medium
      image: ubuntu2004
      securityGroups:
      - name: allow_ssh
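
A note on the config above: with `managedAPIServerLoadBalancer: false` and `useOctavia: false`, no load balancer is created at all, so the cluster's `controlPlaneEndpoint` (172.24.4.2:6443) is simply pinned to the same `floatingIP` that the control plane machine template assigns.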

jichenjc commented 4 years ago

kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
NAME                                 STATUS   ROLES    AGE   VERSION
capi-openstack-control-plane-h6l47   Ready    master   83m   v1.17.3
capi-openstack-md-0-jzb69            Ready    <none>   81m   v1.17.3
# nova list --all-tenants
+--------------------------------------+------------------------------------+----------------------------------+--------+------------+-------------+----------------------------------------------------------------------+
| ID                                   | Name                               | Tenant ID                        | Status | Task State | Power State | Networks                                                             |
+--------------------------------------+------------------------------------+----------------------------------+--------+------------+-------------+----------------------------------------------------------------------+
| dac80b42-22ed-4fcf-be48-bf8133e23306 | capi-openstack-control-plane-h6l47 | d9b8f960a3b74c37ad9d36a5f7126bb5 | ACTIVE | -          | Running     | k8s-clusterapi-cluster-default-capi-openstack=10.6.0.126, 172.24.4.2 |
| e191b8ac-363b-4337-8d17-551cbd0986ec | capi-openstack-md-0-jzb69          | d9b8f960a3b74c37ad9d36a5f7126bb5 | ACTIVE | -          | Running     | k8s-clusterapi-cluster-default-capi-openstack=10.6.0.204             |
+--------------------------------------+------------------------------------+----------------------------------+--------+------------+-------------+----------------------------------------------------------------------+

The output above shows that only the 2 VMs for master/worker exist (no LB instance was created).

jichenjc commented 4 years ago

OK @eratnch, you might want to refer to https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/templates/cluster-template-without-lb.yaml
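
As an illustrative aside (not from the thread): with v0.3-era clusterctl, that template can be rendered through the flavor mechanism; the cluster name, version, and machine counts below are placeholders.

# "without-lb" resolves to cluster-template-without-lb.yaml in the provider's release artifacts
clusterctl config cluster capi-quickstart \
  --kubernetes-version v1.17.3 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 \
  --flavor without-lb > capi-quickstart.yaml
kubectl apply -f capi-quickstart.yaml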

ratnopamc commented 4 years ago

Thanks @jichenjc.

My scenario is to use multiple masters (not 1 master), but with an external LB such as HAProxy or MetalLB.

Can I use https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/templates/cluster-template-without-lb.yaml to deploy more than 1 master? I haven't tried this before; I know I can deploy more than 1 worker. If this is possible, then how does the load balancing happen between the API servers of the master nodes?

If this is not possible, then I guess the only option is to use cluster-template.yaml to deploy more than 1 master node.

Is there any documentation available on connecting an external LB with CAPO? Can you please advise?
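
As an illustrative aside (not from the thread): with an external HAProxy in front of several masters, the API servers are typically balanced with a plain TCP frontend on 6443. A minimal sketch, with all addresses hypothetical:

# /etc/haproxy/haproxy.cfg (sketch; backend addresses are hypothetical)
frontend kube-apiserver
    bind *:6443
    mode tcp
    default_backend control-plane-nodes

backend control-plane-nodes
    mode tcp
    balance roundrobin
    option tcp-check                       # drop a master whose API server stops answering
    server master-0 172.24.4.10:6443 check
    server master-1 172.24.4.11:6443 check
    server master-2 172.24.4.12:6443 check

The cluster's controlPlaneEndpoint would then point at the HAProxy address instead of a machine's floating IP.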

jichenjc commented 4 years ago

This is a question you might better ask at https://github.com/kubernetes-sigs/cluster-api; I think it's not supported in cluster-api-provider-openstack right now...

jichenjc commented 4 years ago

um.. giving it another thought, there actually is a `replicas: 1` param in KubeadmControlPlane. Seems this is something we can adjust to create more than 1 control plane, but how to make those control planes represent one cluster needs further checking.
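
For context, that param lives on the KubeadmControlPlane object. A minimal v1alpha3 sketch with the count raised, reusing the quickstart names from above (kubeadm settings elided):

apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  replicas: 3                   # raised from 1 to request three control plane machines
  version: v1.17.3
  infrastructureTemplate:       # machines are stamped out from the OpenStackMachineTemplate
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: OpenStackMachineTemplate
    name: capi-quickstart-control-plane
  kubeadmConfigSpec: {}         # cluster-specific kubeadm settings omitted in this sketch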

jichenjc commented 4 years ago

A quick test: I set replicas to 3 and only got 2 masters, so something is wrong there. Basically, the requirement is to delegate the LB from inside to outside: if we can create 3 floating IPs that belong to the same cluster and ask the external LB to connect to those 3, we should be good. Will dig more here (see the CLI sketch after the listing below).

+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------------------------------------------------------+
| ID                                   | Name                               | Status | Task State | Power State | Networks                                                            |
+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------------------------------------------------------+
| 786522ee-0c87-4df2-bb1c-326b394cc2a5 | capi-openstack-control-plane-dclp7 | ACTIVE | -          | Running     | k8s-clusterapi-cluster-default-capi-openstack=10.6.0.61             |
| c01b7fd0-c4a6-494b-bebb-4446f312e944 | capi-openstack-control-plane-pfxbj | ACTIVE | -          | Running     | k8s-clusterapi-cluster-default-capi-openstack=10.6.0.83, 172.24.4.2 |
| 06141e53-edcf-4cc1-9204-66f7c696bcc7 | capi-openstack-md-0-zdk5p          | ACTIVE | -          | Running     | k8s-clusterapi-cluster-default-capi-openstack=10.6.0.37             |
+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------------------------------------------------------+
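
For illustration, the delegation idea could look roughly like this with the OpenStack CLI (network name and floating IPs are hypothetical; the missing third master would be handled the same way once it comes up):

# allocate one floating IP per control plane machine from the external network
openstack floating ip create public
# attach a floating IP to each control plane server
openstack server add floating ip capi-openstack-control-plane-dclp7 172.24.4.10
openstack server add floating ip capi-openstack-control-plane-pfxbj 172.24.4.11

An external LB (for example, the HAProxy sketch earlier) would then front those addresses.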

jichenjc commented 4 years ago

Checking https://github.com/kubernetes-sigs/cluster-api/issues/1250 for further info; it seems cluster-api doesn't support this yet, so the OpenStack provider needs to wait for that to be ready.

fejta-bot commented 3 years ago

Issues go stale after 90d of inactivity. Mark the issue as fresh with /remove-lifecycle stale. Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle stale

fejta-bot commented 3 years ago

Stale issues rot after 30d of inactivity. Mark the issue as fresh with /remove-lifecycle rotten. Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /lifecycle rotten

fejta-bot commented 3 years ago

Rotten issues close after 30d of inactivity. Reopen the issue with /reopen. Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta. /close

k8s-ci-robot commented 3 years ago

@fejta-bot: Closing this issue.

In response to [this](https://github.com/kubernetes-sigs/cluster-api-provider-openstack/issues/584#issuecomment-747193515):

> Rotten issues close after 30d of inactivity.
> Reopen the issue with `/reopen`.
> Mark the issue as fresh with `/remove-lifecycle rotten`.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or [fejta](https://github.com/fejta).
> /close

Instructions for interacting with me using PR comments are available [here](https://git.k8s.io/community/contributors/guide/pull-requests.md). If you have questions or suggestions related to my behavior, please file an issue against the [kubernetes/test-infra](https://github.com/kubernetes/test-infra/issues/new?title=Prow%20issue:) repository.