Closed: ratnopamc closed this issue 3 years ago.

Hi, is it possible to not use Octavia as the load-balancing solution for CAPI OpenStack? In cluster-template.yaml, the following section mentions the load balancer IP, network, and port details:

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackCluster
metadata:
  name: tgtcluster
  namespace: default
spec:
  apiServerLoadBalancerAdditionalPorts:
```

However, there might be scenarios where users who have access to an external load balancer do not want to use Octavia or any other OpenStack-specific LB service. In those scenarios, is it possible to bypass Octavia and use some other load balancer? If yes, how does the manifest need to be modified? Please confirm.
Yes, it does allow that; I actually created an env without an LB last week. Some FYI:
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackCluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  controlPlaneEndpoint:
    host: 172.24.4.2
    port: 6443
  cloudName: openstack
  cloudsSecret:
    name: capi-quickstart-cloud-config
    namespace: default
  disablePortSecurity: false
  disableServerTags: true
  dnsNameservers:
    - 9.20.136.11
  externalNetworkId: 3a916660-0559-4624-8854-9f2504b314cb
  managedAPIServerLoadBalancer: false
  managedSecurityGroups: true
  nodeCidr: 10.6.0.0/24
  useOctavia: false
```
```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackMachineTemplate
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  template:
    spec:
      floatingIP: 172.24.4.2
      cloudName: openstack
      cloudsSecret:
        name: capi-quickstart-cloud-config
        namespace: default
      flavor: m1.medium
      image: ubuntu2004
      securityGroups:
        - name: allow_ssh
```
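One note: `controlPlaneEndpoint.host` and the machine template's `floatingIP` have to agree up front, so the address needs to be known (and free) before the cluster is created. One way to reserve it is a sketch like the following; `public` is a placeholder for your external network name, the exact flags depend on your openstackclient version, and requesting a specific address may need admin rights on some clouds:

```sh
# Sketch: reserve the floating IP on the external network up front so the
# same address can be used for controlPlaneEndpoint.host and floatingIP.
openstack floating ip create public --floating-ip-address 172.24.4.2
```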
```
kubectl --kubeconfig=./capi-quickstart.kubeconfig get nodes
NAME                                 STATUS   ROLES    AGE   VERSION
capi-openstack-control-plane-h6l47   Ready    master   83m   v1.17.3
capi-openstack-md-0-jzb69            Ready    <none>   81m   v1.17.3
```
```
# nova list --all-tenants
+--------------------------------------+------------------------------------+----------------------------------+--------+------------+-------------+----------------------------------------------------------------------+
| ID                                   | Name                               | Tenant ID                        | Status | Task State | Power State | Networks                                                             |
+--------------------------------------+------------------------------------+----------------------------------+--------+------------+-------------+----------------------------------------------------------------------+
| dac80b42-22ed-4fcf-be48-bf8133e23306 | capi-openstack-control-plane-h6l47 | d9b8f960a3b74c37ad9d36a5f7126bb5 | ACTIVE | -          | Running     | k8s-clusterapi-cluster-default-capi-openstack=10.6.0.126, 172.24.4.2 |
| e191b8ac-363b-4337-8d17-551cbd0986ec | capi-openstack-md-0-jzb69          | d9b8f960a3b74c37ad9d36a5f7126bb5 | ACTIVE | -          | Running     | k8s-clusterapi-cluster-default-capi-openstack=10.6.0.204             |
+--------------------------------------+------------------------------------+----------------------------------+--------+------------+-------------+----------------------------------------------------------------------+
```
The output above shows that only the 2 VMs for the master and worker exist, i.e. no load balancer.
OK @eratnch, you might want to refer to https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/templates/cluster-template-without-lb.yaml
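If you are using clusterctl, that template should be selectable as the `without-lb` flavor. Roughly like this; a sketch, with flag spelling per clusterctl v0.3.x, where the subcommand was `config cluster` (newer versions use `generate cluster`):

```sh
# Sketch: render the without-lb flavor of the CAPO cluster templates.
clusterctl config cluster capi-quickstart \
  --flavor without-lb \
  --kubernetes-version v1.17.3 \
  --control-plane-machine-count=1 \
  --worker-machine-count=1 \
  > capi-quickstart-without-lb.yaml
```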
Thanks @jichenjc.
My scenario is to use multiple masters (not just 1), but with an external LB such as HAProxy or MetalLB.
Can I use https://github.com/kubernetes-sigs/cluster-api-provider-openstack/blob/master/templates/cluster-template-without-lb.yaml to deploy more than 1 master? I haven't tried this before; I know I can deploy more than 1 worker. If this is possible, how does the load balancing happen between the API servers of the master nodes?
If this is not possible, then I guess the only option is to use cluster-template.yaml to deploy more than 1 master node.
Is there any documentation available to help connect with an external LB using CAPO? Can you please advise?
This is a question you might want to ask at https://github.com/kubernetes-sigs/cluster-api; I think it's not supported in cluster-api-provider-openstack right now...
Um, on second thought, there is actually a `replicas: 1` param in KubeadmControlPlane; it seems this is something we can adjust to create more than one control plane node. But how to make those control plane nodes represent one cluster needs further checking.
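Something like this is what I mean; a minimal sketch with v1alpha3 field names, with `kubeadmConfigSpec` omitted for brevity:

```yaml
apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: capi-quickstart-control-plane
  namespace: default
spec:
  replicas: 3  # bumped from 1 to ask for 3 control plane nodes
  infrastructureTemplate:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
    kind: OpenStackMachineTemplate
    name: capi-quickstart-control-plane
  version: v1.17.3
  # kubeadmConfigSpec: ... (unchanged from the quickstart template)
```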
A quick test: I set replicas to 3 but only got 2 masters, so something is wrong (output below). Basically, the requirement is to delegate the LB from inside the cluster to the outside: if we can create 3 floating IPs that belong to the same cluster and point an external LB at those 3, we should be good; a rough sketch follows the listing below. Will dig more here.
```
+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------------------------------------------------------+
| ID                                   | Name                               | Status | Task State | Power State | Networks                                                            |
+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------------------------------------------------------+
| 786522ee-0c87-4df2-bb1c-326b394cc2a5 | capi-openstack-control-plane-dclp7 | ACTIVE | -          | Running     | k8s-clusterapi-cluster-default-capi-openstack=10.6.0.61             |
| c01b7fd0-c4a6-494b-bebb-4446f312e944 | capi-openstack-control-plane-pfxbj | ACTIVE | -          | Running     | k8s-clusterapi-cluster-default-capi-openstack=10.6.0.83, 172.24.4.2 |
| 06141e53-edcf-4cc1-9204-66f7c696bcc7 | capi-openstack-md-0-zdk5p          | ACTIVE | -          | Running     | k8s-clusterapi-cluster-default-capi-openstack=10.6.0.37             |
+--------------------------------------+------------------------------------+--------+------------+-------------+---------------------------------------------------------------------+
```
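Roughly what I have in mind for the delegation; just a sketch, the `172.24.4.100` VIP is made up, and today CAPO will not register the masters with such an external LB for you (HAProxy/MetalLB would have to be pointed at the per-master floating IPs out of band):

```yaml
apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
kind: OpenStackCluster
metadata:
  name: capi-quickstart
  namespace: default
spec:
  controlPlaneEndpoint:
    host: 172.24.4.100  # hypothetical external LB VIP, not a node floating IP
    port: 6443
  managedAPIServerLoadBalancer: false  # keep the managed (Octavia) LB off
  useOctavia: false
```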
Checking https://github.com/kubernetes-sigs/cluster-api/issues/1250 for further info. It seems cluster-api does not support this yet, so the OpenStack provider needs to wait for that to be ready.
@fejta-bot: Closing this issue.