F5Networks / k8s-bigip-ctlr

Repository for F5 Container Ingress Services for Kubernetes & OpenShift.
Apache License 2.0

Clouddocs CRD nodeport sample does not work #1531

Closed gwolfis closed 3 years ago

gwolfis commented 3 years ago

Setup Details

CIS Version : 2.1.1
Build: f5networks/k8s-bigip-ctlr:latest
BIG-IP Version: 15.0.1
AS3 Version: 3.20
Agent Mode: AS3/CRD
Orchestration: K8S/OSCP
Orchestration Version: k8s 1.19
Pool Mode: Nodeport
Additional Setup details: <Platform/CNI Plugins/ cluster nodes/ etc>

Description

The sample mentioned at clouddocs (https://clouddocs.f5.com/containers/latest/userguide/crd.html#examples-repository) does not work. First, the YAML isn't syntactically correct: it uses apiVersion: extensions/v1beta1 instead of apps/v1, and it is missing a comma in the args section.

Running kubectl create -f sample-nodeport-k8s-bigip-ctlr-crd-secret.yml produces the following errors:

```
error: error validating "sample-nodeport-k8s-bigip-ctlr-crd-secret.yml": error validating data:
[ValidationError(Deployment.spec.template): unknown field "containers" in io.k8s.api.core.v1.PodTemplateSpec,
ValidationError(Deployment.spec.template): unknown field "imagePullSecrets" in io.k8s.api.core.v1.PodTemplateSpec,
ValidationError(Deployment.spec.template.metadata): unknown field "app" in io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta,
ValidationError(Deployment.spec.template): unknown field "serviceAccountName" in io.k8s.api.core.v1.PodTemplateSpec,
ValidationError(Deployment.spec): missing required field "selector" in io.k8s.api.apps.v1.DeploymentSpec];
if you choose to ignore these errors, turn validation off with --validate=false
```
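The errors point at two problems beyond the wrong apiVersion: spec.selector is missing (required by apps/v1), and containers, imagePullSecrets, and serviceAccountName are indented directly under spec.template instead of spec.template.spec (likewise "app" should sit under metadata.labels, not metadata). A minimal sketch of a corrected Deployment skeleton follows; the names, namespace, secret, and args are illustrative placeholders, not the exact values from the clouddocs sample:

```yaml
apiVersion: apps/v1                  # was extensions/v1beta1
kind: Deployment
metadata:
  name: k8s-bigip-ctlr
  namespace: kube-system
spec:
  replicas: 1
  selector:                          # required field in apps/v1
    matchLabels:
      app: k8s-bigip-ctlr
  template:
    metadata:
      labels:
        app: k8s-bigip-ctlr          # "app" is a label, not a metadata field
    spec:                            # pod-level fields belong under template.spec
      serviceAccountName: bigip-ctlr
      imagePullSecrets:
        - name: f5-docker-images     # placeholder secret name
      containers:
        - name: k8s-bigip-ctlr
          image: f5networks/k8s-bigip-ctlr:latest
          args:                      # block-style list sidesteps the missing-comma issue
            - --bigip-username=admin
            - --bigip-password=<password>
            - --bigip-url=<bigip-mgmt-ip>
            - --bigip-partition=kubernetes
            - --pool-member-type=nodeport
            - --custom-resource-mode=true
```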

Steps To Reproduce

1) Try to build the CRD deployment based on the steps mentioned in clouddocs and you will experience the same errors.

Expected Result

It should bring the CIS controller up with CRD support.

When you give an example with defined steps, my expectation is that after completing the last step I have a working example.

Actual Result

Diagnostic Information

<Configuration files, error messages, logs>
Note: Sanitize the data. For example, be mindful of IPs, ports, application names and URLs
Note: The following F5 article outlines the information required when opening an issue.
https://support.f5.com/csp/article/K60974137

Observations (if any)

mdditt2000 commented 3 years ago

@gwolfis I am going to get the repo fixed. I just set up CRDs with NodePort and it works well. Please use my repo https://github.com/mdditt2000/kubernetes-1-19/tree/master/cis%202.2/crd/big-ip-60-nodeport

Next I want to add a node selector and will update the link above.

gwolfis commented 3 years ago

Yes, using your repo this works. The only minor issue is the pool, which needs to be added manually. But that's okay for testing.

Kind regards,

Gert

mdditt2000 commented 3 years ago

@gwolfis not sure I understand why the pools need to be added manually. That must be something on your side. I will reach out to you via F5 email.

mdditt2000 commented 3 years ago

Added the label f5role=worker to nodes 2-4.
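A sketch of the labeling commands presumably used (node names taken from the output below):

```
kubectl label node k8s-1-19-node2.lab.com f5role=worker
kubectl label node k8s-1-19-node3.lab.com f5role=worker
kubectl label node k8s-1-19-node4.lab.com f5role=worker
```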

```
[kube@k8s-1-19-master 1531]$ kubectl get nodes --show-labels
NAME                      STATUS   ROLES    AGE   VERSION   LABELS
k8s-1-19-master.lab.com   Ready    <none>   33d   v1.19.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-1-19-master.lab.com,kubernetes.io/os=linux
k8s-1-19-node1.lab.com    Ready    <none>   33d   v1.19.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-1-19-node1.lab.com,kubernetes.io/os=linux
k8s-1-19-node2.lab.com    Ready    <none>   33d   v1.19.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,f5role=worker,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-1-19-node2.lab.com,kubernetes.io/os=linux
k8s-1-19-node3.lab.com    Ready    <none>   33d   v1.19.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,f5role=worker,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-1-19-node3.lab.com,kubernetes.io/os=linux
k8s-1-19-node4.lab.com    Ready    <none>   33d   v1.19.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,f5role=worker,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-1-19-node4.lab.com,kubernetes.io/os=linux
```

```
[kube@k8s-1-19-master 1531]$ kubectl create -f virtual-server-80.yaml
virtualserver.cis.f5.com/f5-demo created
```

Define the label in the CRD VirtualServer and you're done. No global setting in the CIS deployment is required.

apiVersion: "cis.f5.com/v1" kind: VirtualServer metadata: name: f5-demo labels: f5cr: "true" spec: virtualServerAddress: "10.192.75.108" host: mysite.f5demo.com pools:

[screenshot: BIG-IP pool members showing only the three nodes labeled f5role=worker]

Remove the label and I see all 5 nodes

[screenshot: BIG-IP pool members showing all 5 nodes]

Files can be located at https://github.com/mdditt2000/kubernetes-1-19/tree/master/cis%202.2/github/1531