Closed: jonrober closed this issue 1 year ago.
@jonrober - Please share your CIS configuration and TS YAML info.
CIS helm chart:
```yaml
bigip_login_secret: bigip-login
rbac:
  create: true
serviceAccount:
  create: true
# This namespace is where the Controller lives;
namespace: kube-system
ingressClass:
  create: true
  ingressClassName: f5
  defaultController: true
args:
  # See http://clouddocs.f5.com/products/connectors/k8s-bigip-ctlr/latest/#controller-configuration-parameters
  # NOTE: helm has difficulty with values using `-`; `_` are used for naming
  # and are replaced with `-` during rendering.
  # REQUIRED Params
  bigip_url: 'k8s-lb.xxxx'
  bigip_partition: 'k8s-sul-dlss'
  pool_member_type: 'nodeport'
  insecure: true
  log_level: 'DEBUG'
  custom_resource_mode: true
  use_node_internal: true
image:
  user: f5networks
  repo: k8s-bigip-ctlr
  pullPolicy: Always
version: 2.7.1
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
limits_cpu: 100m
limits_memory: 512Mi
requests_cpu: 100m
requests_memory: 512Mi
```
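The NOTE in the values above (underscores in `args` keys become dashes in the rendered flags) can be sketched as follows. This is an illustrative mapping only, not the chart's actual Go template code:

```python
# Illustrative sketch (an assumption, not the chart's real template logic):
# each key under `args` is rendered as a `--dashed-flag=value` controller
# argument, with `_` replaced by `-` during rendering.
def render_args(args: dict) -> list[str]:
    flags = []
    for key, value in sorted(args.items()):
        flag = key.replace("_", "-")
        # booleans render in lowercase, matching the generated Deployment args
        if isinstance(value, bool):
            value = str(value).lower()
        flags.append(f"--{flag}={value}")
    return flags

print(render_args({
    "bigip_partition": "k8s-sul-dlss",
    "pool_member_type": "nodeport",
    "use_node_internal": True,
}))
# ['--bigip-partition=k8s-sul-dlss', '--pool-member-type=nodeport', '--use-node-internal=true']
```

The rendered Deployment further down shows exactly this dashed form in its `args` list.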
TransportServer manifest:
```yaml
apiVersion: "cis.f5.com/v1"
kind: TransportServer
metadata:
  name: transport-server
  labels:
    f5cr: "true"
spec:
  virtualServerAddress: "171.67.44.198"
  virtualServerName: "test-transport"
  virtualServerPort: 80
  mode: standard
  snat: auto
  pool:
    service: example-service
    servicePort: 8080
    monitor:
      type: tcp
      interval: 10
      timeout: 10
```
@jonrober - Please share the CIS configuration generated by the Helm chart.
```yaml
# Source: f5-bigip-ctlr/templates/f5-bigip-ctlr-serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: f5-f5-bigip-ctlr
  namespace: kube-system
  labels:
    app.kubernetes.io/instance: f5
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: f5-bigip-ctlr
    app: f5-bigip-ctlr
    chart: f5-bigip-ctlr-0.0.18
    release: f5
    heritage: Helm
---
# Source: f5-bigip-ctlr/templates/f5-bigip-ctlr-clusterrole.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: f5-f5-bigip-ctlr
  labels:
    app.kubernetes.io/instance: f5
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: f5-bigip-ctlr
    app: f5-bigip-ctlr
    chart: f5-bigip-ctlr-0.0.18
    release: f5
    heritage: Helm
rules:
  - verbs:
      - get
      - list
      - watch
    apiGroups:
      - ''
      - apps
      - extensions
      - route.openshift.io
      - networking.k8s.io
    resources:
      - nodes
      - services
      - endpoints
      - namespaces
      - ingresses
      - ingressclasses
      - secrets
      - pods
      - routes
  - verbs:
      - get
      - list
      - watch
      - update
      - create
      - patch
    apiGroups:
      - ''
      - apps
      - extensions
      - route.openshift.io
      - networking.k8s.io
    resources:
      - configmaps
      - events
      - ingresses/status
      - routes/status
      - services/status
  - verbs:
      - get
      - list
      - watch
      - update
      - patch
    apiGroups:
      - cis.f5.com
    resources:
      - virtualservers
      - tlsprofiles
      - transportservers
      - externaldnses
      - ingresslinks
      - transportservers/status
      - virtualservers/status
      - ingresslinks/status
      - policies
---
# Source: f5-bigip-ctlr/templates/f5-bigip-ctlr-clusterrolebinding.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: f5-f5-bigip-ctlr
  namespace: kube-system
  labels:
    app.kubernetes.io/instance: f5
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: f5-bigip-ctlr
    app: f5-bigip-ctlr
    chart: f5-bigip-ctlr-0.0.18
    release: f5
    heritage: Helm
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: f5-f5-bigip-ctlr
subjects:
  - kind: ServiceAccount
    name: f5-f5-bigip-ctlr
    namespace: kube-system
---
# Source: f5-bigip-ctlr/templates/f5-bigip-ctlr-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: f5-f5-bigip-ctlr
  namespace: kube-system
  labels:
    app.kubernetes.io/instance: f5
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: f5-bigip-ctlr
    app: f5-bigip-ctlr
    chart: f5-bigip-ctlr-0.0.18
    release: f5
    heritage: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app: f5-bigip-ctlr
  template:
    metadata:
      labels:
        app.kubernetes.io/instance: f5
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: f5-bigip-ctlr
        app: f5-bigip-ctlr
        release: f5
    spec:
      serviceAccountName: f5-f5-bigip-ctlr
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
        - name: f5-bigip-ctlr
          image: "f5networks/k8s-bigip-ctlr:2.7.1"
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 15
            periodSeconds: 15
            successThreshold: 1
            timeoutSeconds: 15
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /health
              port: 8080
              scheme: HTTP
            initialDelaySeconds: 30
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 15
          volumeMounts:
            - name: bigip-creds
              mountPath: "/tmp/creds"
              readOnly: true
          imagePullPolicy: Always
          command:
            - /app/bin/k8s-bigip-ctlr
          args:
            - --credentials-directory
            - /tmp/creds
            - --bigip-partition=k8s-sul-dlss
            - --bigip-url=k8s-lb.xxxx.
            - --custom-resource-mode=true
            - --insecure=true
            - --log-level=DEBUG
            - --pool-member-type=nodeport
            - --use-node-internal=true
          resources:
            limits:
              cpu: 100m
              memory: 512Mi
            requests:
              cpu: 100m
              memory: 512Mi
      volumes:
        - name: bigip-creds
          secret:
            secretName: bigip-login
---
# Source: f5-bigip-ctlr/templates/f5-bigip-ctlr-ingress-class.yaml
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: f5
  annotations:
    ingressclass.kubernetes.io/is-default-class: "false"
spec:
  controller: f5.com/cntr-ingress-svcs
```
@jonrober I reached out to your colleague to get your CRD VS configuration. Could you please send that to @trinaths and me?
Did you mean this or something else?
```yaml
kind: VirtualServer
metadata:
  name: k8s-hw
  labels:
    f5cr: "true"
spec:
  host: sul-k8s-f5-hw.stanford.edu
  virtualServerAddress: "171.67.44.197"
  virtualServerName: "sul-hello-world"
  pools:
    - path: /
      service: hello-kubernetes-hello-world
      servicePort: 80
      monitor:
        type: http
        send: "GET /rn"
        recv: ""
        interval: 10
        timeout: 10
```
Ah, or did you mean the CRD itself?
```yaml
kind: CustomResourceDefinition
metadata:
  creationTimestamp: "2022-03-22T22:22:57Z"
  generation: 1
  labels:
    app.kubernetes.io/instance: f5-bigip-ctlr
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: f5-bigip-ctlr
  name: virtualservers.cis.f5.com
  resourceVersion: "2940947"
  uid: 15128003-acd9-4867-85e3-442e0c2e3232
spec:
  conversion:
    strategy: None
  group: cis.f5.com
  names:
    kind: VirtualServer
    listKind: VirtualServerList
    plural: virtualservers
    shortNames:
      - vs
    singular: virtualserver
  scope: Namespaced
  versions:
    - additionalPrinterColumns:
        - description: hostname
          jsonPath: .spec.host
          name: host
          type: string
        - description: TLS Profile attached
          jsonPath: .spec.tlsProfileName
          name: tlsProfileName
          type: string
        - description: Http Traffic Termination
          jsonPath: .spec.httpTraffic
          name: httpTraffic
          type: string
        - description: IP address of virtualServer
          jsonPath: .spec.virtualServerAddress
          name: IPAddress
          type: string
        - description: ipamLabel for virtual server
          jsonPath: .spec.ipamLabel
          name: ipamLabel
          type: string
        - description: IP address of virtualServer
          jsonPath: .status.vsAddress
          name: IPAMVSAddress
          type: string
        - description: status of VirtualServer
          jsonPath: .status.status
          name: STATUS
          type: string
        - jsonPath: .metadata.creationTimestamp
          name: Age
          type: date
      name: v1
      schema:
        openAPIV3Schema:
          properties:
            spec:
              properties:
                allowVlans:
                  items:
                    pattern: ^\/([A-z0-9-_+]+\/)*([A-z0-9-_]+\/?)*$
                    type: string
                  type: array
                host:
                  pattern: ^(([a-zA-Z0-9\*]|[a-zA-Z0-9][a-zA-Z0-9\-]*[a-zA-Z0-9])\.)*([A-Za-z0-9]|[A-Za-z0-9][A-Za-z0-9\-]*[A-Za-z0-9])$
                  type: string
                hostGroup:
                  pattern: ^([A-z0-9-_+])*([A-z0-9])$
                  type: string
                httpTraffic:
                  type: string
                iRules:
                  items:
                    type: string
                  type: array
                ipamLabel:
                  type: string
                policyName:
                  pattern: ^([A-z0-9-_+])*([A-z0-9])$
                  type: string
                pools:
                  items:
                    properties:
                      monitor:
                        properties:
                          interval:
                            type: integer
                          recv:
                            type: string
                          send:
                            type: string
                          timeout:
                            type: integer
                          type:
                            enum:
                              - http
                              - https
                            type: string
                        required:
                          - type
                          - send
                          - interval
                        type: object
                      nodeMemberLabel:
                        pattern: ^[a-zA-Z0-9][-A-Za-z0-9_.\/]{0,61}[a-zA-Z0-9]=[a-zA-Z0-9][-A-Za-z0-9_.]{0,61}[a-zA-Z0-9]$
                        type: string
                      path:
                        pattern: ^\/([A-z0-9-_+]+\/)*([A-z0-9]+\/?)*$
                        type: string
                      rewrite:
                        pattern: ^\/([A-z0-9-_+]+\/)*([A-z0-9]+\/?)*$
                        type: string
                      service:
                        pattern: ^([A-z0-9-_+])*([A-z0-9])$
                        type: string
                      servicePort:
                        maximum: 65535
                        minimum: 1
                        type: integer
                    type: object
                  type: array
                rewriteAppRoot:
                  pattern: ^\/([A-z0-9-_+]+\/)*([A-z0-9]+\/?)*$
                  type: string
                serviceAddress:
                  items:
                    properties:
                      arpEnabled:
                        type: boolean
                      icmpEcho:
                        enum:
                          - enable
                          - disable
                          - selective
                        type: string
                      routeAdvertisement:
                        enum:
                          - enable
                          - disable
                          - selective
                          - always
                          - any
                          - all
                        type: string
                      spanningEnabled:
                        type: boolean
                      trafficGroup:
                        pattern: ^\/([A-z0-9-_+]+\/)*([A-z0-9]+\/?)*$
                        type: string
                    type: object
                  maxItems: 1
                  type: array
                snat:
                  type: string
                tlsProfileName:
                  type: string
                virtualServerAddress:
                  pattern: ^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])|(([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,7}:|([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}|([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}|([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}|([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}|([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}|[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})|:((:[0-9a-fA-F]{1,4}){1,7}|:)|fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}|::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])|([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]))$
                  type: string
                virtualServerHTTPPort:
                  maximum: 65535
                  minimum: 1
                  type: integer
                virtualServerHTTPSPort:
                  maximum: 65535
                  minimum: 1
                  type: integer
                virtualServerName:
                  pattern: ^([A-z0-9-_+])*([A-z0-9])$
                  type: string
                waf:
                  pattern: ^\/([A-z0-9-_+]+\/)*([A-z0-9]+\/?)*$
                  type: string
              type: object
            status:
              properties:
                status:
                  default: Pending
                  type: string
                vsAddress:
                  default: None
                  type: string
              type: object
          type: object
      served: true
      storage: true
      subresources:
        status: {}
status:
  acceptedNames:
    kind: VirtualServer
    listKind: VirtualServerList
    plural: virtualservers
    shortNames:
      - vs
    singular: virtualserver
  conditions:
    - lastTransitionTime: "2022-03-22T22:22:57Z"
      message: no conflicts found
      reason: NoConflicts
      status: "True"
      type: NamesAccepted
    - lastTransitionTime: "2022-03-22T22:22:57Z"
      message: the initial names have been accepted
      reason: InitialNamesAccepted
      status: "True"
      type: Established
  storedVersions:
    - v1
```
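As a quick sanity check, the constraints in that schema can be verified client-side before `kubectl apply`. A minimal sketch in Python; the IPv4 regex is adapted from the IPv4 portion of the CRD's `virtualServerAddress` pattern (IPv6 alternatives omitted, explicit anchoring added), and the service-name pattern is taken verbatim from the schema:

```python
import re

# IPv4 portion of the CRD's virtualServerAddress pattern (the full pattern
# also accepts IPv6 forms, omitted here for readability).
IPV4_RE = re.compile(
    r"^(([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])\.){3}"
    r"([0-9]|[1-9][0-9]|1[0-9]{2}|2[0-4][0-9]|25[0-5])$"
)
# service name pattern copied from the CRD schema
SERVICE_RE = re.compile(r"^([A-z0-9-_+])*([A-z0-9])$")

def lint_virtualserver(spec: dict) -> list[str]:
    """Return a list of schema violations for a VirtualServer spec dict."""
    errors = []
    if not IPV4_RE.match(spec.get("virtualServerAddress", "")):
        errors.append("virtualServerAddress is not a valid IPv4 address")
    for pool in spec.get("pools", []):
        if not SERVICE_RE.match(pool.get("service", "")):
            errors.append(f"pool service {pool.get('service')!r} fails pattern")
        port = pool.get("servicePort", 0)
        if not 1 <= port <= 65535:
            errors.append("servicePort must be between 1 and 65535")
    return errors

spec = {
    "virtualServerAddress": "171.67.44.197",
    "pools": [{"service": "hello-kubernetes-hello-world", "servicePort": 80}],
}
print(lint_virtualserver(spec))  # [] - the manifest above passes both checks
```

This only mirrors the schema; it cannot catch the runtime symptom discussed here (a valid manifest producing an empty pool on the BIG-IP).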
@mdditt2000 - Hey, I want to report that we are currently experiencing some of the same symptoms described in this issue. Specifically, "virtual server is set up and mostly looks correct, but it has no default pool set": our default pool is not being set either.
We are on 2.9.1
Can we get some attention on this? - Let us know what you need. A new ticket?
@jonrober Please share the below info for further investigation of this issue.
```shell
kubectl get po,deploy,svc,ep -n <namespace-where-VS-TS-and-svc/deployment-are-created>
```
CIS logs with the VS and TS CRDs configured. We want to check the difference mentioned in the issue.
@glermaidt Please share the above info and logs.
@jonrober @glermaidt - a similar issue and info at https://github.com/F5Networks/k8s-bigip-ctlr/issues/2377
Sorry for the delay, was out last week.
The pool vs policy issue in #2377 is what I'm seeing as well for that part, so it's good to know it's not a blocker.
For the nodes, I can't reproduce TS working now either. The current state for both TS and VS is that everything looks correct except that no nodes are added. There have been a number of general changes to the cluster in the past few months, and I'm trying to get back to a state where at least TS works, for comparison.
@jonrober - Unable to reproduce this issue. Please share the requested data.
Okay, I've had time to set things back up again and try with full attention, and I can no longer replicate the issue at all. Everything looks good now, and I'm not sure what the problem was. We've touched a lot of different things in the meantime and I can't narrow down what might have changed. Good to close the ticket.
Closing this issue as suggested.
Setup Details
CIS Version : 2.7.1
Build: f5networks/k8s-bigip-ctlr:2.7.1
BIGIP Version: BIG-IP 14.1.4.5 Build 0.0.7 Point Release 5
AS3 Version: 3.x
Agent Mode: AS3
Orchestration: K8S Orchestration Version: v1.23.5
Pool Mode: Nodeport
Additional Setup details: Charmed Kubernetes, Canal
Description
When trying to set up a VirtualServer via CRD, we're only getting a half-complete setup. This seems to be something specific to VirtualServers, as setting up a TransportServer CRD works entirely correctly.
Steps To Reproduce
1) Apply the manifest for the VirtualServer (see below)
2) Check the F5 web UI
Expected Result
Fully configured virtual server.
Actual Result
The configuration is only partially complete:
If this is run while a TransportServer is already set up, the nodes created from it do exist, but the VirtualServer's pool is still empty.
Diagnostic Information
Manifest for virtual server:
Service exists:
Logs from CIS pod:
Observations (if any)