Closed — arasyor closed this issue 6 months ago
@arasyor Can you please share the Ingress Controller logs during this activity?
nsic-netscaler-ingress-controller-3-48wnb-nsic.log.zip
Hi,
I uploaded the log file. The logs cover the activity described below.
The log output may differ from the command outputs above because I recreated the setup from scratch to keep the logs simple, but the configuration is the same.
Can you also please share your service yaml for svc python-test-app?
You can find the service yaml below.
```yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2024-02-09T15:18:26Z"
  name: python-test-app
  namespace: opensol-dev-infra
  resourceVersion: "2061657141"
  uid: 7b0b8c02-4362-467d-bba4-c0423d37ce7f
spec:
  clusterIP: 10.90.221.159
  clusterIPs:
  - 10.90.221.159
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: python-test-app
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
```
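Since the symptom is stale service-group IPs, a quick cross-check is whether Kubernetes itself is tracking the pod IPs correctly — if the Endpoints object already shows the new IPs but the NetScaler service group does not, the gap is on the controller side. A hedged sketch, using the namespace and names from the yaml above:

```shell
# Pod IPs currently backing the service; these are what the ingress
# controller is expected to push into the NetScaler service group.
kubectl get endpoints python-test-app -n opensol-dev-infra -o wide

# Compare against the live pod IPs of the deployment.
kubectl get pods -n opensol-dev-infra -l app=python-test-app \
  -o custom-columns=NAME:.metadata.name,IP:.status.podIP
```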
While checking the logs, I can see that this is not the right version as mentioned:
```
User has accepted EULA. Starting Triton
The default port for Citrix ingress controller to communicate with Citrix ADC has been changed from 80 to 443 and the protocol has been changed from HTTP to HTTPS
Citrix Ingress Controller version: 1.37.6, build: Fri 17 Nov 17:16:51 UTC 2023
```
It looks like you are using the operator, and the operator for version 1.39.6 has not been released yet. Can you please try installing with the Helm charts?
Hi,
I installed the controller with Helm, but nothing changed; the problem still exists. You can find the logs attached.
helm-nsic-netscaler-ingress-controller-3-6h2gk-nsic.log.zip
I executed the Helm command below for the installation.
```shell
helm upgrade --install nsic netscaler/netscaler-ingress-controller \
  --namespace netscaler-ingress-controller \
  --create-namespace \
  --set adcCredentialSecret=nslogin-local \
  --set clusterName=ocptstinf01 \
  --set crds.install=true \
  --set crds.retainOnDelete=false \
  --set defaultSSLCertSecret=nsic-tst-cert \
  --set entityPrefix=openshift \
  --set ingressClass[0]=netscaler \
  --set license.accept=yes \
  --set nodeSelector.key=node-role.kubernetes.io/infra \
  --set nodeSelector.value="" \
  --set nodeWatch=true \
  --set nsIP=10.81.22.10 \
  --set nsSNIPS='["10.79.94.56"]' \
  --set nsVIP=10.79.94.55 \
  --set openshift=true \
  --set optimizeEndpointBinding=true \
  --set routeLabels="netscaler-ingress-controller=true" \
  --set tolerations[0].effect=NoSchedule \
  --set tolerations[0].key=node-role.kubernetes.io/infra \
  --set tolerations[0].operator=Exists \
  --set tolerations[0].value=""
```
```
$ helm list
NAME  NAMESPACE                     REVISION  UPDATED                                  STATUS    CHART                                APP VERSION
nsic  netscaler-ingress-controller  3         2024-02-27 20:41:37.703355013 +0300 +03  deployed  netscaler-ingress-controller-1.39.6  1.39.6
```
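Since the earlier operator install had silently run an older image (1.37.6) than expected, it may be worth confirming which image the Helm release actually deployed. A hedged sketch, assuming the namespace from the install command above:

```shell
# Print the image of every container in the ingress-controller namespace;
# the tag should match the chart's app version (1.39.6 here).
kubectl get pods -n netscaler-ingress-controller \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}'
```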
Can you please share the deployment yaml of the app: python-test-app? Can you also share the customer name?
Hi,
The deployment yaml of python-test-app is below.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "19"
  creationTimestamp: "2024-02-09T15:17:56Z"
  generation: 35
  name: python-test-app
  namespace: opensol-dev-infra
  resourceVersion: "2120156661"
  uid: 5d651e46-1cef-4ce0-8a3a-d9226991580a
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: python-test-app
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/restartedAt: "2024-02-27T20:48:28+03:00"
      creationTimestamp: null
      labels:
        app: python-test-app
    spec:
      containers:
      - image: repo.finansbank.com.tr/infra-docker/python-test-app:latest
        imagePullPolicy: Always
        name: python-test-app
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
```
The customer name is QNB Finansbank.
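The `restartedAt` annotation in the deployment above suggests the rollout was triggered with `kubectl rollout restart`. A minimal way to reproduce the symptom and watch whether the endpoints (and hence the service-group members the controller should push) move to the new pod IPs — a sketch, assuming the names from the yaml above:

```shell
# Trigger a fresh rollout of the test app and wait for it to complete.
kubectl rollout restart deployment/python-test-app -n opensol-dev-infra
kubectl rollout status deployment/python-test-app -n opensol-dev-infra

# The new pod IPs shown here are what the controller must rebind to the
# NetScaler service group; if the service group still holds the old IPs,
# the LB vserver goes DOWN.
kubectl get endpoints python-test-app -n opensol-dev-infra
```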
Hi,
We would like to have a discussion on this issue. Please share your email address so that we can set up a meeting.
BR!
Hi,
You can reach me at aras.yorganci@ibtech.com.tr.
Regards.
Hi @arasyor, we have fixed this issue in version 1.40.12: https://github.com/netscaler/netscaler-k8s-ingress-controller/releases/tag/1.40.12
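To pick up the fix, the existing Helm release can likely be upgraded in place. A hedged sketch: `--reuse-values` keeps the flags from the earlier install, and it is an assumption here that the chart version number tracks the controller release:

```shell
helm repo update
helm upgrade nsic netscaler/netscaler-ingress-controller \
  --namespace netscaler-ingress-controller \
  --version 1.40.12 \
  --reuse-values
```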
Service Group IPs are not updated with new pod IPs when the application is rolled out, on an OpenShift cluster where the k8s ingress controller is installed with the NS_SNIPS option.
OpenShift version 4.12.34 with OpenShift SDN CNI.
The netscaler-k8s-ingress-controller operator object is below. Ingress Controller image tag 1.39.6 is used.
The OpenShift route object is below.
The NetScaler LB vserver after the route object was created is below.
The pod IPs of the deployment are below.
After the deployment is rolled out, new pods are created with new IPs, but the Service Group IPs are not changed and the vserver goes down.
On previous images with tag 1.33.x, when the image name was Citrix Ingress Controller, this feature was working, but it is not working now with the NetScaler Ingress Controller.