I am running into an issue where I cannot configure the ingress to reach the CVAT front end or back end. I followed the documentation for deploying CVAT on Kubernetes using Helm. Here are the relevant logs from configuring the cluster.
helm upgrade -n cvat test -i --create-namespace ./helm-chart -f ./helm-chart/values.yaml -f ./helm-chart/values.override.yaml
Release "test" does not exist. Installing it now.
walk.go:74: found symbolic link in path: /.../helm-chart/analytics resolves to /.../components/analytics. Contents of linked file included and used
NAME: test
LAST DEPLOYED: Tue Jul 2 13:50:33 2024
NAMESPACE: cvat
STATUS: deployed
REVISION: 1
I expect cvat.local to respond when I send requests to it with ping or curl.
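For reference, the curl check I have in mind is equivalent to sending an HTTP request to the minikube IP with a `Host: cvat.local` header, which is what the ingress routes on. A minimal Python sketch of that check (stdlib only; the URL, path, and timeout are illustrative, and the request can only succeed against the live cluster):

```python
import urllib.request
import urllib.error

# Build the same request curl would send: target the node IP directly,
# but present the ingress hostname in the Host header.
req = urllib.request.Request(
    "http://192.168.49.2/",
    headers={"Host": "cvat.local"},
)

try:
    with urllib.request.urlopen(req, timeout=5) as resp:
        print("status:", resp.status)
except (urllib.error.URLError, OSError) as exc:
    # Without a reachable ingress this is what I see instead of a response.
    print("request failed:", exc)
```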
Possible Solution
No response
Context
This is the values.override.yaml file I used for context:
traefik:
  enabled: true
  service:
    externalIPs:
      - "192.168.49.2" # add minikube IP when testing locally
ingress:
  enabled: true
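For comparison, a hedged sketch of the same override with the ingress host and class spelled out explicitly (the `hostname` and `className` keys exist in the chart's values.yaml below; the value `traefik` is an assumption about the IngressClass name that the bundled Traefik registers):

```yaml
traefik:
  enabled: true
  service:
    externalIPs:
      - "192.168.49.2" # minikube IP
ingress:
  enabled: true
  hostname: cvat.local # assumption: matches the /etc/hosts entry
  className: traefik   # assumption: IngressClass created by the bundled Traefik
```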
The values.yaml file looks like this:
# Default values for cvat.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

cvat:
  backend:
    labels: {}
    annotations: {}
    resources: {}
    affinity: {}
    tolerations: []
    additionalEnv: []
    additionalVolumes: []
    additionalVolumeMounts: []
    # -- The service account the backend pods will use to interact with the Kubernetes API
    serviceAccount:
      name: default
    initializer:
      labels: {}
      annotations: {}
      resources: {}
      affinity: {}
      tolerations: []
      additionalEnv: []
      additionalVolumes: []
      additionalVolumeMounts: []
    server:
      replicas: 1
      labels: {}
      annotations: {}
      resources: {}
      affinity: {}
      tolerations: []
      envs:
        ALLOWED_HOSTS: "*"
      additionalEnv: []
      additionalVolumes: []
      additionalVolumeMounts: []
    worker:
      export:
        replicas: 2
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      import:
        replicas: 2
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      annotation:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      webhooks:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      qualityreports:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
      analyticsreports:
        replicas: 1
        labels: {}
        annotations: {}
        resources: {}
        affinity: {}
        tolerations: []
        additionalEnv: []
        additionalVolumes: []
        additionalVolumeMounts: []
    utils:
      replicas: 1
      labels: {}
      annotations: {}
      resources: {}
      affinity: {}
      tolerations: []
      additionalEnv: []
      additionalVolumes: []
      additionalVolumeMounts: []
    replicas: 1
    image: cvat/server
    tag: dev
    imagePullPolicy: Always
    permissionFix:
      enabled: true
    service:
      annotations:
        traefik.ingress.kubernetes.io/service.sticky.cookie: "true"
      spec:
        type: ClusterIP
        ports:
          - port: 8080
            targetPort: 8080
            protocol: TCP
            name: http
    defaultStorage:
      enabled: true
      # storageClassName: default
      # accessModes:
      #   - ReadWriteMany
      size: 20Gi
    disableDistinctCachePerService: false
  frontend:
    replicas: 1
    image: cvat/ui
    tag: dev
    imagePullPolicy: Always
    labels: {}
    # test: test
    annotations: {}
    # test.io/test: test
    resources: {}
    affinity: {}
    tolerations: []
    # nodeAffinity:
    #   requiredDuringSchedulingIgnoredDuringExecution:
    #     nodeSelectorTerms:
    #       - matchExpressions:
    #           - key: kubernetes.io/e2e-az-name
    #             operator: In
    #             values:
    #               - e2e-az1
    #               - e2e-az2
    additionalEnv: []
    # Example:
    # - name: volume-from-secret
    # - name: TEST
    #   value: "test"
    additionalVolumes: []
    # Example (assumes that the PVC was already created):
    # - name: tmp
    #   persistentVolumeClaim:
    #     claimName: tmp
    additionalVolumeMounts: []
    # Example:
    # - mountPath: /tmp
    #   name: tmp
    #   subPath: test
    service:
      type: ClusterIP
      ports:
        - port: 80
          targetPort: 80
          protocol: TCP
          name: http
  opa:
    replicas: 1
    image: openpolicyagent/opa
    tag: 0.63.0
    imagePullPolicy: IfNotPresent
    labels: {}
    # test: test
    annotations: {}
    # test.io/test: test
    resources: {}
    affinity: {}
    tolerations: []
    # nodeAffinity:
    #   requiredDuringSchedulingIgnoredDuringExecution:
    #     nodeSelectorTerms:
    #       - matchExpressions:
    #           - key: kubernetes.io/e2e-az-name
    #             operator: In
    #             values:
    #               - e2e-az1
    #               - e2e-az2
    additionalEnv: []
    # Example:
    # - name: volume-from-secret
    # - name: TEST
    #   value: "test"
    additionalVolumes: []
    # Example (assumes that the PVC was already created):
    # - name: tmp
    #   persistentVolumeClaim:
    #     claimName: tmp
    additionalVolumeMounts: []
    # Example:
    # - mountPath: /tmp
    #   name: tmp
    #   subPath: test
    # Sets the service name to "opa" in order to be compatible with Docker Compose.
    # Necessary because IAM_OPA_DATA_URL is changed via environment variables in the
    # current images. Hinders multiple deployments due to the duplicate name.
    composeCompatibleServiceName: true
    service:
      type: ClusterIP
      ports:
        - port: 8181
          targetPort: 8181
          protocol: TCP
          name: http
  kvrocks:
    enabled: true
    external:
      host: kvrocks-external.localdomain
    existingSecret: "cvat-kvrocks-secret"
    secret:
      create: true
      name: cvat-kvrocks-secret
      password: cvat_kvrocks
    image: apache/kvrocks
    tag: 2.7.0
    imagePullPolicy: IfNotPresent
    labels: {}
    # test: test
    annotations: {}
    # test.io/test: test
    resources: {}
    affinity: {}
    tolerations: []
    nodeAffinity: {}
    # requiredDuringSchedulingIgnoredDuringExecution:
    #   nodeSelectorTerms:
    #     - matchExpressions:
    #         - key: kubernetes.io/e2e-az-name
    #           operator: In
    #           values:
    #             - e2e-az1
    #             - e2e-az2
    additionalEnv: []
    # Example:
    # - name: TEST
    #   value: "test"
    additionalVolumes: []
    # Example (assumes that the PVC was already created):
    # - name: tmp
    #   persistentVolumeClaim:
    #     claimName: tmp
    additionalVolumeMounts: []
    # Example:
    # - mountPath: /tmp
    #   name: tmp
    #   subPath: test
    defaultStorage:
      enabled: true
      # storageClassName: default
      # accessModes:
      #   - ReadWriteOnce
      size: 100Gi

postgresql:
  # See https://github.com/bitnami/charts/blob/master/bitnami/postgresql/ for more info
  enabled: true # false for an external db
  external:
    # Ignored if an empty value is set
    host: ""
    # Ignored if an empty value is set
    port: ""
  # If not external, the following config will be applied by default
  auth:
    existingSecret: "{{ .Release.Name }}-postgres-secret"
    username: cvat
    database: cvat
  service:
    ports:
      postgresql: 5432
  secret:
    create: true
    name: "{{ .Release.Name }}-postgres-secret"
    password: cvat_postgresql
    postgres_password: cvat_postgresql_postgres
    replication_password: cvat_postgresql_replica

# https://artifacthub.io/packages/helm/bitnami/redis
redis:
  enabled: true
  external:
    host: 127.0.0.1
  architecture: standalone
  auth:
    existingSecret: "cvat-redis-secret"
    existingSecretPasswordKey: password
  secret:
    create: true
    name: cvat-redis-secret
    password: cvat_redis
  # TODO: persistence options

nuclio:
  enabled: false
  # See https://github.com/nuclio/nuclio/blob/master/hack/k8s/helm/nuclio/values.yaml for more info
  # registry:
  #   loginUrl: someurl
  #   credentials:
  #     username: someuser
  #     password: somepass

analytics:
  # Set clickhouse.enabled to false if you disable analytics or use an external database
  enabled: true
  clickhouseDb: cvat
  clickhouseUser: user
  clickhousePassword: user
  clickhouseHost: "{{ .Release.Name }}-clickhouse"
  clickhousePort: 8123

vector:
  envFrom:
    - secretRef:
        name: cvat-analytics-secret
  existingConfigMaps:
    - cvat-vector-config
  dataDir: "/vector-data-dir"
  containerPorts:
    - name: http
      containerPort: 80
      protocol: TCP
  service:
    ports:
      - name: http
        port: 80
        protocol: TCP
  image:
    tag: "0.26.0-alpine"

clickhouse:
  # Set to false in case of external db usage
  enabled: true
  shards: 1
  replicaCount: 1
  extraEnvVarsSecret: cvat-analytics-secret
  initdbScriptsSecret: cvat-clickhouse-init
  auth:
    username: user
    existingSecret: cvat-analytics-secret
    existingSecretKey: CLICKHOUSE_PASSWORD
  # Consider enabling zookeeper if a distributed configuration is used
  zookeeper:
    enabled: false

grafana:
  envFromSecret: cvat-analytics-secret
  datasources:
    datasources.yaml:
      apiVersion: 1
      datasources:
        - name: 'ClickHouse'
          type: 'grafana-clickhouse-datasource'
          isDefault: true
          jsonData:
            defaultDatabase: ${CLICKHOUSE_DB}
            port: ${CLICKHOUSE_PORT}
            server: ${CLICKHOUSE_HOST}
            username: ${CLICKHOUSE_USER}
            tlsSkipVerify: false
            protocol: http
          secureJsonData:
            password: ${CLICKHOUSE_PASSWORD}
          editable: false
  dashboardProviders:
    dashboardproviders.yaml:
      apiVersion: 1
      providers:
        - name: 'default'
          orgId: 1
          folder: ''
          type: file
          disableDeletion: false
          editable: true
          options:
            path: /var/lib/grafana/dashboards
  dashboardsConfigMaps:
    default: "cvat-grafana-dashboards"
  plugins:
    - grafana-clickhouse-datasource
  grafana.ini:
    server:
      root_url: https://cvat.local/analytics
    dashboards:
      default_home_dashboard_path: /var/lib/grafana/dashboards/default/all_events.json
    users:
      viewers_can_edit: true
    auth:
      disable_login_form: true
      disable_signout_menu: true
    auth.anonymous:
      enabled: true
      org_role: Admin
    auth.basic:
      enabled: false

ingress:
  ## @param ingress.enabled Enable ingress resource generation for CVAT
  ##
  enabled: false
  ## @param ingress.hostname Host for the ingress resource
  ##
  hostname: cvat.local
  ## @param ingress.annotations Additional annotations for the Ingress resource.
  ##
  ## e.g.:
  ## annotations:
  ##   kubernetes.io/ingress.class: nginx
  ##
  annotations: {}
  ## @param ingress.className IngressClass that will be used to implement the Ingress (Kubernetes 1.18+)
  ## This is supported in Kubernetes 1.18+ and required if you have more than one IngressClass marked as the default for your cluster
  ## ref: https://kubernetes.io/blog/2020/04/02/improvements-to-the-ingress-api-in-kubernetes-1.18/
  ##
  className: ""
  ## @param ingress.tls Enable TLS configuration for the host defined at `ingress.hostname` parameter
  ## TLS certificates will be retrieved from a TLS secret defined in the tlsSecretName parameter
  ##
  tls: false
  ## @param ingress.tlsSecretName Specifies the name of the secret containing TLS certificates. Ignored if ingress.tls is false
  ##
  tlsSecretName: ingress-tls-cvat

traefik:
  enabled: false
  logs:
    general:
      format: json
    access:
      enabled: true
      format: json
      fields:
        general:
          defaultmode: drop
          names:
            ClientHost: keep
            DownstreamContentSize: keep
            DownstreamStatus: keep
            Duration: keep
            RequestHost: keep
            RequestMethod: keep
            RequestPath: keep
            RequestPort: keep
            RequestProtocol: keep
            RouterName: keep
            StartUTC: keep
  providers:
    kubernetesIngress:
      allowEmptyServices: true

smokescreen:
  opts: ''
I have verified that the external IP of the k8s cluster and the DNS entry are configured correctly. Here is the output:
$ minikube ip
192.168.49.2
$ kubectl config current-context
minikube
And my /etc/hosts file:
# This file was automatically generated by WSL. To stop automatic generation of this file, add the following entry to /etc/wsl.conf:
# [network]
# generateHosts = false
127.0.0.1 localhost
127.0.1.1 mylaptop
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.49.2 cvat.local