apache / pinot

Apache Pinot - A realtime distributed OLAP datastore
https://pinot.apache.org/
Apache License 2.0

[HELM]: Added checksum config annotation in stateful set for broker, controller and server #13059

Closed · abhioncbr closed this 3 days ago

abhioncbr commented 2 weeks ago

As per the issue, I added the checksum/config annotation to the broker, server, and controller StatefulSets. More information about the annotation can be found here.
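For context, the standard Helm pattern behind this annotation pipes the rendered ConfigMap template through sha256sum and stores the digest on the pod template, so any config change produces a new checksum and triggers a rolling restart. A minimal sketch of the pattern (the exact template expression used in this PR may differ; the path follows this chart's layout):

```yaml
# Sketch, e.g. in helm/pinot/templates/broker/statefulset.yaml
spec:
  template:
    metadata:
      annotations:
        # Hash the rendered broker ConfigMap; when the config changes,
        # the digest changes and Kubernetes rolls the broker pods.
        checksum/config: {{ include (print $.Template.BasePath "/broker/configmap.yaml") . | sha256sum }}
```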

Also, here is the output of the helm lint command:

```
$ helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed
```

And here is the output of the helm template command

helm template

```yaml
---
# Source: pinot/templates/serviceaccount.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: release-name-pinot
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-pinot-broker-config
data:
  pinot-broker.conf: |-
    pinot.broker.client.queryPort=8099
    pinot.broker.routing.table.builder.class=random
    pinot.set.instance.id.to.hostname=true
    pinot.query.server.port=7321
    pinot.query.runner.port=7732
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-pinot-controller-config
data:
  pinot-controller.conf: |-
    controller.helix.cluster.name=pinot-quickstart
    controller.port=9000
    controller.data.dir=/var/pinot/controller/data
    controller.zk.str=release-name-zookeeper:2181
    pinot.set.instance.id.to.hostname=true
    controller.task.scheduler.enabled=true
---
# Source: pinot/templates/minion-stateless/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-pinot-minion-stateless-config
data:
  pinot-minion-stateless.conf: |-
    pinot.minion.port=9514
    dataDir=/var/pinot/minion/data
    pinot.set.instance.id.to.hostname=true
---
# Source: pinot/templates/server/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: release-name-pinot-server-config
data:
  pinot-server.conf: |-
    pinot.server.netty.port=8098
    pinot.server.adminapi.port=8097
    pinot.server.instance.dataDir=/var/pinot/server/data/index
    pinot.server.instance.segmentTarDir=/var/pinot/server/data/segment
    pinot.set.instance.id.to.hostname=true
    pinot.server.instance.realtime.alloc.offheap=true
    pinot.query.server.port=7321
    pinot.query.runner.port=7732
---
# Source: pinot/charts/zookeeper/templates/svc-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-zookeeper-headless
  namespace: consumer
  labels:
    app.kubernetes.io/name: zookeeper
    helm.sh/chart: zookeeper-7.0.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: zookeeper
spec:
  type: ClusterIP
  clusterIP: None
  publishNotReadyAddresses: true
  ports:
    - name: tcp-client
      port: 2181
      targetPort: client
    - name: follower
      port: 2888
      targetPort: follower
    - name: tcp-election
      port: 3888
      targetPort: election
  selector:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: zookeeper
---
# Source: pinot/charts/zookeeper/templates/svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-zookeeper
  namespace: consumer
  labels:
    app.kubernetes.io/name: zookeeper
    helm.sh/chart: zookeeper-7.0.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: zookeeper
spec:
  type: ClusterIP
  ports:
    - name: tcp-client
      port: 2181
      targetPort: client
      nodePort: null
    - name: follower
      port: 2888
      targetPort: follower
    - name: tcp-election
      port: 3888
      targetPort: election
  selector:
    app.kubernetes.io/name: zookeeper
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/component: zookeeper
---
# Source: pinot/templates/broker/service-external.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-broker-external
  annotations: {}
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: broker
spec:
  type: LoadBalancer
  ports:
    - name: external-broker
      port: 8099
  selector:
    app: pinot
    release: release-name
    component: broker
---
# Source: pinot/templates/broker/service-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-broker-headless
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: broker
spec:
  clusterIP: None
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: broker
      port: 8099
  selector:
    app: pinot
    release: release-name
    component: broker
---
# Source: pinot/templates/broker/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-broker
  annotations: {}
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: broker
spec:
  type: ClusterIP
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: broker
      port: 8099
  selector:
    app: pinot
    release: release-name
    component: broker
---
# Source: pinot/templates/controller/service-external.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-controller-external
  annotations: {}
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: controller
spec:
  type: LoadBalancer
  ports:
    - name: external-controller
      port: 9000
  selector:
    app: pinot
    release: release-name
    component: controller
---
# Source: pinot/templates/controller/service-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-controller-headless
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: controller
spec:
  clusterIP: None
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: controller
      port: 9000
  selector:
    app: pinot
    release: release-name
    component: controller
---
# Source: pinot/templates/controller/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-controller
  annotations: {}
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: controller
spec:
  type: ClusterIP
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: controller
      port: 9000
  selector:
    app: pinot
    release: release-name
    component: controller
---
# Source: pinot/templates/server/service-headless.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-server-headless
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: server
spec:
  clusterIP: None
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: netty
      port: 8098
      protocol: TCP
    - name: admin
      port: 80
      targetPort: 8097
      protocol: TCP
  selector:
    app: pinot
    release: release-name
    component: server
---
# Source: pinot/templates/server/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-pinot-server
  annotations: {}
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: server
spec:
  type: ClusterIP
  ports:
    # [pod_name].[service_name].[namespace].svc.cluster.local
    - name: netty
      port: 8098
      protocol: TCP
    - name: admin
      port: 80
      targetPort: 8097
      protocol: TCP
  selector:
    app: pinot
    release: release-name
    component: server
---
# Source: pinot/templates/minion-stateless/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-pinot-minion-stateless
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: minion-stateless
spec:
  selector:
    matchLabels:
      app: pinot
      release: release-name
      component: minion-stateless
  replicas: 1
  template:
    metadata:
      labels:
        helm.sh/chart: pinot-0.2.9-SNAPSHOT
        app: pinot
        release: release-name
        app.kubernetes.io/version: "1.0.0"
        app.kubernetes.io/managed-by: Helm
        heritage: Helm
        component: minion-stateless
      annotations: {}
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: release-name-pinot
      securityContext: {}
      nodeSelector: {}
      affinity: {}
      tolerations: []
      containers:
        - name: minion-stateless
          securityContext: {}
          image: "apachepinot/pinot:latest"
          imagePullPolicy: Always
          args: [
            "StartMinion",
            "-clusterName", "pinot-quickstart",
            "-zkAddress", "release-name-zookeeper:2181",
            "-configFileName", "/var/pinot/minion/config/pinot-minion-stateless.conf"
          ]
          env:
            - name: JAVA_OPTS
              value: "-XX:ActiveProcessorCount=2 -Xms256M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/opt/pinot/gc-pinot-minion.log -Dlog4j2.configurationFile=/opt/pinot/etc/conf/pinot-minion-log4j2.xml -Dplugins.dir=/opt/pinot/plugins"
            - name: LOG4J_CONSOLE_LEVEL
              value: info
          envFrom: []
          ports:
            - containerPort: 9514
              protocol: TCP
              name: minion
          livenessProbe:
            initialDelaySeconds: 60
            periodSeconds: 10
            httpGet:
              path: /health
              port: 9514
          readinessProbe:
            initialDelaySeconds: 60
            periodSeconds: 10
            httpGet:
              path: /health
              port: 9514
          volumeMounts:
            - name: config
              mountPath: /var/pinot/minion/config
          resources:
            requests:
              memory: 1.25Gi
      restartPolicy: Always
      volumes:
        - name: config
          configMap:
            name: release-name-pinot-minion-stateless-config
        - name: data
          emptyDir: {}
---
# Source: pinot/charts/zookeeper/templates/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-zookeeper
  namespace: consumer
  labels:
    app.kubernetes.io/name: zookeeper
    helm.sh/chart: zookeeper-7.0.0
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: zookeeper
    role: zookeeper
spec:
  serviceName: release-name-zookeeper-headless
  replicas: 1
  podManagementPolicy: Parallel
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/name: zookeeper
      app.kubernetes.io/instance: release-name
      app.kubernetes.io/component: zookeeper
  template:
    metadata:
      name: release-name-zookeeper
      labels:
        app.kubernetes.io/name: zookeeper
        helm.sh/chart: zookeeper-7.0.0
        app.kubernetes.io/instance: release-name
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/component: zookeeper
    spec:
      serviceAccountName: default
      securityContext:
        fsGroup: 1001
      affinity:
        podAffinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/name: zookeeper
                    app.kubernetes.io/instance: release-name
                    app.kubernetes.io/component: zookeeper
                namespaces:
                  - "consumer"
                topologyKey: kubernetes.io/hostname
              weight: 1
        nodeAffinity:
      containers:
        - name: zookeeper
          image: docker.io/bitnami/zookeeper:3.7.0-debian-10-r56
          imagePullPolicy: "IfNotPresent"
          securityContext:
            runAsUser: 1001
          command:
            - bash
            - -ec
            - |
              # Execute entrypoint as usual after obtaining ZOO_SERVER_ID
              # check ZOO_SERVER_ID in persistent volume via myid
              # if not present, set based on POD hostname
              if [[ -f "/bitnami/zookeeper/data/myid" ]]; then
                export ZOO_SERVER_ID="$(cat /bitnami/zookeeper/data/myid)"
              else
                HOSTNAME=`hostname -s`
                if [[ $HOSTNAME =~ (.*)-([0-9]+)$ ]]; then
                  ORD=${BASH_REMATCH[2]}
                  export ZOO_SERVER_ID=$((ORD + 1 ))
                else
                  echo "Failed to get index from hostname $HOST"
                  exit 1
                fi
              fi
              exec /entrypoint.sh /run.sh
          resources:
            requests:
              cpu: 250m
              memory: 1.25Gi
          env:
            - name: ZOO_DATA_LOG_DIR
              value: ""
            - name: ZOO_PORT_NUMBER
              value: "2181"
            - name: ZOO_TICK_TIME
              value: "2000"
            - name: ZOO_INIT_LIMIT
              value: "10"
            - name: ZOO_SYNC_LIMIT
              value: "5"
            - name: ZOO_MAX_CLIENT_CNXNS
              value: "60"
            - name: ZOO_4LW_COMMANDS_WHITELIST
              value: "srvr, mntr, ruok"
            - name: ZOO_LISTEN_ALLIPS_ENABLED
              value: "no"
            - name: ZOO_AUTOPURGE_INTERVAL
              value: "1"
            - name: ZOO_AUTOPURGE_RETAIN_COUNT
              value: "5"
            - name: ZOO_MAX_SESSION_TIMEOUT
              value: "40000"
            - name: ZOO_SERVERS
              value: release-name-zookeeper-0.release-name-zookeeper-headless.consumer.svc.cluster.local:2888:3888::1
            - name: ZOO_ENABLE_AUTH
              value: "no"
            - name: ZOO_HEAP_SIZE
              value: "1024"
            - name: ZOO_LOG_LEVEL
              value: "ERROR"
            - name: ALLOW_ANONYMOUS_LOGIN
              value: "yes"
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
          ports:
            - name: client
              containerPort: 2181
            - name: follower
              containerPort: 2888
            - name: election
              containerPort: 3888
          livenessProbe:
            exec:
              command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
            initialDelaySeconds: 30
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          readinessProbe:
            exec:
              command: ['/bin/bash', '-c', 'echo "ruok" | timeout 2 nc -w 2 localhost 2181 | grep imok']
            initialDelaySeconds: 5
            periodSeconds: 10
            timeoutSeconds: 5
            successThreshold: 1
            failureThreshold: 6
          volumeMounts:
            - name: data
              mountPath: /bitnami/zookeeper
      volumes:
  volumeClaimTemplates:
    - metadata:
        name: data
        annotations:
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "8Gi"
---
# Source: pinot/templates/broker/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-pinot-broker
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: broker
spec:
  selector:
    matchLabels:
      app: pinot
      release: release-name
      component: broker
  serviceName: release-name-pinot-broker-headless
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        helm.sh/chart: pinot-0.2.9-SNAPSHOT
        app: pinot
        release: release-name
        app.kubernetes.io/version: "1.0.0"
        app.kubernetes.io/managed-by: Helm
        heritage: Helm
        component: broker
      annotations:
        checksum/config: b6426af6821d74c336050babe831e53b07c558ec0609fdfea5bf196a5f8ffd7e
        {}
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: release-name-pinot
      securityContext: {}
      nodeSelector: {}
      affinity: {}
      tolerations: []
      containers:
        - name: broker
          securityContext: {}
          image: "apachepinot/pinot:latest"
          imagePullPolicy: Always
          args: [
            "StartBroker",
            "-clusterName", "pinot-quickstart",
            "-zkAddress", "release-name-zookeeper:2181",
            "-configFileName", "/var/pinot/broker/config/pinot-broker.conf"
          ]
          env:
            - name: JAVA_OPTS
              value: "-XX:ActiveProcessorCount=2 -Xms256M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/opt/pinot/gc-pinot-broker.log -Dlog4j2.configurationFile=/opt/pinot/etc/conf/pinot-broker-log4j2.xml -Dplugins.dir=/opt/pinot/plugins"
            - name: LOG4J_CONSOLE_LEVEL
              value: info
          envFrom: []
          ports:
            - containerPort: 8099
              protocol: TCP
              name: broker
          volumeMounts:
            - name: config
              mountPath: /var/pinot/broker/config
          livenessProbe:
            initialDelaySeconds: 60
            periodSeconds: 10
            httpGet:
              path: /health
              port: 8099
          readinessProbe:
            initialDelaySeconds: 60
            periodSeconds: 10
            httpGet:
              path: /health
              port: 8099
          resources:
            requests:
              memory: 1.25Gi
      restartPolicy: Always
      volumes:
        - name: config
          configMap:
            name: release-name-pinot-broker-config
---
# Source: pinot/templates/controller/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-pinot-controller
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: controller
spec:
  selector:
    matchLabels:
      app: pinot
      release: release-name
      component: controller
  serviceName: release-name-pinot-controller-headless
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        helm.sh/chart: pinot-0.2.9-SNAPSHOT
        app: pinot
        release: release-name
        app.kubernetes.io/version: "1.0.0"
        app.kubernetes.io/managed-by: Helm
        heritage: Helm
        component: controller
      annotations:
        checksum/config: ee3073abb448053a09d52ba09889069d1c28d2d9626bb86c7c3e5f8f75a9736f
        {}
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: release-name-pinot
      securityContext: {}
      nodeSelector: {}
      affinity: {}
      tolerations: []
      containers:
        - name: controller
          securityContext: {}
          image: "apachepinot/pinot:latest"
          imagePullPolicy: Always
          args: [
            "StartController",
            "-configFileName", "/var/pinot/controller/config/pinot-controller.conf"
          ]
          env:
            - name: JAVA_OPTS
              value: "-XX:ActiveProcessorCount=2 -Xms256M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/opt/pinot/gc-pinot-controller.log -Dlog4j2.configurationFile=/opt/pinot/etc/conf/pinot-controller-log4j2.xml -Dplugins.dir=/opt/pinot/plugins"
            - name: LOG4J_CONSOLE_LEVEL
              value: info
          envFrom: []
          ports:
            - containerPort: 9000
              protocol: TCP
              name: controller
          volumeMounts:
            - name: config
              mountPath: /var/pinot/controller/config
            - name: data
              mountPath: "/var/pinot/controller/data"
          resources:
            requests:
              memory: 1.25Gi
      restartPolicy: Always
      volumes:
        - name: config
          configMap:
            name: release-name-pinot-controller-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: "1G"
---
# Source: pinot/templates/server/statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-pinot-server
  labels:
    helm.sh/chart: pinot-0.2.9-SNAPSHOT
    app: pinot
    release: release-name
    app.kubernetes.io/version: "1.0.0"
    app.kubernetes.io/managed-by: Helm
    heritage: Helm
    component: server
spec:
  selector:
    matchLabels:
      app: pinot
      release: release-name
      component: server
  serviceName: release-name-pinot-server-headless
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: Parallel
  template:
    metadata:
      labels:
        helm.sh/chart: pinot-0.2.9-SNAPSHOT
        app: pinot
        release: release-name
        app.kubernetes.io/version: "1.0.0"
        app.kubernetes.io/managed-by: Helm
        heritage: Helm
        component: server
      annotations:
        checksum/config: 9afbdb5bf6c23556934cfe4f46d3916d6203064fe3d62854815f2426ef43c6c3
        {}
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: release-name-pinot
      securityContext: {}
      nodeSelector: {}
      affinity: {}
      tolerations: []
      containers:
        - name: server
          securityContext: {}
          image: "apachepinot/pinot:latest"
          imagePullPolicy: Always
          args: [
            "StartServer",
            "-clusterName", "pinot-quickstart",
            "-zkAddress", "release-name-zookeeper:2181",
            "-configFileName", "/var/pinot/server/config/pinot-server.conf"
          ]
          env:
            - name: JAVA_OPTS
              value: "-Xms512M -Xmx1G -XX:+UseG1GC -XX:MaxGCPauseMillis=200 -Xlog:gc*:file=/opt/pinot/gc-pinot-server.log -Dlog4j2.configurationFile=/opt/pinot/etc/conf/pinot-server-log4j2.xml -Dplugins.dir=/opt/pinot/plugins"
            - name: LOG4J_CONSOLE_LEVEL
              value: info
          envFrom: []
          ports:
            - containerPort: 8098
              protocol: TCP
              name: netty
            - containerPort: 8097
              protocol: TCP
              name: admin
          volumeMounts:
            - name: config
              mountPath: /var/pinot/server/config
            - name: data
              mountPath: "/var/pinot/server/data"
          resources:
            requests:
              memory: 1.25Gi
      restartPolicy: Always
      volumes:
        - name: config
          configMap:
            name: release-name-pinot-server-config
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 4G
---
# Source: pinot/templates/broker/service-external.yaml
---
# Source: pinot/templates/controller/service-external.yaml
---
# Source: pinot/templates/minion-stateless/pvc.yaml
---
# Source: pinot/templates/minion/configmap.yaml
---
# Source: pinot/templates/minion/service-headless.yaml
---
# Source: pinot/templates/minion/service.yaml
---
# Source: pinot/templates/minion/statefulset.yaml
```
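As a quick spot-check (a hypothetical verification step, not part of the PR), the three digests can be grepped out of the rendered manifests; re-rendering after changing any value that feeds those ConfigMaps should produce different digests:

```
$ helm template . | grep 'checksum/config'
checksum/config: b6426af6821d74c336050babe831e53b07c558ec0609fdfea5bf196a5f8ffd7e
checksum/config: ee3073abb448053a09d52ba09889069d1c28d2d9626bb86c7c3e5f8f75a9736f
checksum/config: 9afbdb5bf6c23556934cfe4f46d3916d6203064fe3d62854815f2426ef43c6c3
```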
codecov-commenter commented 2 weeks ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 62.11%. Comparing base (59551e4) to head (51c38c0). Report is 446 commits behind head on master.

Additional details and impacted files

```diff
@@             Coverage Diff              @@
##             master   #13059      +/-   ##
============================================
+ Coverage     61.75%   62.11%   +0.36%
+ Complexity      207      198       -9
============================================
  Files          2436     2515      +79
  Lines        133233   137862    +4629
  Branches      20636    21335     +699
============================================
+ Hits          82274    85635    +3361
- Misses        44911    45833     +922
- Partials       6048     6394     +346
```

| [Flag](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flags&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | Coverage Δ | |
|---|---|---|
| [custom-integration1](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `<0.01% <ø> (-0.01%)` | :arrow_down: |
| [integration](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `<0.01% <ø> (-0.01%)` | :arrow_down: |
| [integration1](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `<0.01% <ø> (-0.01%)` | :arrow_down: |
| [integration2](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `?` | |
| [java-11](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `62.11% <ø> (+0.40%)` | :arrow_up: |
| [java-21](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `<0.01% <ø> (-61.63%)` | :arrow_down: |
| [skip-bytebuffers-false](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `62.11% <ø> (+0.36%)` | :arrow_up: |
| [skip-bytebuffers-true](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `<0.01% <ø> (-27.73%)` | :arrow_down: |
| [temurin](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `62.11% <ø> (+0.36%)` | :arrow_up: |
| [unittests](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `62.11% <ø> (+0.36%)` | :arrow_up: |
| [unittests1](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `46.68% <ø> (-0.21%)` | :arrow_down: |
| [unittests2](https://app.codecov.io/gh/apache/pinot/pull/13059/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache) | `27.78% <ø> (+0.05%)` | :arrow_up: |

Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=apache#carryforward-flags-in-the-pull-request-comment) to find out more.

:umbrella: View full report in Codecov by Sentry.
:loudspeaker: Have feedback on the report? Share it here.

xiangfu0 commented 3 days ago

Please also add this for the minion statefulSet as well: https://github.com/apache/pinot/blob/master/helm/pinot/templates/minion/statefulset.yaml
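A sketch of that follow-up, assuming the minion StatefulSet mirrors the other components (the minion ConfigMap sits at helm/pinot/templates/minion/configmap.yaml in the same layout):

```yaml
# Sketch for helm/pinot/templates/minion/statefulset.yaml
spec:
  template:
    metadata:
      annotations:
        # Same pattern as broker/controller/server: hash the minion
        # ConfigMap so config edits roll the minion pods too.
        checksum/config: {{ include (print $.Template.BasePath "/minion/configmap.yaml") . | sha256sum }}
```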