zilliztech / milvus-helm

Apache License 2.0

line 48: mapping key "httpNumThreads" already defined at line 36 #58

Open drawnwren opened 6 months ago

drawnwren commented 6 months ago

When using 4.1.17 with Flux I get the following error:

Helm install failed for release milvus/milvus-milvus with chart milvus@4.1.17: error while running post render on files: map[string]interface {}(nil): yaml: unmarshal errors:  
                                          line 48: mapping key "httpNumThreads" already defined at line 36

Here are the values I'm passing:

---
apiVersion: helm.toolkit.fluxcd.io/v2beta2
kind: HelmRelease
metadata:
  name: milvus
  namespace: milvus
spec:
  targetNamespace: milvus
  interval: 1m
  chart:
    spec:
      chart: milvus
      version: "4.1.17"
      sourceRef:
        kind: HelmRepository
        name: milvus
        namespace: milvus
      interval: 1m
  values:
    cluster:
      enabled: true
    serviceAccount:
      create: true
      name: milvus-s3-access-sa
      annotations: 
        eks.amazonaws.com/role-arn: "my-s3-arn"
    service:
      type: LoadBalancer
      port: 19530
      annotations: 
        service.beta.kubernetes.io/aws-load-balancer-type: external
        service.beta.kubernetes.io/aws-load-balancer-name: milvus-service
        service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
        service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    minio:
      enabled: false
    externalS3:
      enabled: true
      host: "s3.us-east-2.amazonaws.com"
      port: "443"
      useSSL: true
      bucketName: "milvusbucket"
      useIAM: true
      cloudProvider: "aws"
      iamEndpoint: ""
    rootCoordinator:
      replicas: 2
      activeStandby:
        enabled: true
      resources: 
        limits:
          cpu: 1
          memory: 2Gi
    indexCoordinator:
      replicas: 2
      activeStandby:
        enabled: true
      resources: 
        limits:
          cpu: "0.5"
          memory: 0.5Gi
    queryCoordinator:
      replicas: 2
      activeStandby:
        enabled: true
      resources: 
        limits:
          cpu: "0.5"
          memory: 0.5Gi
    dataCoordinator:
      replicas: 2
      activeStandby:
        enabled: true
      resources: 
        limits:
          cpu: "0.5"
          memory: 0.5Gi
    proxy:
      replicas: 2
      resources: 
        limits:
          cpu: 1
          memory: 2Gi  
    settings:
      clusterName: "basis"
      clusterEndpoint: "myclusterendpoint"
    logLevel: info
  install:
    crds: CreateReplace
  upgrade:
    crds: CreateReplace
drawnwren commented 6 months ago

Swapping pulsar for kafka fixes the problem, but that doesn't seem like a complete solution.

haorenfsa commented 6 months ago

looks like a bug

haorenfsa commented 6 months ago

Hmm... I can't reproduce it using helm directly. Seems to me like a bug in https://fluxcd.io/

haney-oliver commented 5 months ago

I have the exact same issue using FluxCD v2. I can follow up with more details if required. Going to attempt to use a postRenderer to resolve it.

haney-oliver commented 5 months ago

After rendering the template locally with helm template milvus zilliztech/milvus, I found the offending resource.

Note: I'm not using any custom values

Output Snippet:

apiVersion: v1
kind: ConfigMap
metadata:
  name: "milvus-pulsar-proxy"
  namespace: default
  labels:
    app: pulsar
    chart: pulsar-2.7.8
    release: milvus
    heritage: Helm
    cluster: milvus-pulsar
    component: proxy
data:
  clusterName: milvus-pulsar
  httpNumThreads: "8"
  statusFilePath: "/pulsar/status"
  # prometheus needs to access /metrics endpoint
  webServicePort: "80"
  servicePort: "6650"
  brokerServiceURL: pulsar://milvus-pulsar-broker:6650
  brokerWebServiceURL: http://milvus-pulsar-broker:8080

  # Authentication Settings
  PULSAR_GC: |
    -XX:MaxDirectMemorySize=2048m
  PULSAR_MEM: |
    -Xms2048m -Xmx2048m
  httpNumThreads: "100"

This is due to a hardcoded value here: https://github.com/apache/pulsar-helm-chart/blob/pulsar-2.7.8/charts/pulsar/templates/proxy-configmap.yaml#L31C1-L31C22

Later versions of the pulsar Helm chart do not include this hardcoded value.
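For anyone wanting to confirm this locally, duplicate keys in the rendered ConfigMap's flat data block can be surfaced mechanically. A rough sketch (the sample file and the fixed two-space indent are assumptions matching the output snippet above, not part of the chart):

```shell
# Recreate a minimal version of the rendered "data:" block (assumed sample).
cat > configmap.yaml <<'EOF'
data:
  clusterName: milvus-pulsar
  httpNumThreads: "8"
  webServicePort: "80"
  httpNumThreads: "100"
EOF

# Extract every key at the data-block indent level and report any that
# appear more than once. Prints "  httpNumThreads:" for this sample.
grep -oE '^  [A-Za-z]+:' configmap.yaml | sort | uniq -d
```

Running this against the real output of helm template would flag the same key that the Flux error message points at.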

I was unable to resolve this with postRenderers, but using flat YAML via Flux will probably work (if you remove the duplicate key from the rendered manifest).
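A sketch of that flat-YAML workaround: render the chart once, keep only the first occurrence of the duplicate key, then hand the cleaned manifest to Flux (or kubectl). The sample input below stands in for the real helm template milvus zilliztech/milvus output, and the two-space indent in the awk pattern is an assumption from the snippet above:

```shell
# Stand-in for the rendered chart (in practice:
#   helm template milvus zilliztech/milvus > rendered.yaml).
cat > rendered.yaml <<'EOF'
data:
  clusterName: milvus-pulsar
  httpNumThreads: "8"
  webServicePort: "80"
  httpNumThreads: "100"
EOF

# Print every line except repeat occurrences of the duplicate key:
# non-matching lines always pass; matching lines pass only the first time.
awk '!/^  httpNumThreads:/ || !seen++' rendered.yaml > deduped.yaml
cat deduped.yaml
```

Note this keeps the first value ("8"); if the chart actually intends the later override ("100"), the surviving line would need to be edited accordingly.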