helm / chartmuseum

helm chart repository server
https://chartmuseum.com
Apache License 2.0

Multiple charts using the same chartmuseum (issue #354)

Closed iamaverrick closed 4 years ago

iamaverrick commented 4 years ago

We are using ChartMuseum on a production server, but our charts have grown substantially and are now causing conflicts due to naming conventions, which we are trying to resolve by creating multiple folders within the AWS S3 bucket, but have so far failed. At the moment we have two ChartMuseum instances, one for web and the other for api, and we are trying to have a separate ChartMuseum per folder.

The issue we are currently running into is with ingress, because we are pointing both services at the same host, e.g. charts.company.com/api and charts.company.com/web, and with two different services behind the same URL, one of them fails to resolve.

We have read about multi-tenancy, where we would set env.open.DEPTH: 2. That works, but only with URLs like http://localhost:8080/org1/repoa and http://localhost:8080/org2/repob.

As mentioned above, we would like to use our own naming convention, as described below:

http://localhost:8080/api http://localhost:8080/web
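For context, the two configs below both point at the same S3 bucket with different prefixes. The layout this implies (bucket name taken from the configs; the chart filenames are illustrative, not from the issue) is:

```yaml
# Inferred S3 layout behind the two instances:
# s3://charts.company.com/
#   api/           # STORAGE_AMAZON_PREFIX: api  -> served at /api
#     index-cache.yaml
#     some-api-chart-1.0.0.tgz
#   web/           # STORAGE_AMAZON_PREFIX: web  -> served at /web
#     index-cache.yaml
#     some-web-chart-1.0.0.tgz
```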

API config


replicaCount: 1
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
image:
  repository: chartmuseum/chartmuseum
  tag: v0.12.0
  pullPolicy: IfNotPresent
env:
  open:
    # storage backend, can be one of: local, alibaba, amazon, google, microsoft, oracle
    STORAGE: amazon
    # oss bucket to store charts for alibaba storage backend
    STORAGE_ALIBABA_BUCKET:
    # prefix to store charts for alibaba storage backend
    STORAGE_ALIBABA_PREFIX:
    # oss endpoint to store charts for alibaba storage backend
    STORAGE_ALIBABA_ENDPOINT:
    # server side encryption algorithm for alibaba storage backend, can be one
    # of: AES256 or KMS
    STORAGE_ALIBABA_SSE:
    # s3 bucket to store charts for amazon storage backend
    STORAGE_AMAZON_BUCKET: charts.company.com
    # prefix to store charts for amazon storage backend
    STORAGE_AMAZON_PREFIX: api
    # region of s3 bucket to store charts
    STORAGE_AMAZON_REGION: us-east-1
    # alternative s3 endpoint
    STORAGE_AMAZON_ENDPOINT:
    # server side encryption algorithm
    STORAGE_AMAZON_SSE:
    # gcs bucket to store charts for google storage backend
    STORAGE_GOOGLE_BUCKET:
    # prefix to store charts for google storage backend
    STORAGE_GOOGLE_PREFIX:
    # container to store charts for microsoft storage backend
    STORAGE_MICROSOFT_CONTAINER:
    # prefix to store charts for microsoft storage backend
    STORAGE_MICROSOFT_PREFIX:
    # container to store charts for openstack storage backend
    STORAGE_OPENSTACK_CONTAINER:
    # prefix to store charts for openstack storage backend
    STORAGE_OPENSTACK_PREFIX:
    # region of openstack container
    STORAGE_OPENSTACK_REGION:
    # path to a CA cert bundle for your openstack endpoint
    STORAGE_OPENSTACK_CACERT:
    # compartment id for for oracle storage backend
    STORAGE_ORACLE_COMPARTMENTID:
    # oci bucket to store charts for oracle storage backend
    STORAGE_ORACLE_BUCKET:
    # prefix to store charts for oracle storage backend
    STORAGE_ORACLE_PREFIX:
    # form field which will be queried for the chart file content
    CHART_POST_FORM_FIELD_NAME: chart
    # form field which will be queried for the provenance file content
    PROV_POST_FORM_FIELD_NAME: prov
    # levels of nested repos for multitenancy. The default depth is 0 (singletenant server)
    DEPTH: 0
    # show debug messages
    DEBUG: false
    # output structured logs as json
    LOG_JSON: true
    # disable use of index-cache.yaml
    DISABLE_STATEFILES: false
    # disable Prometheus metrics
    DISABLE_METRICS: false
    # disable all routes prefixed with /api
    DISABLE_API: false
    # allow chart versions to be re-uploaded
    ALLOW_OVERWRITE: true
    # absolute url for .tgzs in index.yaml
    CHART_URL: https://charts.company.com/api
    # allow anonymous GET operations when auth is used
    AUTH_ANONYMOUS_GET: false
    # sets the base context path
    CONTEXT_PATH: /api
    # parallel scan limit for the repo indexer
    INDEX_LIMIT: 0
    # cache store, can be one of: redis (leave blank for inmemory cache)
    CACHE: redis
    # address of Redis service (host:port)
    CACHE_REDIS_ADDR: redis.cloud:6379
    # Redis database to be selected after connect
    CACHE_REDIS_DB: 0
    # enable bearer auth
    BEARER_AUTH: false
    # auth realm used for bearer auth
    AUTH_REALM:
    # auth service used for bearer auth
    AUTH_SERVICE:
  field:
  # POD_IP: status.podIP
  secret:
    # aws access key id value
    AWS_ACCESS_KEY_ID:
    # aws access key secret value
    AWS_SECRET_ACCESS_KEY:
    # username for basic http authentication
    BASIC_AUTH_USER:
    # password for basic http authentication
    BASIC_AUTH_PASS:
    # GCP service account json file
    GOOGLE_CREDENTIALS_JSON:
    # Redis requirepass server configuration
    CACHE_REDIS_PASSWORD:
  # Name of an existing secret to get the secret values from
  existingSecret: "chart-secrets"
  # Stores environment variable to secret key name mappings
  existingSecretMappings:
    # aws access key id value
    AWS_ACCESS_KEY_ID: "AWS_ACCESS_KEY_ID"
    # aws access key secret value
    AWS_SECRET_ACCESS_KEY: "AWS_SECRET_ACCESS_KEY"
    # username for basic http authentication
    BASIC_AUTH_USER: "BASIC_AUTH_USER"
    # password for basic http authentication
    BASIC_AUTH_PASS: "BASIC_AUTH_PASS"
    # GCP service account json file
    GOOGLE_CREDENTIALS_JSON:
    # Redis requirepass server configuration
    CACHE_REDIS_PASSWORD: "CACHE_REDIS_PASSWORD"

deployment:
  ## Chartmuseum Deployment annotations
  annotations: {}
  #   name: value
  labels: {}
  #   name: value
  matchlabes: {}
  #   name: value
replica:
  ## Chartmuseum Replicas annotations
  annotations: {}
  ## Read more about kube2iam to provide access to s3 https://github.com/jtblin/kube2iam
  #   iam.amazonaws.com/role: role-arn
service:
  servicename:
  type: ClusterIP
  externalTrafficPolicy: Local
  # clusterIP: None
  externalPort: 80
  nodePort: 8080
  annotations: {}
  labels: {}

resources: {}
#  limits:
#    cpu: 100m
#    memory: 128Mi
#  requests:
#    cpu: 80m
#    memory: 64Mi

probes:
  liveness:
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3
  readiness:
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3

serviceAccount:
  create: true
  name:

# UID/GID 1000 is the default user "chartmuseum" used in
# the container image starting in v0.8.0. This is required
# for local persistent storage. If your cluster does not
# allow this, try setting securityContext: {}
securityContext:
  fsGroup: 1000

nodeSelector: {}

tolerations: []

affinity: {}

persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 8Gi
  labels: {}
  #   name: value
  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:

  ## Chartmuseum data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "gp2"
  # volumeName:
  pv:
    enabled: false
    pvname:
    capacity:
      storage: 8Gi
    accessMode: ReadWriteOnce
    nfs:
      server:
      path:

## Ingress for load balancer
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/redirect-to-https: "True"
    ingress.kubernetes.io/ssl-redirect: "True"
#    external-dns.alpha.kubernetes.io/hostname: "charts.company.com"
  hosts:
    - name: charts.company.com
      path: /api
## Chartmuseum Ingress labels
##
#   labels:
#     dns: "route53"

## Chartmuseum Ingress annotations
##
#   annotations:
#     kubernetes.io/ingress.class: nginx
#     kubernetes.io/tls-acme: "true"

## Chartmuseum Ingress hostnames
## Must be provided if Ingress is enabled
##
#  hosts:
#    - name: chartmuseum.domain1.com
#      path: /
#      tls: false
#    - name: chartmuseum.domain2.com
#      path: /
#
#      ## Set this to true in order to enable TLS on the ingress record
#      tls: true
#
#      ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
#      ## Secrets must be added manually to the namespace
#      tlsSecret: chartmuseum.domain2-tls

# Adding secrets to tiller is not a great option, so if you want to use an existing
# secret that contains the json file, you can use the following entries
gcp:
  secret:
    enabled: false
    # Name of the secret that contains the encoded json
    name:
    # Secret key that holds the json value.
    key: credentials.json
oracle:
  secret:
    enabled: false
    # Name of the secret that contains the encoded config and key
    name:
    # Secret key that holds the oci config
    config: config
    # Secret key that holds the oci private key
    key_file: key_file
bearerAuth:
  secret:
    enabled: false
    publicKeySecret: chartmuseum-public-key

Web config

replicaCount: 1
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxUnavailable: 0
image:
  repository: chartmuseum/chartmuseum
  tag: v0.12.0
  pullPolicy: IfNotPresent
env:
  open:
    # storage backend, can be one of: local, alibaba, amazon, google, microsoft, oracle
    STORAGE: amazon
    # oss bucket to store charts for alibaba storage backend
    STORAGE_ALIBABA_BUCKET:
    # prefix to store charts for alibaba storage backend
    STORAGE_ALIBABA_PREFIX:
    # oss endpoint to store charts for alibaba storage backend
    STORAGE_ALIBABA_ENDPOINT:
    # server side encryption algorithm for alibaba storage backend, can be one
    # of: AES256 or KMS
    STORAGE_ALIBABA_SSE:
    # s3 bucket to store charts for amazon storage backend
    STORAGE_AMAZON_BUCKET: charts.company.com
    # prefix to store charts for amazon storage backend
    STORAGE_AMAZON_PREFIX: web
    # region of s3 bucket to store charts
    STORAGE_AMAZON_REGION: us-east-1
    # alternative s3 endpoint
    STORAGE_AMAZON_ENDPOINT:
    # server side encryption algorithm
    STORAGE_AMAZON_SSE:
    # gcs bucket to store charts for google storage backend
    STORAGE_GOOGLE_BUCKET:
    # prefix to store charts for google storage backend
    STORAGE_GOOGLE_PREFIX:
    # container to store charts for microsoft storage backend
    STORAGE_MICROSOFT_CONTAINER:
    # prefix to store charts for microsoft storage backend
    STORAGE_MICROSOFT_PREFIX:
    # container to store charts for openstack storage backend
    STORAGE_OPENSTACK_CONTAINER:
    # prefix to store charts for openstack storage backend
    STORAGE_OPENSTACK_PREFIX:
    # region of openstack container
    STORAGE_OPENSTACK_REGION:
    # path to a CA cert bundle for your openstack endpoint
    STORAGE_OPENSTACK_CACERT:
    # compartment id for for oracle storage backend
    STORAGE_ORACLE_COMPARTMENTID:
    # oci bucket to store charts for oracle storage backend
    STORAGE_ORACLE_BUCKET:
    # prefix to store charts for oracle storage backend
    STORAGE_ORACLE_PREFIX:
    # form field which will be queried for the chart file content
    CHART_POST_FORM_FIELD_NAME: chart
    # form field which will be queried for the provenance file content
    PROV_POST_FORM_FIELD_NAME: prov
    # levels of nested repos for multitenancy. The default depth is 0 (singletenant server)
    DEPTH: 0
    # show debug messages
    DEBUG: false
    # output structured logs as json
    LOG_JSON: true
    # disable use of index-cache.yaml
    DISABLE_STATEFILES: false
    # disable Prometheus metrics
    DISABLE_METRICS: false
    # disable all routes prefixed with /api
    DISABLE_API: false
    # allow chart versions to be re-uploaded
    ALLOW_OVERWRITE: true
    # absolute url for .tgzs in index.yaml
    CHART_URL: https://charts.company.com/web
    # allow anonymous GET operations when auth is used
    AUTH_ANONYMOUS_GET: false
    # sets the base context path
    CONTEXT_PATH: /web
    # parallel scan limit for the repo indexer
    INDEX_LIMIT: 0
    # cache store, can be one of: redis (leave blank for inmemory cache)
    CACHE: redis
    # address of Redis service (host:port)
    CACHE_REDIS_ADDR: redis.cloud:6379
    # Redis database to be selected after connect
    CACHE_REDIS_DB: 0
    # enable bearer auth
    BEARER_AUTH: false
    # auth realm used for bearer auth
    AUTH_REALM:
    # auth service used for bearer auth
    AUTH_SERVICE:
  field:
  # POD_IP: status.podIP
  secret:
    # aws access key id value
    AWS_ACCESS_KEY_ID:
    # aws access key secret value
    AWS_SECRET_ACCESS_KEY:
    # username for basic http authentication
    BASIC_AUTH_USER:
    # password for basic http authentication
    BASIC_AUTH_PASS:
    # GCP service account json file
    GOOGLE_CREDENTIALS_JSON:
    # Redis requirepass server configuration
    CACHE_REDIS_PASSWORD:
  # Name of an existing secret to get the secret values from
  existingSecret: "chart-secrets"
  # Stores environment variable to secret key name mappings
  existingSecretMappings:
    # aws access key id value
    AWS_ACCESS_KEY_ID: "AWS_ACCESS_KEY_ID"
    # aws access key secret value
    AWS_SECRET_ACCESS_KEY: "AWS_SECRET_ACCESS_KEY"
    # username for basic http authentication
    BASIC_AUTH_USER: "BASIC_AUTH_USER"
    # password for basic http authentication
    BASIC_AUTH_PASS: "BASIC_AUTH_PASS"
    # GCP service account json file
    GOOGLE_CREDENTIALS_JSON:
    # Redis requirepass server configuration
    CACHE_REDIS_PASSWORD: "CACHE_REDIS_PASSWORD"

deployment:
  ## Chartmuseum Deployment annotations
  annotations: {}
  #   name: value
  labels: {}
  #   name: value
  matchlabes: {}
  #   name: value
replica:
  ## Chartmuseum Replicas annotations
  annotations: {}
  ## Read more about kube2iam to provide access to s3 https://github.com/jtblin/kube2iam
  #   iam.amazonaws.com/role: role-arn
service:
  servicename:
  type: ClusterIP
  externalTrafficPolicy: Local
  # clusterIP: None
  externalPort: 80
  nodePort: 8080
  annotations: {}
  labels: {}

resources: {}
#  limits:
#    cpu: 100m
#    memory: 128Mi
#  requests:
#    cpu: 80m
#    memory: 64Mi

probes:
  liveness:
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3
  readiness:
    initialDelaySeconds: 5
    periodSeconds: 10
    timeoutSeconds: 1
    successThreshold: 1
    failureThreshold: 3

serviceAccount:
  create: true
  name:

# UID/GID 1000 is the default user "chartmuseum" used in
# the container image starting in v0.8.0. This is required
# for local persistent storage. If your cluster does not
# allow this, try setting securityContext: {}
securityContext:
  fsGroup: 1000

nodeSelector: {}

tolerations: []

affinity: {}

persistence:
  enabled: true
  accessMode: ReadWriteOnce
  size: 8Gi
  labels: {}
  #   name: value
  ## A manually managed Persistent Volume and Claim
  ## Requires persistence.enabled: true
  ## If defined, PVC must be created manually before volume will be bound
  # existingClaim:

  ## Chartmuseum data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  storageClass: "gp2"
  # volumeName:
  pv:
    enabled: false
    pvname:
    capacity:
      storage: 8Gi
    accessMode: ReadWriteOnce
    nfs:
      server:
      path:

## Ingress for load balancer
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.org/redirect-to-https: "True"
    ingress.kubernetes.io/ssl-redirect: "True"
#    external-dns.alpha.kubernetes.io/hostname: "charts.company.com"
  hosts:
    - name: charts.company.com
      path: /web
## Chartmuseum Ingress labels
##
#   labels:
#     dns: "route53"

## Chartmuseum Ingress annotations
##
#   annotations:
#     kubernetes.io/ingress.class: nginx
#     kubernetes.io/tls-acme: "true"

## Chartmuseum Ingress hostnames
## Must be provided if Ingress is enabled
##
#  hosts:
#    - name: chartmuseum.domain1.com
#      path: /
#      tls: false
#    - name: chartmuseum.domain2.com
#      path: /
#
#      ## Set this to true in order to enable TLS on the ingress record
#      tls: true
#
#      ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
#      ## Secrets must be added manually to the namespace
#      tlsSecret: chartmuseum.domain2-tls

# Adding secrets to tiller is not a great option, so if you want to use an existing
# secret that contains the json file, you can use the following entries
gcp:
  secret:
    enabled: false
    # Name of the secret that contains the encoded json
    name:
    # Secret key that holds the json value.
    key: credentials.json
oracle:
  secret:
    enabled: false
    # Name of the secret that contains the encoded config and key
    name:
    # Secret key that holds the oci config
    config: config
    # Secret key that holds the oci private key
    key_file: key_file
bearerAuth:
  secret:
    enabled: false
    publicKeySecret: chartmuseum-public-key

Any suggestions would be greatly appreciated.

scbizu commented 4 years ago

Should it actually be DEPTH=1?

iamaverrick commented 4 years ago

But can we personalize the folder names, instead of having org1/repoa, org2/repob, etc.? If so, how would this get done? We are looking to have /api and /web.

jdolitsky commented 4 years ago

As @scbizu mentioned above, DEPTH=1 is, I believe, what you are looking for to get /api and /web. The names can be whatever you want; the paths will resolve even if no charts exist there.
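For reference, a minimal sketch of the values change this suggests, assuming a single chartmuseum instance replaces the two posted above (only the fields that differ are shown; everything else stays as in the posted configs):

```yaml
env:
  open:
    STORAGE: amazon
    STORAGE_AMAZON_BUCKET: charts.company.com
    # leave STORAGE_AMAZON_PREFIX unset: with DEPTH: 1 the first URL
    # path segment ("api" or "web") selects the folder in the bucket
    STORAGE_AMAZON_PREFIX:
    STORAGE_AMAZON_REGION: us-east-1
    # one level of nested repos -> /api and /web become separate repos
    DEPTH: 1
    # per-instance CHART_URL and CONTEXT_PATH are no longer needed,
    # since index URLs are generated per tenant
ingress:
  hosts:
    - name: charts.company.com
      path: /
```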

iamaverrick commented 4 years ago

@scbizu, @jdolitsky, I'm sorry, but this doesn't work for me. I have done as mentioned above, which causes the chart repository to become unresponsive:

Error: looks like "https://charts.company.com" is not a valid chart repository or cannot be reached: error converting YAML to JSON: yaml: line 7: mapping values are not allowed in this context

This is with DEPTH=1 specified; with DEPTH=0 it works just fine.

iamaverrick commented 4 years ago

@scbizu, @jdolitsky, I'm sorry, you are correct. After adjusting a couple of things I managed to get it to work as you said. One quick note for anyone who runs into this same issue: make sure to point your chart deployments to the correct folder per your naming convention, e.g. https://charts.company.com/api and https://charts.company.com/web.

Thank you @scbizu and @jdolitsky for your support.
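To make the note above concrete: each tenant path is added as its own Helm repo, e.g. `helm repo add api-charts https://charts.company.com/api`. A chart consuming charts from a tenant repo references the same URL; a hypothetical example (chart name and version are placeholders, not from the issue):

```yaml
# requirements.yaml of a consuming chart (Helm 2 style, matching the
# tiller-era setup in this thread); name/version are illustrative
dependencies:
  - name: some-api-chart
    version: 1.0.0
    repository: https://charts.company.com/api
```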