TykTechnologies / tyk-helm-chart

A Helm chart repository to install Tyk Pro (with Dashboard), Tyk Hybrid or Tyk Headless chart.
https://tyk.io

Tyk Pro - (Re-Bootstrapping and Chart NOT Allowing for Empty Redis Password) #210

Closed: klaus385 closed this issue 2 years ago

klaus385 commented 2 years ago

We are trying to migrate from the maintained fork of the Helm chart mentioned in #64. However, we run into two problems when pointing the deployment, with the same values, at this upstream chart.

Issues

  1. Redis connection error:
time="Jun 15 22:22:51" level=warning msg="Reconnecting storage: Redis is either down or was not configured" prefix=pub-sub

This happens even though the Redis instance is reachable from where the Tyk Gateway is deployed. When we update our fork to allow an empty password for the Redis connection, the error goes away. This leads me to believe that even when I set redisPass to "" in the values, the chart is not allowing a Redis connection without a password.
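For context, the intent of our Redis values is a passwordless connection; a minimal sketch of the relevant section (address fields omitted, see the full values below):

```yaml
redis:
  enableCluster: true
  # empty password: we expect the chart to permit a passwordless connection
  pass: ""
```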

  2. Re-bootstrapping:

The screen shown in the screenshot below is presented to me as if the currently deployed Tyk Pro setup had not already been bootstrapped.

[Screenshot: Dashboard bootstrap prompt, taken 2022-06-13]

Current Configuration

Using Tyk Operator, Tyk Pro chart 0.8.2, and a Tyk Pro Self-Managed license.

Tyk Pro Values

bootstrap: false
portal:
  bootstrap: false
  path: "/portal"
redis:
  enableCluster: true
dash:
  bootstrap: false
  resources:
    limits:
      cpu: 500m
      memory: 512M
    requests:
      cpu: 500m
      memory: 512M
  service:
    type: NodePort
    port: 3000
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
    path: /*
  extraEnvs:
    - name: TYK_DB_ENABLEDELETEKEYBYHASH
      value: "true"
    - name: TYK_DB_ENABLEUPDATEKEYBYHASH
      value: "true"
    - name: TYK_DB_ENABLEHASHEDKEYSLISTING
      value: "true"
    - name: TYK_DB_HOSTCONFIG_GENERATEHTTPS
      value: "true"

gateway:
  sharding:
    enabled: true
  kind: Deployment
  tls: false
  extraEnvs:
    - name: TYK_GW_ENABLEHASHEDKEYSLISTING
      value: "true"
    - name: TYK_GW_POLICIES_ALLOWEXPLICITPOLICYID
      value: "true"
  resources:
    limits:
      cpu: 1000m
      memory: 1G
    requests:
      cpu: 1000m
      memory: 1G
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
      alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=300
    path: /*
  service:
    type: NodePort
    port: 8080
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app
                operator: In
                values:
                  - gateway-tyk-pro
          topologyKey: kubernetes.io/hostname

pump:
  resources:
    limits:
      cpu: 200m
      memory: 128Mi
    requests:
      cpu: 200m
      memory: 128Mi
  extraEnvs:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    - name: TYK_PMP_PUMPS_DOGSTATSD_TYPE
      value: "dogstatsd"
    - name: TYK_PMP_PUMPS_DOGSTATSD_META_ADDRESS
      value: "$(HOST_IP):8125"
    - name: TYK_PMP_PUMPS_DOGSTATSD_META_NAMESPACE
      value: "tykpump"
    - name: TYK_PMP_PUMPS_DOGSTATSD_META_ASYNCUDS
      value: "true"
    - name: TYK_PMP_PUMPS_DOGSTATSD_META_ASYNCUDSWRITETIMEOUTSECONDS
      value: "2"
    - name: TYK_PMP_PUMPS_DOGSTATSD_META_BUFFERED
      value: "true"
    - name: TYK_PMP_PUMPS_DOGSTATSD_META_BUFFEREDMAXMESSAGES
      value: "32"
    - name: TYK_PMP_PUMPS_DOGSTATSD_META_SAMPLERATE
      value: "1"
    - name: TYK_PMP_UPTIMEPUMPCONFIG_MONGOURL
      valueFrom:
        secretKeyRef:
          key: mongoURL
          name: secrets-tyk-pro
    - name: TYK_PMP_MONGO_MONGOURL
      valueFrom:
        secretKeyRef:
          key: mongoURL
          name: secrets-tyk-pro
    - name: TYK_PMP_MONGOAGG_MONGOURL
      valueFrom:
        secretKeyRef:
          key: mongoURL
          name: secrets-tyk-pro

Tyk Operator Values

replicaCount: 1

# loads environment variables into the operator
envFrom:
  - secretRef:
      name: tyk-operator-conf
envVars:
  - name: TYK_HTTPS_INGRESS_PORT
    value: "8443"
  - name: TYK_HTTP_INGRESS_PORT
    value: "8080"

image:
  repository: tykio/tyk-operator
  pullPolicy: IfNotPresent
  tag: "latest"

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

annotations: {}
podAnnotations: {}
podSecurityContext:
  allowPrivilegeEscalation: false
resources: {}

# specify necessary resources for the kube-rbac-proxy container
rbac:
  resources: {}
  # specify custom/internal repo name for kube-rbac-proxy container
  image:
    repository: gcr.io/kubebuilder/kube-rbac-proxy
    pullPolicy: IfNotPresent
    tag: "v0.8.0"

I look forward to hearing back and hopefully getting these issues resolved.

buraksekili commented 2 years ago

Hi @klaus385, thank you for raising this issue!

For issue 1: can you please provide details (e.g., the redisAddr field and the namespace of your Redis deployment) so we can reproduce the error? Also, can you please comment out the .redis.pass field instead of setting it to an empty string?

  # Redis password
  # If you're using Bitnami Redis chart please input your password in the field below
  # pass: ""

For the re-bootstrapping issue: we are working on our bootstrap scripts to prevent such errors. We have open PRs with bootstrap enhancements and hope to release them soon! There might be a couple of reasons for the re-bootstrapping issue, such as leftover bootstrap jobs from a previous release or duplicate records in your Mongo database.

klaus385 commented 2 years ago

@buraksekili I tried deploying without providing .redis.pass at all, and bootstrapping is still being attempted. As for Mongo, what would those duplicate records be, and how would you suggest resolving them in the interim so we can proceed? As for the Kubernetes jobs, none appear to run at all.

buraksekili commented 2 years ago

I just realized that in the Tyk Pro values.yaml you sent, the .Values.bootstrap field is set to false, which means bootstrapping is disabled. Can you please retry after setting it to true? Please delete the preceding jobs if they exist. It would also be better to have a fresh Mongo installation to prevent duplicate-record errors. You may want to try using simple-redis and simple-mongo, as described here, to verify that everything works.
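Concretely, that suggestion amounts to flipping the three bootstrap flags in the values you posted:

```yaml
# Enable bootstrapping (your current values set all three of these to false)
bootstrap: true
portal:
  bootstrap: true
dash:
  bootstrap: true
```

After changing these, delete any leftover bootstrap jobs before upgrading the release so the new jobs can run.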

In the meantime, I will try to reproduce this error and inform you about possible solutions regarding Mongo and other stuff. Again, thank you!

klaus385 commented 2 years ago

@buraksekili thanks again for looking into this. After changing the chart version to 0.9.5 and not setting redis.pass, I no longer get the earlier bootstrap screen.

Since we were on 0.8.2, we were trying to keep chart updates to a minimum, but it seems the changes made between 0.9.0 and 0.9.5 resolved our issue.

With that being said, I appreciate the feedback given thus far; it helped us proceed with migrating to this maintained upstream chart.