bitnami / charts

Bitnami Helm Charts
https://bitnami.com

[bitnami/rabbitmq] When use load definitions and restart pods password of user account changes. #5829

Closed esteban1983cl closed 3 years ago

esteban1983cl commented 3 years ago

Which chart: The name (and version) of the affected chart
name: rabbitmq
version: 8.11.3

Describe the bug: I'm trying to use load_definitions to set a policy via the extraSecrets Helm chart field.

extraSecrets:
  load-definition:
    load_definition.json: |
      {
        "vhosts": [
          {
            "name": "/"
          }
        ],
        "policies": [
          {
            "name": "ha-all",
            "pattern": ".*\..*",
            "vhost": "/",
            "definition": {
              "ha-mode": "all"
            }
          }
        ]
      }
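As an aside, the policy `pattern` above is written with the escape `\.`, which is not a valid JSON escape sequence: a strict JSON parser rejects the document as written, while `\\.` parses to the intended regex `.*\..*`. A quick hypothetical check (not part of the chart), sketched in Python:

```python
import json

# The policy "pattern" as written above uses "\.", which is not a valid
# JSON escape sequence, so a strict parser rejects it; "\\." parses to
# the intended regex ".*\..*".
bad = r'{"pattern": ".*\..*"}'
good = r'{"pattern": ".*\\..*"}'

try:
    json.loads(bad)
except json.JSONDecodeError as err:
    print("rejected:", err.msg)

print("parsed:", json.loads(good)["pattern"])
```

Whether this matters in practice depends on how leniently the definitions file is parsed at import time.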

To Reproduce
Steps to reproduce the behavior:

  1. Install chart version enabling load definitions feature with that configuration and 3 replicas
  2. Wait for running state
  3. Delete pods kubectl delete po -l app.kubernetes.io/instance=rabbitmq -n <namespace> --force --grace-period=0
  4. Check the logs kubectl logs -f -l app.kubernetes.io/instance=rabbitmq -n <namespace>
  5. Check if clients can connect

Expected behavior: the load definitions feature doesn't affect the credentials.

Version of Helm and Kubernetes:

version.BuildInfo{Version:"v3.5.2", GitCommit:"167aac70832d3a384f65f9745335e9fb40169dc2", GitTreeState:"dirty", GoVersion:"go1.15.7"}
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.15", GitCommit:"73dd5c840662bb066a146d0871216333181f4b64", GitTreeState:"clean", BuildDate:"2021-01-13T13:22:41Z", GoVersion:"go1.13.15", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.12-eks-3e38fc", GitCommit:"3e38fcc260dba9935ce1b2e9343f9d232938339a", GitTreeState:"clean", BuildDate:"2020-05-23T05:53:01Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}

Additional context: I tried to persist the credentials using the following load_definitions file, but without success:

{
  "vhosts": [
    {
      "name": "/"
    }
  ],
  "users": [
    {
      "name": "{{ .Values.auth.username }}",
      "password": "{{ .Values.auth.password }}",
      "tags": "administrator"
    }
  ],
  "policies": [
    {
      "name": "ha-all",
      "pattern": ".*\..*",
      "vhost": "/",
      "definition": {
        "ha-mode": "all"
      }
    }
  ]
}
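Background that may help here (my summary of RabbitMQ's documented behavior, not something stated in this thread): RabbitMQ never stores the plaintext password. With the default `rabbit_password_hashing_sha256` scheme it stores base64(salt ++ SHA-256(salt ++ password)) with a fresh random 4-byte salt, and exported definitions therefore carry a `password_hash` field rather than `password`. A minimal sketch of that scheme:

```python
import base64
import hashlib
import os

def rabbit_password_hash(password, salt=None):
    """Sketch of RabbitMQ's default password hashing
    (rabbit_password_hashing_sha256): base64(salt ++ SHA-256(salt ++ password)),
    where salt is 4 random bytes."""
    if salt is None:
        salt = os.urandom(4)  # RabbitMQ generates a fresh salt each time
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    return base64.b64encode(salt + digest).decode("ascii")

# Fixed salt -> deterministic hash; a random salt gives a different
# hash for the same password on every run.
print(rabbit_password_hash("guest", salt=b"\x90\x8d\xc6\x0a"))
```

Because the salt is random, two exports of the same user differ byte-for-byte even when the password has not changed.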
javsalgar commented 3 years ago

Hi,

I was unable to reproduce the issue. I deployed the chart with these values:

❯ cat /tmp/values.yaml
extraSecrets:
  load-definition:
    load_definition.json: |
      {
        "vhosts": [
          {
            "name": "/"
          }
        ],
        "policies": [
          {
            "name": "ha-all",
            "pattern": ".*\..*",
            "vhost": "/",
            "definition": {
              "ha-mode": "all"
            }
          }
        ]
      }

Then I entered the container and authenticated without issues:

I have no name!@rabet-rabbitmq-0:/$ rabbitmqctl authenticate_user user $RABBITMQ_PASSWORD
Authenticating user "user" ...
Success

Then I deleted the pod and tried again:

❯ kubectl delete pod rabet-rabbitmq-0
pod "rabet-rabbitmq-0" deleted

... After restarting 

I have no name!@rabet-rabbitmq-0:/$ rabbitmqctl authenticate_user user $RABBITMQ_PASSWORD
Authenticating user "user" ...
Success

Is there anything else you changed in the configuration?

esteban1983cl commented 3 years ago

Hi again, thanks for your help. I forgot to share some other details:

I install the chart using this command line:

helm upgrade --install \
        --namespace ${NAMESPACE} rabbitmq bitnami/rabbitmq \
        -f ./values-${CI_ENVIRONMENT_NAME}.yaml \
        --set auth.password=${RABBITMQ_PASSWORD} \
        --set auth.erlangCookie=${RABBITMQ_ERLANG_COOKIE} \
        --set ingress.hostname=rabbitmq-sai.${DOMAIN} \
        --version ${CHART_VERSION}
And I'm using this values.yaml:

```yaml
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
global:
  # imageRegistry: myRegistryName
  # imagePullSecrets:
  #   - docker-pull-secrets
  # storageClass: myStorageClass

## Bitnami RabbitMQ image version
## ref: https://hub.docker.com/r/bitnami/rabbitmq/tags/
##
image:
  registry: docker.io
  repository: bitnami/rabbitmq
  tag: 3.8.14-debian-10-r0
  ## set to true if you would like to see extra information on logs
  ## It turns BASH and/or NAMI debugging in the image
  ##
  debug: false
  ## Specify a imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

## String to partially override rabbitmq.fullname template (will maintain the release name)
##
# nameOverride:

## String to fully override rabbitmq.fullname template
##
# fullnameOverride:

## Force target Kubernetes version (using Helm capabilites if not set)
##
kubeVersion:

## Kubernetes Cluster Domain
##
clusterDomain: cluster.local

## Deployment pod host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
hostAliases: []

## RabbitMQ Authentication parameters
##
auth:
  ## RabbitMQ application username
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  username: user
  ## RabbitMQ application password
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # password:
  # existingPasswordSecret: name-of-existing-secret
  ## Erlang cookie to determine whether different nodes are allowed to communicate with each other
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # erlangCookie:
  # existingErlangSecret: name-of-existing-secret
  ## Enable encryption to rabbitmq
  ## ref: https://www.rabbitmq.com/ssl.html
  ##
  tls:
    enabled: false
    failIfNoPeerCert: true
    sslOptionsVerify: verify_peer
    caCertificate: |-
    serverCertificate: |-
    serverKey: |-
    # existingSecret: name-of-existing-secret-to-rabbitmq
    existingSecretFullChain: false

## Value for the RABBITMQ_LOGS environment variable
## ref: https://www.rabbitmq.com/logging.html#log-file-location
##
logs: '-'

## RabbitMQ Max File Descriptors
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
## ref: https://www.rabbitmq.com/install-debian.html#kernel-resource-limits
##
ulimitNofiles: '65536'

## RabbitMQ maximum available scheduler threads and online scheduler threads.
## By default it will create a thread per CPU detected, with the following parameters you can tune it manually.
## ref: https://hamidreza-s.github.io/erlang/scheduling/real-time/preemptive/migration/2016/02/09/erlang-scheduler-details.html#scheduler-threads
## ref: https://github.com/bitnami/charts/issues/2189
##
# maxAvailableSchedulers: 2
# onlineSchedulers: 1

## The memory threshold under which RabbitMQ will stop reading from client network sockets, in order to avoid being killed by the OS
## ref: https://www.rabbitmq.com/alarms.html
## ref: https://www.rabbitmq.com/memory.html#threshold
##
memoryHighWatermark:
  enabled: true
  ## Memory high watermark type. Either absolute or relative
  ##
  type: 'relative'
  ## Memory high watermark value.
  ## The default value of 0.4 stands for 40% of available RAM
  ## Note: the memory relative limit is applied to the resource.limits.memory to calculate the memory threshold
  ## You can also use an absolute value, e.g.: 256MB
  ##
  value: 0.4

## Plugins to enable
##
plugins: 'rabbitmq_management rabbitmq_peer_discovery_k8s'

## Community plugins to download during container initialization.
## Combine it with extraPlugins to also enable them.
##
# communityPlugins:

## Extra plugins to enable
## Use this instead of `plugins` to add new plugins
##
extraPlugins: 'rabbitmq_auth_backend_ldap'

## Clustering settings
##
clustering:
  addressType: hostname
  ## Rebalance master for queues in cluster when new replica is created
  ## ref: https://www.rabbitmq.com/rabbitmq-queues.8.html#rebalance
  ##
  rebalance: false
  ## forceBoot: executes 'rabbitmqctl force_boot' to force boot cluster shut down unexpectedly in an
  ## unknown order.
  ## ref: https://www.rabbitmq.com/rabbitmqctl.8.html#force_boot
  ##
  forceBoot: false

## Loading a RabbitMQ definitions file to configure RabbitMQ
##
loadDefinition:
  enabled: true
  ## Can be templated if needed, e.g.
  ## existingSecret: "{{ .Release.Name }}-load-definition"
  ##
  existingSecret: load-definition

## Command and args for running the container (set to default if not set). Use array form
##
# command:
# args:

## Default duration in seconds k8s waits for container to exit before sending kill signal. Any time in excess of
## 10 seconds will be spent waiting for any synchronization necessary for cluster not to lose data.
##
terminationGracePeriodSeconds: 120

## Additional environment variables to set
## E.g:
## extraEnvVars:
##   - name: FOO
##     value: BAR
##
extraEnvVars: []

## ConfigMap with extra environment variables
##
# extraEnvVarsCM:

## Secret with extra environment variables
##
# extraEnvVarsSecret:

## Extra ports to be included in container spec, primarily informational
## E.g:
## extraContainerPorts:
##   - name: new_port_name
##     containerPort: 1234
##
extraContainerPorts: []

## Configuration file content: required cluster configuration
## Do not override unless you know what you are doing.
## To add more configuration, use `extraConfiguration` of `advancedConfiguration` instead
##
configuration: |-
  ## Username and password
  ##
  default_user = {{ .Values.auth.username }}
  default_pass = CHANGEME
  ## Clustering
  ##
  cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
  cluster_formation.k8s.host = kubernetes.default.svc.{{ .Values.clusterDomain }}
  cluster_formation.node_cleanup.interval = 10
  cluster_formation.node_cleanup.only_log_warning = true
  cluster_partition_handling = autoheal
  # queue master locator
  queue_master_locator = min-masters
  # enable guest user
  loopback_users.guest = false
  {{ tpl .Values.extraConfiguration . }}
  {{- if .Values.auth.tls.enabled }}
  ssl_options.verify = {{ .Values.auth.tls.sslOptionsVerify }}
  listeners.ssl.default = {{ .Values.service.tlsPort }}
  ssl_options.fail_if_no_peer_cert = {{ .Values.auth.tls.failIfNoPeerCert }}
  ssl_options.cacertfile = /opt/bitnami/rabbitmq/certs/ca_certificate.pem
  ssl_options.certfile = /opt/bitnami/rabbitmq/certs/server_certificate.pem
  ssl_options.keyfile = /opt/bitnami/rabbitmq/certs/server_key.pem
  {{- end }}
  {{- if .Values.ldap.enabled }}
  auth_backends.1 = rabbit_auth_backend_ldap
  auth_backends.2 = internal
  {{- range $index, $server := .Values.ldap.servers }}
  auth_ldap.servers.{{ add $index 1 }} = {{ $server }}
  {{- end }}
  auth_ldap.port = {{ .Values.ldap.port }}
  auth_ldap.user_dn_pattern = {{ .Values.ldap.user_dn_pattern }}
  {{- if .Values.ldap.tls.enabled }}
  auth_ldap.use_ssl = true
  {{- end }}
  {{- end }}
  {{- if .Values.metrics.enabled }}
  ## Prometheus metrics
  ##
  prometheus.tcp.port = 9419
  {{- end }}
  {{- if .Values.memoryHighWatermark.enabled }}
  ## Memory Threshold
  ##
  total_memory_available_override_value = {{ include "rabbitmq.toBytes" .Values.resources.limits.memory }}
  vm_memory_high_watermark.{{ .Values.memoryHighWatermark.type }} = {{ .Values.memoryHighWatermark.value }}
  {{- end }}

## Configuration file content: extra configuration
## Use this instead of `configuration` to add more configuration
##
extraConfiguration: |-
  #default_vhost = {{ .Release.Namespace }}-vhost
  #disk_free_limit.absolute = 50MB
  load_definitions = /app/load_definition.json

## Configuration file content: advanced configuration
## Use this as additional configuration in classic config format (Erlang term configuration format)
##
## If you set LDAP with TLS/SSL enabled and you are using self-signed certificates, uncomment these lines.
## advancedConfiguration: |-
##   [{
##     rabbitmq_auth_backend_ldap,
##     [{
##       ssl_options,
##       [{
##         verify, verify_none
##       }, {
##         fail_if_no_peer_cert,
##         false
##       }]
##     ]}
##   }].
##
advancedConfiguration: |-

## LDAP configuration
##
ldap:
  enabled: false
  ## List of LDAP servers hostnames
  ##
  servers: []
  ## LDAP servers port
  ##
  port: '389'
  ## Pattern used to translate the provided username into a value to be used for the LDAP bind
  ## ref: https://www.rabbitmq.com/ldap.html#usernames-and-dns
  ##
  user_dn_pattern: cn=${username},dc=example,dc=org
  tls:
    ## If you enabled TLS/SSL you can set advaced options using the advancedConfiguration parameter.
    ##
    enabled: false

## extraVolumes and extraVolumeMounts allows you to mount other volumes
## Examples:
## extraVolumeMounts:
##   - name: extras
##     mountPath: /usr/share/extras
##     readOnly: true
## extraVolumes:
##   - name: extras
##     emptyDir: {}
##
extraVolumeMounts: []
extraVolumes: []

## Optionally specify extra secrets to be created by the chart.
## This can be useful when combined with load_definitions to automatically create the secret containing the definitions to be loaded.
## Example:
## extraSecrets:
##   load-definition:
##     load_definition.json: |
##       {
##         ...
##       }
##
## Set this flag to true if extraSecrets should be created with <release-name> prepended.
##
extraSecretsPrependReleaseName: false
extraSecrets:
  load-definition:
    load_definition.json: |
      {
        "vhosts": [
          {
            "name": "/"
          }
        ],
        "users": [
          {
            "name": "{{ .Values.auth.username }}",
            "password": "{{ .Values.auth.password }}",
            "tags": "administrator"
          }
        ],
        "policies": [
          {
            "name": "ha-all",
            "pattern": ".*\..*",
            "vhost": "/",
            "definition": {
              "ha-mode": "all"
            }
          }
        ]
      }

## Number of RabbitMQ replicas to deploy
##
replicaCount: 3

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## RabbitMQ should be initialized one by one when building cluster for the first time.
## Therefore, the default value of podManagementPolicy is 'OrderedReady'
## Once the RabbitMQ participates in the cluster, it waits for a response from another
## RabbitMQ in the same cluster at reboot, except the last RabbitMQ of the same cluster.
## If the cluster exits gracefully, you do not need to change the podManagementPolicy
## because the first RabbitMQ of the statefulset always will be last of the cluster.
## However if the last RabbitMQ of the cluster is not the first RabbitMQ due to a failure,
## you must change podManagementPolicy to 'Parallel'.
## ref : https://www.rabbitmq.com/clustering.html#restarting
##
podManagementPolicy: OrderedReady

## Pod labels. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels:
  k8s-app: rabbitmq-sai

## Pod annotations. Evaluated as a template
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}

## updateStrategy for RabbitMQ statefulset
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
##
updateStrategyType: RollingUpdate

## Statefulset labels. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
statefulsetLabels: {}

## Name of the priority class to be used by RabbitMQ pods, priority class needs to be created beforehand
## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ''

## Pod affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAffinityPreset: ""

## Pod anti-affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAntiAffinityPreset: soft

## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## Allowed values: soft, hard
##
nodeAffinityPreset:
  ## Node affinity type
  ## Allowed values: soft, hard
  ##
  type: ""
  ## Node label key to match
  ## E.g.
  ## key: "kubernetes.io/e2e-az-name"
  ##
  key: ""
  ## Node label values to match
  ## E.g.
  ## values:
  ##   - e2e-az1
  ##   - e2e-az2
  ##
  values: []

## Affinity for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - topologyKey: kubernetes.io/hostname
    preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          topologyKey: failure-domain.beta.kubernetes.io/zone
        weight: 100

## Node labels for pod assignment. Evaluated as a template
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector:
  node-role.kubernetes.io/user: "true"

## Tolerations for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods
##
topologySpreadConstraints: {}

## RabbitMQ pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

## RabbitMQ containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## Example:
## containerSecurityContext:
##   capabilities:
##     drop: ["NET_RAW"]
##   readOnlyRootFilesystem: true
##
containerSecurityContext: {}

## RabbitMQ containers' resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 1000m
    memory: 500Mi

## RabbitMQ containers' liveness and readiness probes.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
  enabled: true
  initialDelaySeconds: 120
  timeoutSeconds: 20
  periodSeconds: 30
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 10
  timeoutSeconds: 20
  periodSeconds: 30
  failureThreshold: 3
  successThreshold: 1

## Custom Liveness probe
##
customLivenessProbe: {}

## Custom Rediness probe
##
customReadinessProbe: {}

## Custom Startup probe
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes
##
customStartupProbe: {}

## Add init containers to the pod
## Example:
## initContainers:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
initContainers: {}

## Add sidecars to the pod.
## Example:
## sidecars:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
sidecars: {}

## RabbitMQ pods ServiceAccount
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the rabbitmq.fullname template
  ##
  # name:

## Role Based Access
## ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
  ## Whether RBAC rules should be created
  ## binding RabbitMQ ServiceAccount to a role
  ## that allows RabbitMQ pods querying the K8s API
  ##
  create: true

persistence:
  ## this enables PVC templates that will create one per pod
  ##
  enabled: false
  ## rabbitmq data Persistent Volume Storage Class
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: "gp2"
  ## selector can be used to match an existing PersistentVolume
  ## selector:
  ##   matchLabels:
  ##     app: my-app
  selector:
    matchLabels:
      type: rabbitmq-pvc
  accessMode: ReadWriteOnce
  ## Existing PersistentVolumeClaims
  ## The value is evaluated as a template
  ## So, for example, the name can depend on .Release or .Chart
  # existingClaim: ""
  ## If you change this value, you might have to adjust `rabbitmq.diskFreeLimit` as well.
  ##
  size: 8Gi
  volumes:
  #  - name: volume_name
  #    emptyDir: {}

## Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
  create: false
  ## Min number of pods that must still be available after the eviction
  ##
  minAvailable: 1
  ## Max number of pods that can be unavailable after the eviction
  ##
  # maxUnavailable: 1

## Network Policy configuration
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
##
networkPolicy:
  ## Enable creation of NetworkPolicy resources
  ##
  enabled: false
  ## The Policy model to apply. When set to false, only pods with the correct
  ## client label will have network access to the ports RabbitMQ is listening
  ## on. When true, RabbitMQ will accept connections from any source
  ## (with the correct destination port).
  ##
  allowExternal: true
  ## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
  ##
  # additionalRules:
  #   - matchLabels:
  #     - role: frontend
  #   - matchExpressions:
  #     - key: role
  #       operator: In
  #       values:
  #         - frontend

## Kubernetes service type
##
service:
  type: ClusterIP
  ## Amqp port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  port: 5672
  ## Amqp service port name
  ##
  portName: amqp
  ## Amqp Tls port
  ##
  tlsPort: 5671
  ## Amqp Tls service port name
  ##
  tlsPortName: amqp-ssl
  ## Node port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # nodePort: 30672
  ## Node port Tls
  ##
  # tlsNodePort: 30671
  ## Dist port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  distPort: 25672
  ## Dist service port name
  ##
  distPortName: dist
  ## Node port (Manager)
  ##
  # distNodePort: 30676
  ## RabbitMQ Manager port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  managerPortEnabled: true
  managerPort: 15672
  ## RabbitMQ Manager service port name
  ##
  managerPortName: http-stats
  ## Node port (Manager)
  ##
  # managerNodePort: 30673
  ## RabbitMQ Prometheues metrics port
  ##
  metricsPort: 9419
  ## RabbitMQ Prometheues metrics service port name
  ##
  metricsPortName: metrics
  ## Node port for metrics
  ##
  # metricsNodePort: 30674
  ## Node port for EPMD Discovery
  ##
  # epmdNodePort: 30675
  ## Service port name for EPMD Discovery
  ##
  epmdPortName: epmd
  ## Extra ports to expose
  ## E.g.:
  ## extraPorts:
  ##   - name: new_svc_name
  ##     port: 1234
  ##     targetPort: 1234
  ##
  extraPorts: []
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  #   - 10.10.10.0/24
  ## Set the ExternalIPs
  ##
  # externalIPs:
  ## Enable client source IP preservation
  ## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
  ##
  externalTrafficPolicy: Cluster
  ## Set the LoadBalancerIP
  ##
  # loadBalancerIP:
  ## Service labels. Evaluated as a template
  ##
  labels: {}
  ## Service annotations. Evaluated as a template
  ## Example:
  ## annotations:
  ##   service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  ##
  annotations: {}
  ## Headless Service annotations. Evaluated as a template
  ## Example:
  ## annotations:
  ##   external-dns.alpha.kubernetes.io/internal-hostname: rabbitmq.example.com
  ##
  annotationsHeadless: {}

## Configure the ingress resource that allows you to access the
## RabbitMQ installation. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## Set to true to enable ingress record generation
  ##
  enabled: true
  ## Path for the default host. You may need to set this to '/*' in order to use this
  ## with ALB ingress controllers.
  ##
  path: /
  ## Ingress Path type
  ##
  pathType: ImplementationSpecific
  ## Set this to true in order to add the corresponding annotations for cert-manager
  ##
  certManager: false
  ## When the ingress is enabled, a host pointing to this will be created
  ##
  hostname: rabbitmq-sai.local
  ## Ingress annotations done as key:value pairs
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ##
  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  ##
  annotations:
    kubernetes.io/ingress.class: nginx-http-private
  ## Enable TLS configuration for the hostname defined at ingress.hostname parameter
  ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
  ## or a custom one if you use the tls.existingSecret parameter
  ## You can use the ingress.secrets parameter to create this TLS secret or relay on cert-manager to create it
  ##
  tls: false
  ## existingSecret: name-of-existing-secret
  ##
  ## The list of additional hostnames to be covered with this ingress record.
  ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
  ## extraHosts:
  ##   - name: rabbitmq.local
  ##     path: /
  ##
  ## The tls configuration for additional hostnames to be covered with this ingress record.
  ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  ## extraTls:
  ##   - hosts:
  ##       - rabbitmq.local
  ##     secretName: rabbitmq.local-tls
  ##
  ## If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
  ## -----BEGIN RSA PRIVATE KEY-----
  ##
  ## name should line up with a tlsSecret set further up
  ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
  ##
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  ##
  secrets: []
  ## - name: rabbitmq.local-tls
  ##   key:
  ##   certificate:
  ##

## Prometheus Metrics
##
metrics:
  enabled: true
  plugins: 'rabbitmq_prometheus'
  ## Prometheus pod annotations
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '{{ .Values.service.metricsPort }}'
  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ##
  serviceMonitor:
    ## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
    ##
    enabled: true
    ## Specify the namespace in which the serviceMonitor resource will be created
    ##
    namespace: "monitoring"
    ## Specify the interval at which metrics should be scraped
    ##
    interval: 30s
    ## Specify the timeout after which the scrape is ended
    ##
    # scrapeTimeout: 30s
    ## Specify Metric Relabellings to add to the scrape endpoint
    ##
    # relabellings:
    ## Specify honorLabels parameter to add the scrape endpoint
    ##
    honorLabels: false
    ## Specify the release for ServiceMonitor. Sometimes it should be custom for prometheus operator to work
    ##
    release: "prometheus-operator"
    ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    ##
    additionalLabels:
      release: prometheus-operator
  ## Custom PrometheusRule to be defined
  ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
  ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
  ##
  prometheusRule:
    enabled: true
    additionalLabels:
      release: prometheus-operator
    namespace: 'monitoring'
    ## List of rules, used as template by Helm.
    ## These are just examples rules inspired from https://awesome-prometheus-alerts.grep.to/rules.html
    rules:
      - alert: RabbitmqDown
        expr: rabbitmq_up{service="{{ template "rabbitmq.fullname" . }}"} == 0
        for: 5m
        labels:
          severity: error
        annotations:
          summary: Rabbitmq down (instance {{ "{{ $labels.instance }}" }})
          description: RabbitMQ node down
      - alert: ClusterDown
        expr: |
          sum(rabbitmq_running{service="{{ template "rabbitmq.fullname" . }}"}) < {{ .Values.replicaCount }}
        for: 5m
        labels:
          severity: error
        annotations:
          summary: Cluster down (instance {{ "{{ $labels.instance }}" }})
          description: |
            Less than {{ .Values.replicaCount }} nodes running in RabbitMQ cluster
            VALUE = {{ "{{ $value }}" }}
      - alert: ClusterPartition
        expr: rabbitmq_partitions{service="{{ template "rabbitmq.fullname" . }}"} > 0
        for: 5m
        labels:
          severity: error
        annotations:
          summary: Cluster partition (instance {{ "{{ $labels.instance }}" }})
          description: |
            Cluster partition
            VALUE = {{ "{{ $value }}" }}
      - alert: OutOfMemory
        expr: |
          rabbitmq_node_mem_used{service="{{ template "rabbitmq.fullname" . }}"} / rabbitmq_node_mem_limit{service="{{ template "rabbitmq.fullname" . }}"} * 100 > 90
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: Out of memory (instance {{ "{{ $labels.instance }}" }})
          description: |
            Memory available for RabbmitMQ is low (< 10%)\n VALUE = {{ "{{ $value }}" }}
            LABELS: {{ "{{ $labels }}" }}
      - alert: TooManyConnections
        expr: rabbitmq_connectionsTotal{service="{{ template "rabbitmq.fullname" . }}"} > 1000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: Too many connections (instance {{ "{{ $labels.instance }}" }})
          description: |
            RabbitMQ instance has too many connections (> 1000)
            VALUE = {{ "{{ $value }}" }}\n LABELS: {{ "{{ $labels }}" }}
    #rules: []

## Init Container parameters
## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each component
## values from the securityContext section of the component
##
volumePermissions:
  enabled: false
  ## Bitnami Minideb image
  ## ref: https://hub.docker.com/r/bitnami/minideb/tags/
  ##
  image:
    registry: docker.io
    repository: bitnami/bitnami-shell
    tag: "10"
    ## Specify a imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## Example:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    ## pullSecrets:
    ##   - docker-pull-secrets
  ## Init Container resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 100m
    #   memory: 128Mi
    requests: {}
    #   cpu: 100m
    #   memory: 128Mi
```
javsalgar commented 3 years ago

Just to confirm something: does the password CHANGEME work? It could be that it is trying to use that authentication.

esteban1983cl commented 3 years ago

Hello, with that configuration I get invalid credentials:

```
2021-03-19 14:31:45.006 [warning] <0.959.0> HTTP access denied: user 'user' - invalid credentials
2021-03-19 14:31:45.008 [warning] <0.961.0> HTTP access denied: user 'user' - invalid credentials
2021-03-19 14:31:45.009 [warning] <0.963.0> HTTP access denied: user 'user' - invalid credentials
2021-03-19 14:31:45.010 [warning] <0.965.0> HTTP access denied: user 'user' - invalid credentials
2021-03-19 14:31:45.013 [warning] <0.967.0> HTTP access denied: user 'user' - invalid credentials
2021-03-19 14:31:51.109 [warning] <0.964.0> HTTP access denied: user 'user' - invalid credentials
```
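One way to confirm which password the chart currently stores is to read it back from the chart-managed secret. This is a sketch that assumes the release (and therefore the secret) is named `rabbitmq` and uses the chart's usual `rabbitmq-password` key:

```shell
# Decode the password from the secret the chart created (names are assumptions)
kubectl get secret --namespace <namespace> rabbitmq \
  -o jsonpath="{.data.rabbitmq-password}" | base64 --decode
```

Comparing that value against what the clients use should show whether the credentials drifted after the restart.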

And an error during startup in RabbitMQ, with many restarts:

```
rabbitmq 14:31:12.59 ERROR ==> Couldn't change password for user 'user'.
rabbitmq 14:31:12.60 INFO ==> Stopping RabbitMQ...
rabbitmq 14:31:13.40 ERROR ==> Couldn't change password for user 'user'.
rabbitmq 14:31:13.40 INFO ==> Stopping RabbitMQ...
rabbitmq 14:31:17.70 ERROR ==> Couldn't change password for user 'user'.
rabbitmq 14:31:17.70 INFO ==> Stopping RabbitMQ...
```

```
rabbitmq 14:31:22.43 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-rabbitmq/issues
rabbitmq 14:31:22.43
rabbitmq 14:31:22.43 INFO ==> Starting RabbitMQ setup
rabbitmq 14:31:22.44 INFO ==> Validating settings in RABBITMQ_* env vars..
rabbitmq 14:31:22.46 INFO ==> Initializing RabbitMQ...
rabbitmq 14:31:22.48 INFO ==> Persisted data detected. Restoring...
rabbitmq 14:31:22.49 INFO ==> RabbitMQ setup finished!
rabbitmq 14:31:22.50 INFO ==> Starting RabbitMQ
```

javsalgar commented 3 years ago

We should see what the issue with the password change error is. Could you relaunch the chart with `--set image.debug=true`? This should provide more insight into the error that appears.
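For reference, a minimal way to do that without losing the current configuration, assuming the release is named `rabbitmq` and was installed from the `bitnami` repo:

```shell
# Re-render the release with debug logging in the container (release name assumed)
helm upgrade rabbitmq bitnami/rabbitmq \
  --namespace <namespace> \
  --reuse-values \
  --set image.debug=true
```

`--reuse-values` keeps the values from the previous release and only overlays the debug flag.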

esteban1983cl commented 3 years ago

> We should see what the issue with the password change error is. Could you relaunch the chart with `--set image.debug=true`? This should provide more insight into the error that appears.

Log after update load definitions:

```shell
rabbitmq 13:38:21.57
rabbitmq 13:38:21.58 Welcome to the Bitnami rabbitmq container
rabbitmq 13:38:21.58 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-rabbitmq
rabbitmq 13:38:21.58 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-rabbitmq/issues
rabbitmq 13:38:21.58
rabbitmq 13:38:21.58 INFO ==> ** Starting RabbitMQ setup **
rabbitmq 13:38:21.60 INFO ==> Validating settings in RABBITMQ_* env vars..
rabbitmq 13:38:21.61 INFO ==> Initializing RabbitMQ...
rabbitmq 13:38:21.62 DEBUG ==> Creating environment file...
rabbitmq 13:38:21.62 DEBUG ==> Creating enabled_plugins file...
rabbitmq 13:38:21.63 DEBUG ==> Creating Erlang cookie...
rabbitmq 13:38:21.63 DEBUG ==> Ensuring expected directories/files exist...
rabbitmq 13:38:21.64 INFO ==> Starting RabbitMQ in background...
Waiting for erlang distribution on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' while OS process '45' is running
Configuring logger redirection
Waiting for applications 'rabbit_and_plugins' to start on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:27.068 [debug] <0.301.0> Lager installed handler error_logger_lager_h into error_logger
2021-03-22 13:38:27.078 [debug] <0.322.0> Lager installed handler lager_forwarder_backend into rabbit_log_ldap_lager_event
2021-03-22 13:38:27.078 [debug] <0.319.0> Lager installed handler lager_forwarder_backend into rabbit_log_federation_lager_event
2021-03-22 13:38:27.078 [debug] <0.304.0> Lager installed handler lager_forwarder_backend into error_logger_lager_event
2021-03-22 13:38:27.078 [debug] <0.307.0> Lager installed handler lager_forwarder_backend into rabbit_log_lager_event
2021-03-22 13:38:27.078 [debug] <0.310.0> Lager installed handler lager_forwarder_backend into rabbit_log_channel_lager_event
2021-03-22 13:38:27.078 [debug] <0.313.0> Lager installed handler lager_forwarder_backend into rabbit_log_connection_lager_event
2021-03-22 13:38:27.078 [debug] <0.316.0> Lager installed handler lager_forwarder_backend into rabbit_log_feature_flags_lager_event
2021-03-22 13:38:27.079 [debug] <0.325.0> Lager installed handler lager_forwarder_backend into rabbit_log_mirroring_lager_event
2021-03-22 13:38:27.081 [debug] <0.328.0> Lager installed handler lager_forwarder_backend into rabbit_log_prelaunch_lager_event
2021-03-22 13:38:27.082 [debug] <0.331.0> Lager installed handler lager_forwarder_backend into rabbit_log_queue_lager_event
2021-03-22 13:38:27.084 [debug] <0.334.0> Lager installed handler lager_forwarder_backend into rabbit_log_ra_lager_event
2021-03-22 13:38:27.087 [debug] <0.337.0> Lager installed handler lager_forwarder_backend into rabbit_log_shovel_lager_event
2021-03-22 13:38:27.088 [debug] <0.340.0> Lager installed handler lager_forwarder_backend into rabbit_log_upgrade_lager_event
2021-03-22 13:38:27.166 [info] <0.44.0> Application lager started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:27.476 [info] <0.44.0> Application mnesia started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:27.477 [info] <0.273.0>
 Starting RabbitMQ 3.8.14 on Erlang 22.3
 Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
 Licensed under the MPL 2.0. Website: https://rabbitmq.com

  ##  ##      RabbitMQ 3.8.14
  ##  ##
  ##########  Copyright (c) 2007-2021 VMware, Inc. or its affiliates.
  ######  ##
  ##########  Licensed under the MPL 2.0. Website: https://rabbitmq.com

  Doc guides: https://rabbitmq.com/documentation.html
  Support:    https://rabbitmq.com/contact.html
  Tutorials:  https://rabbitmq.com/getstarted.html
  Monitoring: https://rabbitmq.com/monitoring.html

  Logs:

  Config file(s): /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf

  Starting broker...2021-03-22 13:38:27.478 [info] <0.273.0>
 node           : rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local
 home dir       : /opt/bitnami/rabbitmq/.rabbitmq
 config file(s) : /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf
 cookie hash    : RxahyJ80werAKhDjXt6lvA==
 log(s)         :
 database dir   : /bitnami/rabbitmq/mnesia/rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local
2021-03-22 13:38:27.567 [debug] <0.297.0> Lager installed handler lager_backend_throttle into lager_event
2021-03-22 13:38:28.572 [info] <0.273.0> Feature flags: list of feature flags found:
2021-03-22 13:38:28.572 [info] <0.273.0> Feature flags:   [ ] drop_unroutable_metric
2021-03-22 13:38:28.572 [info] <0.273.0> Feature flags:   [ ] empty_basic_get_metric
2021-03-22 13:38:28.572 [info] <0.273.0> Feature flags:   [ ] implicit_default_bindings
2021-03-22 13:38:28.573 [info] <0.273.0> Feature flags:   [ ] maintenance_mode_status
2021-03-22 13:38:28.573 [info] <0.273.0> Feature flags:   [ ] quorum_queue
2021-03-22 13:38:28.573 [info] <0.273.0> Feature flags:   [ ] user_limits
2021-03-22 13:38:28.573 [info] <0.273.0> Feature flags:   [ ] virtual_host_metadata
2021-03-22 13:38:28.573 [info] <0.273.0> Feature flags: feature flag states written to disk: yes
2021-03-22 13:38:32.023 [info] <0.273.0> Running boot step pre_boot defined by app rabbit
2021-03-22 13:38:32.023 [info] <0.273.0> Running boot step rabbit_core_metrics defined by app rabbit
2021-03-22 13:38:32.023 [info] <0.273.0> Running boot step rabbit_alarm defined by app rabbit
2021-03-22 13:38:32.027 [info] <0.418.0> Memory high watermark set to 409 MiB (429496729 bytes) of 1024 MiB (1073741824 bytes) total
2021-03-22 13:38:32.031 [info] <0.420.0> Enabling free disk space monitoring
2021-03-22 13:38:32.031 [info] <0.420.0> Disk free limit set to 50MB
2021-03-22 13:38:32.035 [info] <0.273.0> Running boot step code_server_cache defined by app rabbit
2021-03-22 13:38:32.035 [info] <0.273.0> Running boot step file_handle_cache defined by app rabbit
2021-03-22 13:38:32.035 [info] <0.423.0> Limiting to approx 65439 file handles (58893 sockets)
2021-03-22 13:38:32.035 [info] <0.424.0> FHC read buffering: OFF
2021-03-22 13:38:32.035 [info] <0.424.0> FHC write buffering: ON
2021-03-22 13:38:32.036 [info] <0.273.0> Running boot step worker_pool defined by app rabbit
2021-03-22 13:38:32.036 [info] <0.363.0> Will use 8 processes for default worker pool
2021-03-22 13:38:32.036 [info] <0.363.0> Starting worker pool 'worker_pool' with 8 processes in it
2021-03-22 13:38:32.037 [info] <0.273.0> Running boot step database defined by app rabbit
2021-03-22 13:38:32.037 [info] <0.273.0> Node database directory at /bitnami/rabbitmq/mnesia/rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local is empty. Assuming we need to join an existing cluster or initialise from scratch...
2021-03-22 13:38:32.038 [info] <0.273.0> Configured peer discovery backend: rabbit_peer_discovery_k8s
2021-03-22 13:38:32.038 [info] <0.273.0> Will try to lock with peer discovery backend rabbit_peer_discovery_k8s
2021-03-22 13:38:32.038 [info] <0.273.0> Peer discovery backend does not support locking, falling back to randomized delay
2021-03-22 13:38:32.038 [info] <0.273.0> Peer discovery backend rabbit_peer_discovery_k8s supports registration.
2021-03-22 13:38:32.038 [info] <0.273.0> Will wait for 457 milliseconds before proceeding with registration...
2021-03-22 13:38:32.566 [info] <0.273.0> k8s endpoint listing returned nodes not yet ready: rabbitmq-2
2021-03-22 13:38:32.566 [info] <0.273.0> All discovered existing cluster peers: rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local, rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local
2021-03-22 13:38:32.566 [info] <0.273.0> Peer nodes we can cluster with: rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local, rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local
2021-03-22 13:38:32.587 [info] <0.273.0> Node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' selected for auto-clustering
2021-03-22 13:38:32.619 [info] <0.273.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2021-03-22 13:38:32.687 [info] <0.273.0> Successfully synced tables from a peer
2021-03-22 13:38:32.688 [info] <0.273.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2021-03-22 13:38:32.688 [info] <0.273.0> Successfully synced tables from a peer
2021-03-22 13:38:32.688 [warning] <0.273.0> Feature flags: the previous instance of this node must have failed to write the `feature_flags` file at `/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local-feature_flags`:
2021-03-22 13:38:32.688 [warning] <0.273.0> Feature flags:   - list of previously enabled feature flags now marked as such: [maintenance_mode_status,quorum_queue,user_limits,virtual_host_metadata]
2021-03-22 13:38:32.698 [info] <0.273.0> Feature flags: list of feature flags found:
2021-03-22 13:38:32.698 [info] <0.273.0> Feature flags:   [ ] drop_unroutable_metric
2021-03-22 13:38:32.698 [info] <0.273.0> Feature flags:   [ ] empty_basic_get_metric
2021-03-22 13:38:32.698 [info] <0.273.0> Feature flags:   [ ] implicit_default_bindings
2021-03-22 13:38:32.698 [info] <0.273.0> Feature flags:   [x] maintenance_mode_status
2021-03-22 13:38:32.698 [info] <0.273.0> Feature flags:   [x] quorum_queue
2021-03-22 13:38:32.698 [info] <0.273.0> Feature flags:   [x] user_limits
2021-03-22 13:38:32.698 [info] <0.273.0> Feature flags:   [x] virtual_host_metadata
2021-03-22 13:38:32.698 [info] <0.273.0> Feature flags: feature flag states written to disk: yes
2021-03-22 13:38:32.709 [info] <0.273.0> Feature flag `drop_unroutable_metric`: mark as enabled=true
2021-03-22 13:38:32.720 [info] <0.273.0> Feature flags: list of feature flags found:
2021-03-22 13:38:32.720 [info] <0.273.0> Feature flags:   [x] drop_unroutable_metric
2021-03-22 13:38:32.720 [info] <0.273.0> Feature flags:   [ ] empty_basic_get_metric
2021-03-22 13:38:32.720 [info] <0.273.0> Feature flags:   [ ] implicit_default_bindings
2021-03-22 13:38:32.720 [info] <0.273.0> Feature flags:   [x] maintenance_mode_status
2021-03-22 13:38:32.720 [info] <0.273.0> Feature flags:   [x] quorum_queue
2021-03-22 13:38:32.720 [info] <0.273.0> Feature flags:   [x] user_limits
2021-03-22 13:38:32.720 [info] <0.273.0> Feature flags:   [x] virtual_host_metadata
2021-03-22 13:38:32.720 [info] <0.273.0> Feature flags: feature flag states written to disk: yes
2021-03-22 13:38:32.729 [info] <0.273.0> Feature flag `empty_basic_get_metric`: mark as enabled=true
2021-03-22 13:38:32.740 [info] <0.273.0> Feature flags: list of feature flags found:
2021-03-22 13:38:32.740 [info] <0.273.0> Feature flags:   [x] drop_unroutable_metric
2021-03-22 13:38:32.740 [info] <0.273.0> Feature flags:   [x] empty_basic_get_metric
2021-03-22 13:38:32.740 [info] <0.273.0> Feature flags:   [ ] implicit_default_bindings
2021-03-22 13:38:32.740 [info] <0.273.0> Feature flags:   [x] maintenance_mode_status
2021-03-22 13:38:32.740 [info] <0.273.0> Feature flags:   [x] quorum_queue
2021-03-22 13:38:32.740 [info] <0.273.0> Feature flags:   [x] user_limits
2021-03-22 13:38:32.740 [info] <0.273.0> Feature flags:   [x] virtual_host_metadata
2021-03-22 13:38:32.740 [info] <0.273.0> Feature flags: feature flag states written to disk: yes
2021-03-22 13:38:32.749 [info] <0.273.0> Waiting for Mnesia tables for 30000 ms, 0 retries left
2021-03-22 13:38:32.749 [info] <0.273.0> Successfully synced tables from a peer
2021-03-22 13:38:32.749 [info] <0.273.0> Feature flag `implicit_default_bindings`: deleting explicit default bindings for 12 queues (it may take some time)...
2021-03-22 13:38:32.761 [info] <0.273.0> Feature flag `implicit_default_bindings`: mark as enabled=true
2021-03-22 13:38:32.772 [info] <0.273.0> Feature flags: list of feature flags found:
2021-03-22 13:38:32.772 [info] <0.273.0> Feature flags:   [x] drop_unroutable_metric
2021-03-22 13:38:32.772 [info] <0.273.0> Feature flags:   [x] empty_basic_get_metric
2021-03-22 13:38:32.772 [info] <0.273.0> Feature flags:   [x] implicit_default_bindings
2021-03-22 13:38:32.772 [info] <0.273.0> Feature flags:   [x] maintenance_mode_status
2021-03-22 13:38:32.772 [info] <0.273.0> Feature flags:   [x] quorum_queue
2021-03-22 13:38:32.772 [info] <0.273.0> Feature flags:   [x] user_limits
2021-03-22 13:38:32.772 [info] <0.273.0> Feature flags:   [x] virtual_host_metadata
2021-03-22 13:38:32.772 [info] <0.273.0> Feature flags: feature flag states written to disk: yes
2021-03-22 13:38:32.783 [info] <0.273.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2021-03-22 13:38:32.783 [info] <0.273.0> Successfully synced tables from a peer
2021-03-22 13:38:32.818 [info] <0.273.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2021-03-22 13:38:32.818 [info] <0.273.0> Successfully synced tables from a peer
2021-03-22 13:38:32.820 [info] <0.273.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.822 [info] <0.273.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.823 [info] <0.273.0> Setting up a table for per-user connection counting on this node: 'tracked_connection_table_per_user_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.824 [info] <0.273.0> Will register with peer discovery backend rabbit_peer_discovery_k8s
2021-03-22 13:38:32.834 [info] <0.273.0> Running boot step database_sync defined by app rabbit
2021-03-22 13:38:32.834 [info] <0.273.0> Running boot step feature_flags defined by app rabbit
2021-03-22 13:38:32.834 [info] <0.273.0> Running boot step codec_correctness_check defined by app rabbit
2021-03-22 13:38:32.834 [info] <0.273.0> Running boot step external_infrastructure defined by app rabbit
2021-03-22 13:38:32.834 [info] <0.273.0> Running boot step rabbit_registry defined by app rabbit
2021-03-22 13:38:32.835 [info] <0.273.0> Running boot step rabbit_auth_mechanism_cr_demo defined by app rabbit
2021-03-22 13:38:32.835 [info] <0.273.0> Running boot step rabbit_queue_location_random defined by app rabbit
2021-03-22 13:38:32.835 [info] <0.273.0> Running boot step rabbit_event defined by app rabbit
2021-03-22 13:38:32.835 [info] <0.273.0> Running boot step rabbit_auth_mechanism_amqplain defined by app rabbit
2021-03-22 13:38:32.835 [info] <0.273.0> Running boot step rabbit_auth_mechanism_plain defined by app rabbit
2021-03-22 13:38:32.836 [info] <0.273.0> Running boot step rabbit_exchange_type_direct defined by app rabbit
2021-03-22 13:38:32.836 [info] <0.273.0> Running boot step rabbit_exchange_type_fanout defined by app rabbit
2021-03-22 13:38:32.836 [info] <0.273.0> Running boot step rabbit_exchange_type_headers defined by app rabbit
2021-03-22 13:38:32.836 [info] <0.273.0> Running boot step rabbit_exchange_type_topic defined by app rabbit
2021-03-22 13:38:32.836 [info] <0.273.0> Running boot step rabbit_mirror_queue_mode_all defined by app rabbit
2021-03-22 13:38:32.836 [info] <0.273.0> Running boot step rabbit_mirror_queue_mode_exactly defined by app rabbit
2021-03-22 13:38:32.836 [info] <0.273.0> Running boot step rabbit_mirror_queue_mode_nodes defined by app rabbit
2021-03-22 13:38:32.836 [info] <0.273.0> Running boot step rabbit_priority_queue defined by app rabbit
2021-03-22 13:38:32.836 [info] <0.273.0> Priority queues enabled, real BQ is rabbit_variable_queue
2021-03-22 13:38:32.837 [info] <0.273.0> Running boot step rabbit_queue_location_client_local defined by app rabbit
2021-03-22 13:38:32.837 [info] <0.273.0> Running boot step rabbit_queue_location_min_masters defined by app rabbit
2021-03-22 13:38:32.837 [info] <0.273.0> Running boot step kernel_ready defined by app rabbit
2021-03-22 13:38:32.837 [info] <0.273.0> Running boot step ldap_pool defined by app rabbitmq_auth_backend_ldap
2021-03-22 13:38:32.837 [info] <0.363.0> Starting worker pool 'ldap_pool' with 64 processes in it
2021-03-22 13:38:32.842 [info] <0.273.0> Running boot step rabbit_sysmon_minder defined by app rabbit
2021-03-22 13:38:32.842 [info] <0.273.0> Running boot step rabbit_epmd_monitor defined by app rabbit
2021-03-22 13:38:32.843 [info] <0.648.0> epmd monitor knows us, inter-node communication (distribution) port: 25672
2021-03-22 13:38:32.843 [info] <0.273.0> Running boot step guid_generator defined by app rabbit
2021-03-22 13:38:32.846 [info] <0.273.0> Running boot step rabbit_node_monitor defined by app rabbit
2021-03-22 13:38:32.861 [info] <0.652.0> Starting rabbit_node_monitor
2021-03-22 13:38:32.861 [info] <0.273.0> Running boot step delegate_sup defined by app rabbit
2021-03-22 13:38:32.863 [info] <0.273.0> Running boot step rabbit_memory_monitor defined by app rabbit
2021-03-22 13:38:32.863 [info] <0.273.0> Running boot step core_initialized defined by app rabbit
2021-03-22 13:38:32.863 [info] <0.273.0> Running boot step upgrade_queues defined by app rabbit
2021-03-22 13:38:32.891 [info] <0.273.0> message_store upgrades: 1 to apply
2021-03-22 13:38:32.891 [info] <0.273.0> message_store upgrades: Applying rabbit_variable_queue:move_messages_to_vhost_store
2021-03-22 13:38:32.891 [info] <0.273.0> message_store upgrades: No durable queues found. Skipping message store migration
2021-03-22 13:38:32.891 [info] <0.273.0> message_store upgrades: Removing the old message store data
2021-03-22 13:38:32.892 [info] <0.273.0> message_store upgrades: All upgrades applied successfully
2021-03-22 13:38:32.920 [info] <0.273.0> Running boot step channel_tracking defined by app rabbit
2021-03-22 13:38:32.922 [info] <0.273.0> Setting up a table for channel tracking on this node: 'tracked_channel_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.924 [info] <0.273.0> Setting up a table for channel tracking on this node: 'tracked_channel_table_per_user_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.924 [info] <0.273.0> Running boot step rabbit_channel_tracking_handler defined by app rabbit
2021-03-22 13:38:32.924 [info] <0.273.0> Running boot step connection_tracking defined by app rabbit
2021-03-22 13:38:32.926 [info] <0.273.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.927 [info] <0.273.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.929 [info] <0.273.0> Setting up a table for per-user connection counting on this node: 'tracked_connection_table_per_user_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.929 [info] <0.273.0> Running boot step rabbit_connection_tracking_handler defined by app rabbit
2021-03-22 13:38:32.929 [info] <0.273.0> Running boot step rabbit_exchange_parameters defined by app rabbit
2021-03-22 13:38:32.929 [info] <0.273.0> Running boot step rabbit_mirror_queue_misc defined by app rabbit
2021-03-22 13:38:32.930 [info] <0.273.0> Running boot step rabbit_policies defined by app rabbit
2021-03-22 13:38:32.931 [info] <0.273.0> Running boot step rabbit_policy defined by app rabbit
2021-03-22 13:38:32.931 [info] <0.273.0> Running boot step rabbit_queue_location_validator defined by app rabbit
2021-03-22 13:38:32.931 [info] <0.273.0> Running boot step rabbit_quorum_memory_manager defined by app rabbit
2021-03-22 13:38:32.931 [info] <0.273.0> Running boot step rabbit_vhost_limit defined by app rabbit
2021-03-22 13:38:32.931 [info] <0.273.0> Running boot step rabbit_mgmt_reset_handler defined by app rabbitmq_management
2021-03-22 13:38:32.931 [info] <0.273.0> Running boot step rabbit_mgmt_db_handler defined by app rabbitmq_management_agent
2021-03-22 13:38:32.931 [info] <0.273.0> Management plugin: using rates mode 'basic'
2021-03-22 13:38:32.934 [info] <0.273.0> Running boot step recovery defined by app rabbit
2021-03-22 13:38:32.953 [info] <0.698.0> Making sure data directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
2021-03-22 13:38:32.957 [info] <0.698.0> Starting message stores for vhost '/'
2021-03-22 13:38:32.957 [info] <0.702.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2021-03-22 13:38:32.959 [info] <0.698.0> Started message store of type transient for vhost '/'
2021-03-22 13:38:32.959 [info] <0.706.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2021-03-22 13:38:32.960 [warning] <0.706.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": rebuilding indices from scratch
2021-03-22 13:38:32.961 [info] <0.698.0> Started message store of type persistent for vhost '/'
2021-03-22 13:38:32.963 [info] <0.273.0> Running boot step empty_db_check defined by app rabbit
2021-03-22 13:38:32.963 [info] <0.273.0> Will not seed default virtual host and user: have definitions to load...
2021-03-22 13:38:32.963 [info] <0.273.0> Running boot step rabbit_looking_glass defined by app rabbit
2021-03-22 13:38:32.963 [info] <0.273.0> Running boot step rabbit_core_metrics_gc defined by app rabbit
2021-03-22 13:38:32.963 [info] <0.273.0> Running boot step background_gc defined by app rabbit
2021-03-22 13:38:32.964 [info] <0.273.0> Running boot step routing_ready defined by app rabbit
2021-03-22 13:38:32.964 [info] <0.273.0> Running boot step pre_flight defined by app rabbit
2021-03-22 13:38:32.964 [info] <0.273.0> Running boot step notify_cluster defined by app rabbit
2021-03-22 13:38:32.964 [info] <0.273.0> Running boot step networking defined by app rabbit
2021-03-22 13:38:32.964 [info] <0.273.0> Running boot step definition_import_worker_pool defined by app rabbit
2021-03-22 13:38:32.964 [info] <0.363.0> Starting worker pool 'definition_import_pool' with 8 processes in it
2021-03-22 13:38:32.964 [info] <0.652.0> rabbit on node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' up
2021-03-22 13:38:32.965 [info] <0.273.0> Running boot step cluster_name defined by app rabbit
2021-03-22 13:38:32.965 [info] <0.273.0> Running boot step direct_client defined by app rabbit
2021-03-22 13:38:32.965 [info] <0.273.0> Running boot step rabbit_management_load_definitions defined by app rabbitmq_management
2021-03-22 13:38:32.965 [info] <0.741.0> Resetting node maintenance status
2021-03-22 13:38:32.965 [info] <0.44.0> Application rabbit started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.972 [info] <0.652.0> rabbit on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' up
2021-03-22 13:38:32.982 [info] <0.44.0> Application rabbitmq_management_agent started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.982 [info] <0.44.0> Application cowlib started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.982 [info] <0.44.0> Application cowboy started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.983 [info] <0.44.0> Application rabbitmq_web_dispatch started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:32.983 [info] <0.44.0> Application amqp_client started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:33.014 [info] <0.801.0> Management plugin: HTTP (non-TLS) listener started on port 15672
2021-03-22 13:38:33.014 [info] <0.907.0> Statistics database started.
2021-03-22 13:38:33.014 [info] <0.906.0> Starting worker pool 'management_worker_pool' with 3 processes in it
2021-03-22 13:38:33.015 [info] <0.44.0> Application rabbitmq_management started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:33.040 [info] <0.44.0> Application prometheus started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:33.040 [info] <0.44.0> Application eldap started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:33.040 [warning] <0.922.0> LDAP plugin loaded, but rabbit_auth_backend_ldap is not in the list of auth_backends. LDAP auth will not work.
2021-03-22 13:38:33.040 [info] <0.44.0> Application rabbitmq_auth_backend_ldap started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:33.041 [info] <0.928.0> Peer discovery: enabling node cleanup (will only log warnings). Check interval: 10 seconds.
2021-03-22 13:38:33.041 [info] <0.44.0> Application rabbitmq_peer_discovery_common started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:33.041 [info] <0.44.0> Application rabbitmq_peer_discovery_k8s started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:33.062 [info] <0.936.0> Prometheus metrics: HTTP (non-TLS) listener started on port 9419
2021-03-22 13:38:33.062 [info] <0.44.0> Application rabbitmq_prometheus started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:38:33.062 [info] <0.741.0> Applying definitions from file at '/app/load_definition.json'
2021-03-22 13:38:33.062 [info] <0.741.0> Asked to import definitions. Acting user: rmq-internal
2021-03-22 13:38:33.063 [info] <0.741.0> Importing concurrently 1 users...
2021-03-22 13:38:33.069 [info] <0.732.0> Successfully changed password for user 'user'
2021-03-22 13:38:33.075 [info] <0.732.0> Successfully set user tags for user 'user' to [administrator]
2021-03-22 13:38:33.075 [info] <0.741.0> Importing concurrently 1 vhosts...
2021-03-22 13:38:33.075 [info] <0.741.0> Importing sequentially 1 policies...
2021-03-22 13:38:33.092 [info] <0.741.0> Ready to start client connection listeners
2021-03-22 13:38:33.095 [info] <0.1059.0> started TCP listener on [::]:5672
 completed with 7 plugins.
2021-03-22 13:38:34.073 [info] <0.741.0> Server startup complete; 7 plugins started.
 * rabbitmq_prometheus
 * rabbitmq_peer_discovery_k8s
 * rabbitmq_peer_discovery_common
 * rabbitmq_auth_backend_ldap
 * rabbitmq_management
 * rabbitmq_web_dispatch
 * rabbitmq_management_agent
2021-03-22 13:38:34.073 [info] <0.741.0> Resetting node maintenance status
Applications 'rabbit_and_plugins' are running on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local'
rabbitmq 13:38:34.17 DEBUG ==> Changing password for user 'user'...
Changing password for user "user" ...
2021-03-22 13:38:34.882 [info] <0.1201.0> Successfully changed password for user 'user'
rabbitmq 13:38:34.89 INFO ==> Stopping RabbitMQ...
Stopping and halting node rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local ...
2021-03-22 13:38:35.621 [info] <0.1207.0> RabbitMQ is asked to stop...
2021-03-22 13:38:36.073 [info] <0.1207.0> Stopping RabbitMQ applications and their dependencies in the following order: rabbitmq_management amqp_client rabbitmq_prometheus rabbitmq_web_dispatch cowboy cowlib rabbitmq_management_agent rabbitmq_peer_discovery_k8s rabbitmq_peer_discovery_common rabbitmq_auth_backend_ldap rabbit rabbitmq_prelaunch rabbit_common prometheus sysmon_handler os_mon ra mnesia
2021-03-22 13:38:36.073 [info] <0.1207.0> Stopping application 'rabbitmq_management'
2021-03-22 13:38:36.076 [warning] <0.793.0> HTTP listener registry could not find context rabbitmq_management_tls
2021-03-22 13:38:36.078 [info] <0.1207.0> Stopping application 'amqp_client'
2021-03-22 13:38:36.078 [info] <0.44.0> Application rabbitmq_management exited with reason: stopped
2021-03-22 13:38:36.078 [info] <0.44.0> Application rabbitmq_management exited with reason: stopped
2021-03-22 13:38:36.079 [info] <0.1207.0> Stopping application 'rabbitmq_prometheus'
2021-03-22 13:38:36.079 [info] <0.44.0> Application amqp_client exited with reason: stopped
2021-03-22 13:38:36.079 [info] <0.44.0> Application amqp_client exited with reason: stopped
2021-03-22 13:38:36.082 [warning] <0.793.0> HTTP listener registry could not find context rabbitmq_prometheus_tls
2021-03-22 13:38:36.083 [info] <0.1207.0> Stopping application 'rabbitmq_web_dispatch'
2021-03-22 13:38:36.083 [info] <0.44.0> Application rabbitmq_prometheus exited with reason: stopped
2021-03-22 13:38:36.083 [info] <0.44.0> Application rabbitmq_prometheus exited with reason: stopped
2021-03-22 13:38:36.085 [info] <0.44.0> Application rabbitmq_web_dispatch exited with reason: stopped
2021-03-22 13:38:36.085 [info] <0.1207.0> Stopping application 'cowboy'
2021-03-22 13:38:36.085 [info] <0.44.0> Application rabbitmq_web_dispatch exited with reason: stopped
2021-03-22 13:38:36.086 [info] <0.1207.0> Stopping application 'cowlib'
2021-03-22 13:38:36.086 [info] <0.44.0> Application cowboy exited with reason: stopped
2021-03-22 13:38:36.086 [info] <0.44.0> Application cowboy exited with reason: stopped
2021-03-22 13:38:36.086 [info] <0.44.0> Application cowlib exited with reason: stopped
2021-03-22 13:38:36.086 [info] <0.1207.0> Stopping application 'rabbitmq_management_agent'
2021-03-22 13:38:36.086 [info] <0.44.0> Application cowlib exited with reason: stopped
2021-03-22 13:38:36.088 [info] <0.1207.0> Stopping application 'rabbitmq_peer_discovery_k8s'
2021-03-22 13:38:36.088 [info] <0.44.0> Application rabbitmq_management_agent exited with reason: stopped
2021-03-22 13:38:36.088 [info] <0.44.0> Application rabbitmq_management_agent exited with reason: stopped
2021-03-22 13:38:36.089 [info] <0.1207.0> Stopping application 'rabbitmq_peer_discovery_common'
2021-03-22 13:38:36.089 [info] <0.44.0> Application rabbitmq_peer_discovery_k8s exited with reason: stopped
2021-03-22 13:38:36.089 [info] <0.44.0> Application rabbitmq_peer_discovery_k8s exited with reason: stopped
2021-03-22 13:38:36.091 [info] <0.1207.0> Stopping application 'rabbitmq_auth_backend_ldap'
2021-03-22 13:38:36.091 [info] <0.44.0> Application rabbitmq_peer_discovery_common exited with reason: stopped
2021-03-22 13:38:36.091 [info] <0.44.0> Application rabbitmq_peer_discovery_common exited with reason: stopped
2021-03-22 13:38:36.092 [info] <0.1207.0> Stopping application 'rabbit'
2021-03-22 13:38:36.092 [info] <0.44.0> Application rabbitmq_auth_backend_ldap exited with reason: stopped
2021-03-22 13:38:36.092 [info] <0.44.0> Application rabbitmq_auth_backend_ldap exited with reason: stopped
2021-03-22 13:38:36.092 [info] <0.273.0> Will unregister with peer discovery backend rabbit_peer_discovery_k8s
2021-03-22 13:38:36.092 [info] <0.1059.0> stopped TCP listener on [::]:5672
2021-03-22 13:38:36.094 [info] <0.1208.0> Closing all connections in vhost '/' on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' because the vhost is stopping
2021-03-22 13:38:36.094 [info] <0.706.0> Stopping message store for directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent'
2021-03-22 13:38:36.099 [info] <0.706.0> Message store for directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent' is stopped
2021-03-22 13:38:36.099 [info] <0.702.0> Stopping message store for directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_transient'
2021-03-22 13:38:36.103 [info] <0.702.0> Message store for directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_transient' is stopped
2021-03-22 13:38:36.111 [info] <0.44.0> Application rabbit exited with reason: stopped
2021-03-22 13:38:36.111 [info] <0.1207.0> Stopping application 'rabbitmq_prelaunch'
2021-03-22 13:38:36.111 [info] <0.44.0> Application rabbit exited with reason: stopped
2021-03-22 13:38:36.113 [info] <0.44.0> Application rabbitmq_prelaunch exited with reason: stopped
2021-03-22 13:38:36.113 [info] <0.1207.0> Stopping application 'rabbit_common'
2021-03-22 13:38:36.113 [info] <0.44.0> Application rabbitmq_prelaunch exited with reason: stopped
2021-03-22 13:38:36.113 [info] <0.44.0> Application rabbit_common exited with reason: stopped
2021-03-22 13:38:36.113 [info] <0.1207.0> Stopping application 'prometheus'
2021-03-22 13:38:36.114 [info] <0.44.0> Application rabbit_common exited with reason: stopped
2021-03-22 13:38:36.115 [info] <0.44.0> Application prometheus exited with reason: stopped
2021-03-22 13:38:36.115 [info] <0.1207.0> Stopping application 'sysmon_handler'
2021-03-22 13:38:36.115 [info] <0.44.0> Application prometheus exited with reason: stopped
2021-03-22 13:38:36.116 [info] <0.1207.0> Stopping application 'os_mon'
2021-03-22 13:38:36.116 [info] <0.44.0> Application sysmon_handler exited with reason: stopped
2021-03-22 13:38:36.116 [info] <0.44.0> Application sysmon_handler exited with reason: stopped
2021-03-22 13:38:36.117 [info] <0.1207.0> Stopping application 'ra'
2021-03-22 13:38:36.117 [info] <0.44.0> Application os_mon exited with reason: stopped
2021-03-22 13:38:36.117 [info] <0.44.0> Application os_mon exited with reason: stopped
2021-03-22 13:38:36.120 [info] <0.44.0> Application ra exited with reason: stopped
2021-03-22 13:38:36.120 [info] <0.1207.0> Stopping application 'mnesia'
2021-03-22 13:38:36.120 [info] <0.44.0> Application ra exited with reason: stopped
Gracefully halting Erlang VM
2021-03-22 13:38:36.122 [info] <0.1207.0> Successfully stopped RabbitMQ and its dependencies
2021-03-22 13:38:36.122 [info] <0.1207.0> Halting Erlang VM with the following applications: eldap lager observer_cli stdout_formatter gen_batch_server aten ranch cuttlefish inets credentials_obfuscation recon jsx goldrush xmerl tools syntax_tools ssl public_key asn1 crypto compiler sasl stdlib kernel
2021-03-22 13:38:36.122 [info] <0.44.0> Application mnesia exited with reason: stopped
2021-03-22 13:38:36.122 [info] <0.44.0> Application mnesia exited with reason: stopped
rabbitmq 13:38:38.17 INFO ==> ** RabbitMQ setup finished!
```
** rabbitmq 13:38:38.19 INFO ==> ** Starting RabbitMQ ** Configuring logger redirection 2021-03-22 13:38:47.902 [debug] <0.284.0> Lager installed handler error_logger_lager_h into error_logger 2021-03-22 13:38:47.967 [debug] <0.287.0> Lager installed handler lager_forwarder_backend into error_logger_lager_event 2021-03-22 13:38:47.967 [debug] <0.290.0> Lager installed handler lager_forwarder_backend into rabbit_log_lager_event 2021-03-22 13:38:47.967 [debug] <0.296.0> Lager installed handler lager_forwarder_backend into rabbit_log_connection_lager_event 2021-03-22 13:38:47.967 [debug] <0.293.0> Lager installed handler lager_forwarder_backend into rabbit_log_channel_lager_event 2021-03-22 13:38:47.967 [debug] <0.299.0> Lager installed handler lager_forwarder_backend into rabbit_log_feature_flags_lager_event 2021-03-22 13:38:47.967 [debug] <0.302.0> Lager installed handler lager_forwarder_backend into rabbit_log_federation_lager_event 2021-03-22 13:38:47.967 [debug] <0.305.0> Lager installed handler lager_forwarder_backend into rabbit_log_ldap_lager_event 2021-03-22 13:38:47.968 [debug] <0.308.0> Lager installed handler lager_forwarder_backend into rabbit_log_mirroring_lager_event 2021-03-22 13:38:47.970 [debug] <0.311.0> Lager installed handler lager_forwarder_backend into rabbit_log_prelaunch_lager_event 2021-03-22 13:38:47.972 [debug] <0.314.0> Lager installed handler lager_forwarder_backend into rabbit_log_queue_lager_event 2021-03-22 13:38:47.973 [debug] <0.317.0> Lager installed handler lager_forwarder_backend into rabbit_log_ra_lager_event 2021-03-22 13:38:47.976 [debug] <0.320.0> Lager installed handler lager_forwarder_backend into rabbit_log_shovel_lager_event 2021-03-22 13:38:47.979 [debug] <0.323.0> Lager installed handler lager_forwarder_backend into rabbit_log_upgrade_lager_event 2021-03-22 13:38:48.064 [info] <0.44.0> Application lager started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:48.403 [debug] <0.280.0> 
Lager installed handler lager_backend_throttle into lager_event 2021-03-22 13:38:51.809 [info] <0.44.0> Application mnesia started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:51.812 [info] <0.44.0> Application mnesia exited with reason: stopped 2021-03-22 13:38:51.812 [info] <0.44.0> Application mnesia exited with reason: stopped 2021-03-22 13:38:51.867 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:38:51.867 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:38:51.867 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:38:51.867 [info] <0.269.0> Feature flags: [x] implicit_default_bindings 2021-03-22 13:38:51.867 [info] <0.269.0> Feature flags: [x] maintenance_mode_status 2021-03-22 13:38:51.867 [info] <0.269.0> Feature flags: [x] quorum_queue 2021-03-22 13:38:51.868 [info] <0.269.0> Feature flags: [x] user_limits 2021-03-22 13:38:51.868 [info] <0.269.0> Feature flags: [x] virtual_host_metadata 2021-03-22 13:38:51.868 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:38:51.987 [info] <0.44.0> Application mnesia started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:51.988 [info] <0.269.0> Starting RabbitMQ 3.8.14 on Erlang 22.3 Copyright (c) 2007-2021 VMware, Inc. or its affiliates. Licensed under the MPL 2.0. Website: https://rabbitmq.com ## ## RabbitMQ 3.8.14 ## ## ########## Copyright (c) 2007-2021 VMware, Inc. or its affiliates. ###### ## ########## Licensed under the MPL 2.0. 
Website: https://rabbitmq.com Doc guides: https://rabbitmq.com/documentation.html Support: https://rabbitmq.com/contact.html Tutorials: https://rabbitmq.com/getstarted.html Monitoring: https://rabbitmq.com/monitoring.html Logs: Config file(s): /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf Starting broker...2021-03-22 13:38:51.989 [info] <0.269.0> node : rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local home dir : /opt/bitnami/rabbitmq/.rabbitmq config file(s) : /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf cookie hash : RxahyJ80werAKhDjXt6lvA== log(s) : database dir : /bitnami/rabbitmq/mnesia/rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local 2021-03-22 13:38:54.018 [info] <0.269.0> Feature flag `drop_unroutable_metric`: mark as enabled=true 2021-03-22 13:38:54.029 [info] <0.269.0> Feature flag `empty_basic_get_metric`: mark as enabled=true 2021-03-22 13:38:54.289 [info] <0.269.0> Running boot step pre_boot defined by app rabbit 2021-03-22 13:38:54.289 [info] <0.269.0> Running boot step rabbit_core_metrics defined by app rabbit 2021-03-22 13:38:54.290 [info] <0.269.0> Running boot step rabbit_alarm defined by app rabbit 2021-03-22 13:38:54.294 [info] <0.499.0> Memory high watermark set to 409 MiB (429496729 bytes) of 1024 MiB (1073741824 bytes) total 2021-03-22 13:38:54.300 [info] <0.501.0> Enabling free disk space monitoring 2021-03-22 13:38:54.300 [info] <0.501.0> Disk free limit set to 50MB 2021-03-22 13:38:54.305 [info] <0.269.0> Running boot step code_server_cache defined by app rabbit 2021-03-22 13:38:54.305 [info] <0.269.0> Running boot step file_handle_cache defined by app rabbit 2021-03-22 13:38:54.305 [info] <0.504.0> Limiting to approx 1048479 file handles (943629 sockets) 2021-03-22 13:38:54.305 [info] <0.505.0> FHC read buffering: OFF 2021-03-22 13:38:54.305 [info] <0.505.0> FHC write buffering: ON 2021-03-22 13:38:54.306 [info] <0.269.0> Running boot step worker_pool defined by app rabbit 2021-03-22 13:38:54.306 [info] <0.406.0> 
Will use 8 processes for default worker pool 2021-03-22 13:38:54.306 [info] <0.406.0> Starting worker pool 'worker_pool' with 8 processes in it 2021-03-22 13:38:54.307 [info] <0.269.0> Running boot step database defined by app rabbit 2021-03-22 13:38:54.311 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:38:54.313 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:38:54.314 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:38:54.314 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:38:54.314 [warning] <0.269.0> Feature flags: the previous instance of this node must have failed to write the `feature_flags` file at `/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local-feature_flags`: 2021-03-22 13:38:54.314 [warning] <0.269.0> Feature flags: - list of previously disabled feature flags now marked as such: [empty_basic_get_metric] 2021-03-22 13:38:54.326 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:38:54.326 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:38:54.361 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:38:54.361 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:38:54.361 [info] <0.269.0> Will register with peer discovery backend rabbit_peer_discovery_k8s 2021-03-22 13:38:54.392 [info] <0.269.0> Running boot step database_sync defined by app rabbit 2021-03-22 13:38:54.392 [info] <0.269.0> Running boot step feature_flags defined by app rabbit 2021-03-22 13:38:54.392 [info] <0.269.0> Running boot step codec_correctness_check defined by app rabbit 2021-03-22 13:38:54.392 [info] <0.269.0> Running boot step external_infrastructure defined by app rabbit 2021-03-22 13:38:54.392 [info] <0.269.0> Running boot step rabbit_registry defined by app rabbit 2021-03-22 13:38:54.392 [info] <0.269.0> Running boot step 
rabbit_auth_mechanism_cr_demo defined by app rabbit 2021-03-22 13:38:54.393 [info] <0.269.0> Running boot step rabbit_queue_location_random defined by app rabbit 2021-03-22 13:38:54.393 [info] <0.269.0> Running boot step rabbit_event defined by app rabbit 2021-03-22 13:38:54.393 [info] <0.269.0> Running boot step rabbit_auth_mechanism_amqplain defined by app rabbit 2021-03-22 13:38:54.393 [info] <0.269.0> Running boot step rabbit_auth_mechanism_plain defined by app rabbit 2021-03-22 13:38:54.393 [info] <0.269.0> Running boot step rabbit_exchange_type_direct defined by app rabbit 2021-03-22 13:38:54.393 [info] <0.269.0> Running boot step rabbit_exchange_type_fanout defined by app rabbit 2021-03-22 13:38:54.393 [info] <0.269.0> Running boot step rabbit_exchange_type_headers defined by app rabbit 2021-03-22 13:38:54.394 [info] <0.269.0> Running boot step rabbit_exchange_type_topic defined by app rabbit 2021-03-22 13:38:54.394 [info] <0.269.0> Running boot step rabbit_mirror_queue_mode_all defined by app rabbit 2021-03-22 13:38:54.394 [info] <0.269.0> Running boot step rabbit_mirror_queue_mode_exactly defined by app rabbit 2021-03-22 13:38:54.394 [info] <0.269.0> Running boot step rabbit_mirror_queue_mode_nodes defined by app rabbit 2021-03-22 13:38:54.394 [info] <0.269.0> Running boot step rabbit_priority_queue defined by app rabbit 2021-03-22 13:38:54.394 [info] <0.269.0> Priority queues enabled, real BQ is rabbit_variable_queue 2021-03-22 13:38:54.394 [info] <0.269.0> Running boot step rabbit_queue_location_client_local defined by app rabbit 2021-03-22 13:38:54.394 [info] <0.269.0> Running boot step rabbit_queue_location_min_masters defined by app rabbit 2021-03-22 13:38:54.394 [info] <0.269.0> Running boot step kernel_ready defined by app rabbit 2021-03-22 13:38:54.394 [info] <0.269.0> Running boot step ldap_pool defined by app rabbitmq_auth_backend_ldap 2021-03-22 13:38:54.395 [info] <0.406.0> Starting worker pool 'ldap_pool' with 64 processes in it 2021-03-22 
13:38:54.400 [info] <0.269.0> Running boot step rabbit_sysmon_minder defined by app rabbit 2021-03-22 13:38:54.400 [info] <0.269.0> Running boot step rabbit_epmd_monitor defined by app rabbit 2021-03-22 13:38:54.401 [info] <0.604.0> epmd monitor knows us, inter-node communication (distribution) port: 25672 2021-03-22 13:38:54.401 [info] <0.269.0> Running boot step guid_generator defined by app rabbit 2021-03-22 13:38:54.404 [info] <0.269.0> Running boot step rabbit_node_monitor defined by app rabbit 2021-03-22 13:38:54.405 [info] <0.608.0> Starting rabbit_node_monitor 2021-03-22 13:38:54.406 [info] <0.269.0> Running boot step delegate_sup defined by app rabbit 2021-03-22 13:38:54.407 [info] <0.269.0> Running boot step rabbit_memory_monitor defined by app rabbit 2021-03-22 13:38:54.408 [info] <0.269.0> Running boot step core_initialized defined by app rabbit 2021-03-22 13:38:54.408 [info] <0.269.0> Running boot step upgrade_queues defined by app rabbit 2021-03-22 13:38:54.434 [info] <0.269.0> Running boot step channel_tracking defined by app rabbit 2021-03-22 13:38:54.436 [info] <0.269.0> Setting up a table for channel tracking on this node: 'tracked_channel_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.438 [info] <0.269.0> Setting up a table for channel tracking on this node: 'tracked_channel_table_per_user_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.438 [info] <0.269.0> Running boot step rabbit_channel_tracking_handler defined by app rabbit 2021-03-22 13:38:54.438 [info] <0.269.0> Running boot step connection_tracking defined by app rabbit 2021-03-22 13:38:54.440 [info] <0.269.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.441 [info] <0.269.0> Setting up a table for per-vhost connection counting on this node: 
'tracked_connection_per_vhost_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.443 [info] <0.269.0> Setting up a table for per-user connection counting on this node: 'tracked_connection_table_per_user_on_node_rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.443 [info] <0.269.0> Running boot step rabbit_connection_tracking_handler defined by app rabbit 2021-03-22 13:38:54.443 [info] <0.269.0> Running boot step rabbit_exchange_parameters defined by app rabbit 2021-03-22 13:38:54.443 [info] <0.269.0> Running boot step rabbit_mirror_queue_misc defined by app rabbit 2021-03-22 13:38:54.444 [info] <0.269.0> Running boot step rabbit_policies defined by app rabbit 2021-03-22 13:38:54.445 [info] <0.269.0> Running boot step rabbit_policy defined by app rabbit 2021-03-22 13:38:54.445 [info] <0.269.0> Running boot step rabbit_queue_location_validator defined by app rabbit 2021-03-22 13:38:54.445 [info] <0.269.0> Running boot step rabbit_quorum_memory_manager defined by app rabbit 2021-03-22 13:38:54.445 [info] <0.269.0> Running boot step rabbit_vhost_limit defined by app rabbit 2021-03-22 13:38:54.445 [info] <0.269.0> Running boot step rabbit_mgmt_reset_handler defined by app rabbitmq_management 2021-03-22 13:38:54.445 [info] <0.269.0> Running boot step rabbit_mgmt_db_handler defined by app rabbitmq_management_agent 2021-03-22 13:38:54.445 [info] <0.269.0> Management plugin: using rates mode 'basic' 2021-03-22 13:38:54.448 [info] <0.269.0> Running boot step recovery defined by app rabbit 2021-03-22 13:38:54.469 [info] <0.653.0> Making sure data directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists 2021-03-22 13:38:54.473 [info] <0.653.0> Starting message stores for vhost '/' 2021-03-22 13:38:54.473 [info] <0.657.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using 
rabbit_msg_store_ets_index to provide index 2021-03-22 13:38:54.475 [info] <0.653.0> Started message store of type transient for vhost '/' 2021-03-22 13:38:54.476 [info] <0.661.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index 2021-03-22 13:38:54.477 [warning] <0.661.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent" : recovery terms differ from present rebuilding indices from scratch 2021-03-22 13:38:54.478 [info] <0.653.0> Started message store of type persistent for vhost '/' 2021-03-22 13:38:54.481 [info] <0.653.0> Mirrored queue 'integrations.qa' in vhost '/': Adding mirror on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local': <0.683.0> 2021-03-22 13:38:54.481 [info] <0.653.0> Mirrored queue 'promotion-listen-rules.qa' in vhost '/': Adding mirror on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local': <0.687.0> 2021-03-22 13:38:54.481 [info] <0.653.0> Mirrored queue 'promotion-service.dev' in vhost '/': Adding mirror on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local': <0.691.0> 2021-03-22 13:38:54.482 [info] <0.653.0> Mirrored queue 'backoffice-services.qa' in vhost '/': Adding mirror on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local': <0.695.0> 2021-03-22 13:38:54.482 [info] <0.653.0> Mirrored queue 'integrations.dev' in vhost '/': Adding mirror on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local': <0.699.0> 2021-03-22 13:38:54.483 [info] <0.653.0> Mirrored queue 'promotion-listen-rules.dev' in vhost '/': Adding mirror on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local': <0.703.0> 2021-03-22 13:38:54.483 [info] <0.653.0> Mirrored queue 'backoffice-services.dev' in vhost '/': Adding mirror on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local': <0.707.0> 2021-03-22 13:38:54.483 [info] <0.653.0> Mirrored queue 'promotion-service-rpc.qa' in vhost '/': Adding mirror on node 
'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local': <0.711.0> 2021-03-22 13:38:54.484 [info] <0.653.0> Mirrored queue 'promotion-service-rpc.dev' in vhost '/': Adding mirror on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local': <0.715.0> 2021-03-22 13:38:54.484 [info] <0.653.0> Mirrored queue 'promotion-service.qa' in vhost '/': Adding mirror on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local': <0.719.0> 2021-03-22 13:38:54.485 [info] <0.269.0> Running boot step empty_db_check defined by app rabbit 2021-03-22 13:38:54.485 [info] <0.269.0> Will not seed default virtual host and user: have definitions to load... 2021-03-22 13:38:54.485 [info] <0.269.0> Running boot step rabbit_looking_glass defined by app rabbit 2021-03-22 13:38:54.485 [info] <0.269.0> Running boot step rabbit_core_metrics_gc defined by app rabbit 2021-03-22 13:38:54.485 [info] <0.269.0> Running boot step background_gc defined by app rabbit 2021-03-22 13:38:54.485 [info] <0.269.0> Running boot step routing_ready defined by app rabbit 2021-03-22 13:38:54.485 [info] <0.269.0> Running boot step pre_flight defined by app rabbit 2021-03-22 13:38:54.485 [info] <0.269.0> Running boot step notify_cluster defined by app rabbit 2021-03-22 13:38:54.485 [info] <0.269.0> Running boot step networking defined by app rabbit 2021-03-22 13:38:54.486 [info] <0.269.0> Running boot step definition_import_worker_pool defined by app rabbit 2021-03-22 13:38:54.486 [info] <0.406.0> Starting worker pool 'definition_import_pool' with 8 processes in it 2021-03-22 13:38:54.486 [info] <0.608.0> rabbit on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' up 2021-03-22 13:38:54.487 [info] <0.269.0> Running boot step cluster_name defined by app rabbit 2021-03-22 13:38:54.487 [info] <0.269.0> Running boot step direct_client defined by app rabbit 2021-03-22 13:38:54.487 [info] <0.269.0> Running boot step rabbit_management_load_definitions defined by app rabbitmq_management 
2021-03-22 13:38:54.487 [info] <0.742.0> Resetting node maintenance status 2021-03-22 13:38:54.487 [info] <0.44.0> Application rabbit started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.531 [info] <0.608.0> rabbit on node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' up 2021-03-22 13:38:54.546 [info] <0.44.0> Application rabbitmq_management_agent started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.546 [info] <0.44.0> Application cowlib started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.546 [info] <0.44.0> Application cowboy started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.547 [info] <0.44.0> Application rabbitmq_web_dispatch started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.547 [info] <0.44.0> Application amqp_client started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.591 [info] <0.825.0> Management plugin: HTTP (non-TLS) listener started on port 15672 2021-03-22 13:38:54.591 [info] <0.935.0> Statistics database started. 2021-03-22 13:38:54.591 [info] <0.934.0> Starting worker pool 'management_worker_pool' with 3 processes in it 2021-03-22 13:38:54.592 [info] <0.44.0> Application rabbitmq_management started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.618 [info] <0.44.0> Application prometheus started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.618 [info] <0.44.0> Application eldap started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.618 [warning] <0.955.0> LDAP plugin loaded, but rabbit_auth_backend_ldap is not in the list of auth_backends. LDAP auth will not work. 
2021-03-22 13:38:54.618 [info] <0.44.0> Application rabbitmq_auth_backend_ldap started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.619 [info] <0.961.0> Peer discovery: enabling node cleanup (will only log warnings). Check interval: 10 seconds. 2021-03-22 13:38:54.619 [info] <0.44.0> Application rabbitmq_peer_discovery_common started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.619 [info] <0.44.0> Application rabbitmq_peer_discovery_k8s started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.624 [info] <0.969.0> Prometheus metrics: HTTP (non-TLS) listener started on port 9419 2021-03-22 13:38:54.624 [info] <0.44.0> Application rabbitmq_prometheus started on node 'rabbit@rabbitmq-2.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:38:54.624 [info] <0.742.0> Applying definitions from file at '/app/load_definition.json' 2021-03-22 13:38:54.624 [info] <0.742.0> Asked to import definitions. Acting user: rmq-internal 2021-03-22 13:38:54.624 [info] <0.742.0> Importing concurrently 1 users... 2021-03-22 13:38:54.630 [info] <0.729.0> Successfully changed password for user 'user' 2021-03-22 13:38:54.635 [info] <0.729.0> Successfully set user tags for user 'user' to [administrator] 2021-03-22 13:38:54.635 [info] <0.742.0> Importing concurrently 1 vhosts... 2021-03-22 13:38:54.636 [info] <0.742.0> Importing sequentially 1 policies... 2021-03-22 13:38:54.641 [info] <0.742.0> Ready to start client connection listeners 2021-03-22 13:38:54.643 [info] <0.1091.0> started TCP listener on [::]:5672 completed with 7 plugins. 2021-03-22 13:38:55.508 [info] <0.742.0> Server startup complete; 7 plugins started. 
* rabbitmq_prometheus * rabbitmq_peer_discovery_k8s * rabbitmq_peer_discovery_common * rabbitmq_auth_backend_ldap * rabbitmq_management * rabbitmq_web_dispatch * rabbitmq_management_agent 2021-03-22 13:38:55.509 [info] <0.742.0> Resetting node maintenance status 2021-03-22 13:39:29.326 [info] <0.608.0> rabbit on node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' down 2021-03-22 13:39:29.332 [info] <0.608.0> Keeping rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local listeners: the node is already back 2021-03-22 13:39:29.416 [info] <0.699.0> Mirrored queue 'integrations.dev' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:29.423 [info] <0.608.0> node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' down: connection_closed 2021-03-22 13:39:29.425 [info] <0.715.0> Mirrored queue 'promotion-service-rpc.dev' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:29.425 [info] <0.695.0> Mirrored queue 'backoffice-services.qa' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:29.425 [info] <0.719.0> Mirrored queue 'promotion-service.qa' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:29.425 [info] <0.711.0> Mirrored queue 'promotion-service-rpc.qa' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:29.425 [info] <0.683.0> Mirrored queue 'integrations.qa' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:29.428 [info] <0.703.0> Mirrored queue 'promotion-listen-rules.dev' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:29.435 [info] <0.691.0> Mirrored queue 'promotion-service.dev' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:34.627 [warning] <0.961.0> Peer discovery: node rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local is unreachable 2021-03-22 
13:39:44.635 [info] <0.961.0> k8s endpoint listing returned nodes not yet ready: rabbitmq-1 2021-03-22 13:39:44.635 [warning] <0.961.0> Peer discovery: node rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local is unreachable 2021-03-22 13:39:49.531 [info] <0.608.0> node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' up 2021-03-22 13:39:49.963 [info] <0.608.0> rabbit on node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' up 2021-03-22 13:39:52.577 [info] <0.608.0> rabbit on node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' down 2021-03-22 13:39:52.582 [info] <0.608.0> Keeping rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local listeners: the node is already back 2021-03-22 13:39:52.671 [info] <0.608.0> node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' down: connection_closed 2021-03-22 13:39:52.690 [info] <0.687.0> Mirrored queue 'promotion-listen-rules.qa' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:52.694 [info] <0.715.0> Mirrored queue 'promotion-service-rpc.dev' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:52.694 [info] <0.683.0> Mirrored queue 'integrations.qa' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:52.702 [info] <0.703.0> Mirrored queue 'promotion-listen-rules.dev' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:52.726 [info] <0.699.0> Mirrored queue 'integrations.dev' in vhost '/': Secondary replica of queue detected replica to be down 2021-03-22 13:39:54.634 [info] <0.961.0> k8s endpoint listing returned nodes not yet ready: rabbitmq-1 2021-03-22 13:39:54.634 [warning] <0.961.0> Peer discovery: node rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local is unreachable 2021-03-22 13:40:04.638 [info] <0.961.0> k8s endpoint listing returned nodes not yet ready: rabbitmq-1 2021-03-22 13:40:04.638 [warning] <0.961.0> Peer 
discovery: node rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local is unreachable
2021-03-22 13:40:07.865 [info] <0.608.0> node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' up
2021-03-22 13:40:08.079 [info] <0.1408.0> accepting AMQP connection <0.1408.0> (10.3.144.23:47846 -> 10.3.180.8:5672)
2021-03-22 13:40:08.826 [info] <0.1408.0> connection <0.1408.0> (10.3.144.23:47846 -> 10.3.180.8:5672): user 'user' authenticated and granted access to vhost '/'
2021-03-22 13:40:09.132 [info] <0.608.0> rabbit on node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' up
2021-03-22 13:40:15.092 [warning] <0.1408.0> closing AMQP connection <0.1408.0> (10.3.144.23:47846 -> 10.3.180.8:5672, vhost: '/', user: 'user'): client unexpectedly closed TCP connection
2021-03-22 13:40:15.094 [info] <0.1463.0> Closing all channels from connection '10.3.144.23:47846 -> 10.3.180.8:5672' because it has been closed
2021-03-22 13:40:37.670 [info] <0.608.0> rabbit on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' down
2021-03-22 13:40:37.675 [info] <0.608.0> Keeping rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local listeners: the node is already back
2021-03-22 13:40:37.763 [info] <0.699.0> Mirrored queue 'integrations.dev' in vhost '/': Promoting mirror to master
2021-03-22 13:40:37.770 [info] <0.703.0> Mirrored queue 'promotion-listen-rules.dev' in vhost '/': Secondary replica of queue detected replica to be down
2021-03-22 13:40:37.770 [info] <0.695.0> Mirrored queue 'backoffice-services.qa' in vhost '/': Secondary replica of queue detected replica to be down
2021-03-22 13:40:37.771 [info] <0.703.0> Mirrored queue 'promotion-listen-rules.dev' in vhost '/': Promoting mirror to master
2021-03-22 13:40:37.771 [info] <0.695.0> Mirrored queue 'backoffice-services.qa' in vhost '/': Promoting mirror to master
2021-03-22 13:40:37.773 [info] <0.719.0> Mirrored queue 'promotion-service.qa' in vhost '/': Promoting mirror to master
2021-03-22 13:40:37.775 [info] <0.687.0> Mirrored queue 'promotion-listen-rules.qa' in vhost '/': Promoting mirror to master
2021-03-22 13:40:37.777 [info] <0.715.0> Mirrored queue 'promotion-service-rpc.dev' in vhost '/': Promoting mirror to master
2021-03-22 13:40:37.777 [info] <0.691.0> Mirrored queue 'promotion-service.dev' in vhost '/': Promoting mirror to master
2021-03-22 13:40:37.777 [info] <0.711.0> Mirrored queue 'promotion-service-rpc.qa' in vhost '/': Secondary replica of queue detected replica to be down
2021-03-22 13:40:37.777 [info] <0.711.0> Mirrored queue 'promotion-service-rpc.qa' in vhost '/': Promoting mirror to master
2021-03-22 13:40:37.781 [info] <0.707.0> Mirrored queue 'backoffice-services.dev' in vhost '/': Promoting mirror to master
2021-03-22 13:40:37.784 [info] <0.683.0> Mirrored queue 'integrations.qa' in vhost '/': Promoting mirror to master
2021-03-22 13:40:37.786 [info] <0.608.0> node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' down: connection_closed
2021-03-22 13:40:42.563 [info] <0.1581.0> accepting AMQP connection <0.1581.0> (10.3.202.25:40450 -> 10.3.180.8:5672)
2021-03-22 13:40:42.610 [info] <0.1581.0> connection <0.1581.0> (10.3.202.25:40450 -> 10.3.180.8:5672): user 'user' authenticated and granted access to vhost '/'
2021-03-22 13:40:44.627 [warning] <0.961.0> Peer discovery: node rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local is unreachable
2021-03-22 13:40:54.627 [warning] <0.961.0> Peer discovery: node rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local is unreachable
2021-03-22 13:41:04.635 [info] <0.961.0> k8s endpoint listing returned nodes not yet ready: rabbitmq-0
2021-03-22 13:41:04.635 [warning] <0.961.0> Peer discovery: node rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local is unreachable
2021-03-22 13:41:05.178 [info] <0.608.0> node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' up
2021-03-22 13:41:05.578 [info] <0.608.0> rabbit on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' up
2021-03-22 13:41:08.613 [info] <0.608.0> rabbit on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' down
2021-03-22 13:41:08.618 [info] <0.608.0> Keeping rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local listeners: the node is already back
2021-03-22 13:41:08.708 [info] <0.1565.0> Mirrored queue 'promotion-listen-rules.qa' in vhost '/': Primary replica of queue detected replica to be down
2021-03-22 13:41:08.710 [info] <0.1572.0> Mirrored queue 'integrations.qa' in vhost '/': Primary replica of queue detected replica to be down
2021-03-22 13:41:08.714 [info] <0.1549.0> Mirrored queue 'integrations.dev' in vhost '/': Primary replica of queue detected replica to be down
2021-03-22 13:41:08.717 [info] <0.608.0> node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' down: connection_closed
2021-03-22 13:41:08.721 [info] <0.1564.0> Mirrored queue 'promotion-service.qa' in vhost '/': Primary replica of queue detected replica to be down
2021-03-22 13:41:08.723 [info] <0.1571.0> Mirrored queue 'backoffice-services.dev' in vhost '/': Primary replica of queue detected replica to be down
2021-03-22 13:41:08.725 [info] <0.1567.0> Mirrored queue 'promotion-service.dev' in vhost '/': Primary replica of queue detected replica to be down
2021-03-22 13:41:14.628 [info] <0.961.0> k8s endpoint listing returned nodes not yet ready: rabbitmq-0
2021-03-22 13:41:14.628 [warning] <0.961.0> Peer discovery: node rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local is unreachable
2021-03-22 13:41:16.560 [info] <0.1760.0> accepting AMQP connection <0.1760.0> (10.3.208.5:58822 -> 10.3.180.8:5672)
2021-03-22 13:41:16.562 [info] <0.1760.0> connection <0.1760.0> (10.3.208.5:58822 -> 10.3.180.8:5672): user 'user' authenticated and granted access to vhost '/'
2021-03-22 13:41:22.935 [info] <0.608.0> node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' up
2021-03-22 13:41:24.643 [info] <0.608.0> rabbit on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' up
```

Everything looks fine right after install, so I kill the pods and wait for them to come back up:

```shell
kubectl delete po -l app.kubernetes.io/instance=rabbitmq -n <namespace> --force --grace-period=0
```
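For context, the startup log below contains the line `Will not seed default virtual host and user: have definitions to load...`, i.e. once a definitions file is present, RabbitMQ skips seeding the chart's default user and relies entirely on the imported definitions. A possible workaround (untested here, shown only as a sketch) would be to declare the user explicitly in the definitions file with a fixed `password_hash`, so the credentials no longer depend on chart-generated values across restarts. The hash below is a placeholder, not a real value (a real one can be produced with `rabbitmqctl hash_password <password>`); also note that the policy pattern's `\.` must be escaped as `\\.` to be valid JSON:

```json
{
  "users": [
    {
      "name": "user",
      "password_hash": "<base64 salted hash placeholder>",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    }
  ],
  "vhosts": [
    { "name": "/" }
  ],
  "permissions": [
    { "user": "user", "vhost": "/", "configure": ".*", "write": ".*", "read": ".*" }
  ],
  "policies": [
    {
      "name": "ha-all",
      "pattern": ".*\\..*",
      "vhost": "/",
      "definition": { "ha-mode": "all" }
    }
  ]
}
```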
Log after updating the load definitions:

```shell
rabbitmq 13:47:56.14 rabbitmq 13:47:56.14 Welcome to the Bitnami rabbitmq container rabbitmq 13:47:56.14 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-rabbitmq rabbitmq 13:47:56.14 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-rabbitmq/issues rabbitmq 13:47:56.14 rabbitmq 13:47:56.14 INFO ==> ** Starting RabbitMQ setup ** rabbitmq 13:47:56.16 INFO ==> Validating settings in RABBITMQ_* env vars.. rabbitmq 13:47:56.18 INFO ==> Initializing RabbitMQ... rabbitmq 13:47:56.18 DEBUG ==> Creating environment file... rabbitmq 13:47:56.18 DEBUG ==> Creating enabled_plugins file... rabbitmq 13:47:56.19 DEBUG ==> Creating Erlang cookie... rabbitmq 13:47:56.19 DEBUG ==> Ensuring expected directories/files exist... rabbitmq 13:47:56.21 INFO ==> Starting RabbitMQ in background... Waiting for erlang distribution on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' while OS process '45' is running Configuring logger redirection Waiting for applications 'rabbit_and_plugins' to start on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:01.199 [debug] <0.298.0> Lager installed handler error_logger_lager_h into error_logger 2021-03-22 13:48:01.206 [debug] <0.301.0> Lager installed handler lager_forwarder_backend into error_logger_lager_event 2021-03-22 13:48:01.206 [debug] <0.304.0> Lager installed handler lager_forwarder_backend into rabbit_log_lager_event 2021-03-22 13:48:01.206 [debug] <0.307.0> Lager installed handler lager_forwarder_backend into rabbit_log_channel_lager_event 2021-03-22 13:48:01.206 [debug] <0.310.0> Lager installed handler lager_forwarder_backend into rabbit_log_connection_lager_event 2021-03-22 13:48:01.206 [debug] <0.313.0> Lager installed handler lager_forwarder_backend into rabbit_log_feature_flags_lager_event 2021-03-22 13:48:01.208 [debug] <0.316.0> Lager installed handler lager_forwarder_backend 
into rabbit_log_federation_lager_event 2021-03-22 13:48:01.209 [debug] <0.319.0> Lager installed handler lager_forwarder_backend into rabbit_log_ldap_lager_event 2021-03-22 13:48:01.210 [debug] <0.322.0> Lager installed handler lager_forwarder_backend into rabbit_log_mirroring_lager_event 2021-03-22 13:48:01.211 [debug] <0.325.0> Lager installed handler lager_forwarder_backend into rabbit_log_prelaunch_lager_event 2021-03-22 13:48:01.212 [debug] <0.328.0> Lager installed handler lager_forwarder_backend into rabbit_log_queue_lager_event 2021-03-22 13:48:01.214 [debug] <0.331.0> Lager installed handler lager_forwarder_backend into rabbit_log_ra_lager_event 2021-03-22 13:48:01.217 [debug] <0.334.0> Lager installed handler lager_forwarder_backend into rabbit_log_shovel_lager_event 2021-03-22 13:48:01.218 [debug] <0.337.0> Lager installed handler lager_forwarder_backend into rabbit_log_upgrade_lager_event 2021-03-22 13:48:01.307 [info] <0.44.0> Application lager started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:01.575 [info] <0.44.0> Application mnesia started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:01.576 [info] <0.269.0> Starting RabbitMQ 3.8.14 on Erlang 22.3 Copyright (c) 2007-2021 VMware, Inc. or its affiliates. Licensed under the MPL 2.0. Website: https://rabbitmq.com ## ## RabbitMQ 3.8.14 ## ## ########## Copyright (c) 2007-2021 VMware, Inc. or its affiliates. ###### ## ########## Licensed under the MPL 2.0. 
Website: https://rabbitmq.com Doc guides: https://rabbitmq.com/documentation.html Support: https://rabbitmq.com/contact.html Tutorials: https://rabbitmq.com/getstarted.html Monitoring: https://rabbitmq.com/monitoring.html Logs: Config file(s): /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf Starting broker...2021-03-22 13:48:01.597 [info] <0.269.0> node : rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local home dir : /opt/bitnami/rabbitmq/.rabbitmq config file(s) : /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf cookie hash : RxahyJ80werAKhDjXt6lvA== log(s) : database dir : /bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local 2021-03-22 13:48:01.699 [debug] <0.294.0> Lager installed handler lager_backend_throttle into lager_event 2021-03-22 13:48:02.665 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:02.665 [info] <0.269.0> Feature flags: [ ] drop_unroutable_metric 2021-03-22 13:48:02.665 [info] <0.269.0> Feature flags: [ ] empty_basic_get_metric 2021-03-22 13:48:02.665 [info] <0.269.0> Feature flags: [ ] implicit_default_bindings 2021-03-22 13:48:02.666 [info] <0.269.0> Feature flags: [ ] maintenance_mode_status 2021-03-22 13:48:02.666 [info] <0.269.0> Feature flags: [ ] quorum_queue 2021-03-22 13:48:02.666 [info] <0.269.0> Feature flags: [ ] user_limits 2021-03-22 13:48:02.666 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:02.666 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:05.855 [info] <0.269.0> Running boot step pre_boot defined by app rabbit 2021-03-22 13:48:05.856 [info] <0.269.0> Running boot step rabbit_core_metrics defined by app rabbit 2021-03-22 13:48:05.856 [info] <0.269.0> Running boot step rabbit_alarm defined by app rabbit 2021-03-22 13:48:05.860 [info] <0.412.0> Memory high watermark set to 409 MiB (429496729 bytes) of 1024 MiB (1073741824 bytes) total 2021-03-22 13:48:05.867 [info] <0.414.0> Enabling free 
disk space monitoring 2021-03-22 13:48:05.867 [info] <0.414.0> Disk free limit set to 50MB 2021-03-22 13:48:05.871 [info] <0.269.0> Running boot step code_server_cache defined by app rabbit 2021-03-22 13:48:05.871 [info] <0.269.0> Running boot step file_handle_cache defined by app rabbit 2021-03-22 13:48:05.872 [info] <0.417.0> Limiting to approx 65439 file handles (58893 sockets) 2021-03-22 13:48:05.872 [info] <0.418.0> FHC read buffering: OFF 2021-03-22 13:48:05.872 [info] <0.418.0> FHC write buffering: ON 2021-03-22 13:48:05.872 [info] <0.269.0> Running boot step worker_pool defined by app rabbit 2021-03-22 13:48:05.873 [info] <0.359.0> Will use 8 processes for default worker pool 2021-03-22 13:48:05.873 [info] <0.359.0> Starting worker pool 'worker_pool' with 8 processes in it 2021-03-22 13:48:05.874 [info] <0.269.0> Running boot step database defined by app rabbit 2021-03-22 13:48:05.874 [info] <0.269.0> Node database directory at /bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local is empty. Assuming we need to join an existing cluster or initialise from scratch... 2021-03-22 13:48:05.874 [info] <0.269.0> Configured peer discovery backend: rabbit_peer_discovery_k8s 2021-03-22 13:48:05.874 [info] <0.269.0> Will try to lock with peer discovery backend rabbit_peer_discovery_k8s 2021-03-22 13:48:05.874 [info] <0.269.0> Peer discovery backend does not support locking, falling back to randomized delay 2021-03-22 13:48:05.874 [info] <0.269.0> Peer discovery backend rabbit_peer_discovery_k8s supports registration. 2021-03-22 13:48:05.874 [info] <0.269.0> Will wait for 263 milliseconds before proceeding with registration... 2021-03-22 13:48:06.164 [info] <0.269.0> k8s endpoint listing returned nodes not yet ready: rabbitmq-0 2021-03-22 13:48:06.164 [info] <0.269.0> All discovered existing cluster peers: 2021-03-22 13:48:06.164 [info] <0.269.0> Discovered no peer nodes to cluster with. 
Some discovery backends can filter nodes out based on a readiness criteria. Enabling debug logging might help troubleshoot. 2021-03-22 13:48:06.167 [info] <0.44.0> Application mnesia exited with reason: stopped 2021-03-22 13:48:06.167 [info] <0.44.0> Application mnesia exited with reason: stopped 2021-03-22 13:48:06.186 [info] <0.44.0> Application mnesia started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.253 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:48:06.253 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:48:06.282 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:48:06.282 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:48:06.282 [info] <0.269.0> Feature flag `drop_unroutable_metric`: supported, attempt to enable... 2021-03-22 13:48:06.282 [info] <0.269.0> Feature flag `drop_unroutable_metric`: mark as enabled=state_changing 2021-03-22 13:48:06.291 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.291 [info] <0.269.0> Feature flags: [~] drop_unroutable_metric 2021-03-22 13:48:06.291 [info] <0.269.0> Feature flags: [ ] empty_basic_get_metric 2021-03-22 13:48:06.291 [info] <0.269.0> Feature flags: [ ] implicit_default_bindings 2021-03-22 13:48:06.291 [info] <0.269.0> Feature flags: [ ] maintenance_mode_status 2021-03-22 13:48:06.291 [info] <0.269.0> Feature flags: [ ] quorum_queue 2021-03-22 13:48:06.292 [info] <0.269.0> Feature flags: [ ] user_limits 2021-03-22 13:48:06.292 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.292 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.301 [info] <0.269.0> Feature flag `drop_unroutable_metric`: mark as enabled=true 2021-03-22 13:48:06.311 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.311 [info] <0.269.0> Feature flags: [x] 
drop_unroutable_metric 2021-03-22 13:48:06.311 [info] <0.269.0> Feature flags: [ ] empty_basic_get_metric 2021-03-22 13:48:06.311 [info] <0.269.0> Feature flags: [ ] implicit_default_bindings 2021-03-22 13:48:06.312 [info] <0.269.0> Feature flags: [ ] maintenance_mode_status 2021-03-22 13:48:06.312 [info] <0.269.0> Feature flags: [ ] quorum_queue 2021-03-22 13:48:06.312 [info] <0.269.0> Feature flags: [ ] user_limits 2021-03-22 13:48:06.312 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.312 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.320 [info] <0.269.0> Feature flag `empty_basic_get_metric`: supported, attempt to enable... 2021-03-22 13:48:06.321 [info] <0.269.0> Feature flag `empty_basic_get_metric`: mark as enabled=state_changing 2021-03-22 13:48:06.329 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.330 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.330 [info] <0.269.0> Feature flags: [~] empty_basic_get_metric 2021-03-22 13:48:06.330 [info] <0.269.0> Feature flags: [ ] implicit_default_bindings 2021-03-22 13:48:06.330 [info] <0.269.0> Feature flags: [ ] maintenance_mode_status 2021-03-22 13:48:06.330 [info] <0.269.0> Feature flags: [ ] quorum_queue 2021-03-22 13:48:06.330 [info] <0.269.0> Feature flags: [ ] user_limits 2021-03-22 13:48:06.330 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.330 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.338 [info] <0.269.0> Feature flag `empty_basic_get_metric`: mark as enabled=true 2021-03-22 13:48:06.349 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.349 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.349 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:06.349 [info] <0.269.0> Feature flags: [ ] 
implicit_default_bindings 2021-03-22 13:48:06.349 [info] <0.269.0> Feature flags: [ ] maintenance_mode_status 2021-03-22 13:48:06.349 [info] <0.269.0> Feature flags: [ ] quorum_queue 2021-03-22 13:48:06.349 [info] <0.269.0> Feature flags: [ ] user_limits 2021-03-22 13:48:06.349 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.349 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.357 [info] <0.269.0> Feature flag `implicit_default_bindings`: supported, attempt to enable... 2021-03-22 13:48:06.358 [info] <0.269.0> Feature flag `implicit_default_bindings`: mark as enabled=state_changing 2021-03-22 13:48:06.367 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.367 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.395 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:06.395 [info] <0.269.0> Feature flags: [~] implicit_default_bindings 2021-03-22 13:48:06.395 [info] <0.269.0> Feature flags: [ ] maintenance_mode_status 2021-03-22 13:48:06.395 [info] <0.269.0> Feature flags: [ ] quorum_queue 2021-03-22 13:48:06.395 [info] <0.269.0> Feature flags: [ ] user_limits 2021-03-22 13:48:06.395 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.395 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.404 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 0 retries left 2021-03-22 13:48:06.404 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:48:06.404 [info] <0.269.0> Feature flag `implicit_default_bindings`: mark as enabled=true 2021-03-22 13:48:06.415 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.415 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.416 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:06.416 [info] <0.269.0> Feature flags: [x] 
implicit_default_bindings 2021-03-22 13:48:06.416 [info] <0.269.0> Feature flags: [ ] maintenance_mode_status 2021-03-22 13:48:06.416 [info] <0.269.0> Feature flags: [ ] quorum_queue 2021-03-22 13:48:06.416 [info] <0.269.0> Feature flags: [ ] user_limits 2021-03-22 13:48:06.416 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.416 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.424 [info] <0.269.0> Feature flag `maintenance_mode_status`: supported, attempt to enable... 2021-03-22 13:48:06.424 [info] <0.269.0> Feature flag `maintenance_mode_status`: mark as enabled=state_changing 2021-03-22 13:48:06.434 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.434 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.434 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:06.434 [info] <0.269.0> Feature flags: [x] implicit_default_bindings 2021-03-22 13:48:06.434 [info] <0.269.0> Feature flags: [~] maintenance_mode_status 2021-03-22 13:48:06.434 [info] <0.269.0> Feature flags: [ ] quorum_queue 2021-03-22 13:48:06.434 [info] <0.269.0> Feature flags: [ ] user_limits 2021-03-22 13:48:06.434 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.434 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.443 [info] <0.269.0> Creating table rabbit_node_maintenance_states for feature flag `maintenance_mode_status` 2021-03-22 13:48:06.447 [info] <0.269.0> Feature flag `maintenance_mode_status`: mark as enabled=true 2021-03-22 13:48:06.457 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.457 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.457 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:06.458 [info] <0.269.0> Feature flags: [x] implicit_default_bindings 2021-03-22 13:48:06.458 
[info] <0.269.0> Feature flags: [x] maintenance_mode_status 2021-03-22 13:48:06.458 [info] <0.269.0> Feature flags: [ ] quorum_queue 2021-03-22 13:48:06.458 [info] <0.269.0> Feature flags: [ ] user_limits 2021-03-22 13:48:06.458 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.458 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.467 [info] <0.269.0> Feature flag `quorum_queue`: supported, attempt to enable... 2021-03-22 13:48:06.467 [info] <0.269.0> Feature flag `quorum_queue`: mark as enabled=state_changing 2021-03-22 13:48:06.504 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.504 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.504 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:06.504 [info] <0.269.0> Feature flags: [x] implicit_default_bindings 2021-03-22 13:48:06.505 [info] <0.269.0> Feature flags: [x] maintenance_mode_status 2021-03-22 13:48:06.505 [info] <0.269.0> Feature flags: [~] quorum_queue 2021-03-22 13:48:06.505 [info] <0.269.0> Feature flags: [ ] user_limits 2021-03-22 13:48:06.505 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.505 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.514 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:48:06.514 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:48:06.514 [info] <0.269.0> Feature flag `quorum_queue`: migrating Mnesia table rabbit_queue... 2021-03-22 13:48:06.526 [info] <0.269.0> Feature flag `quorum_queue`: migrating Mnesia table rabbit_durable_queue... 
2021-03-22 13:48:06.538 [info] <0.269.0> Feature flag `quorum_queue`: Mnesia tables migration done 2021-03-22 13:48:06.538 [info] <0.269.0> Feature flag `quorum_queue`: mark as enabled=true 2021-03-22 13:48:06.549 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.549 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.549 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:06.549 [info] <0.269.0> Feature flags: [x] implicit_default_bindings 2021-03-22 13:48:06.549 [info] <0.269.0> Feature flags: [x] maintenance_mode_status 2021-03-22 13:48:06.549 [info] <0.269.0> Feature flags: [x] quorum_queue 2021-03-22 13:48:06.549 [info] <0.269.0> Feature flags: [ ] user_limits 2021-03-22 13:48:06.549 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.549 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.558 [info] <0.269.0> Feature flag `user_limits`: supported, attempt to enable... 
2021-03-22 13:48:06.558 [info] <0.269.0> Feature flag `user_limits`: mark as enabled=state_changing 2021-03-22 13:48:06.596 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.596 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.597 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:06.597 [info] <0.269.0> Feature flags: [x] implicit_default_bindings 2021-03-22 13:48:06.597 [info] <0.269.0> Feature flags: [x] maintenance_mode_status 2021-03-22 13:48:06.597 [info] <0.269.0> Feature flags: [x] quorum_queue 2021-03-22 13:48:06.597 [info] <0.269.0> Feature flags: [~] user_limits 2021-03-22 13:48:06.597 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.597 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.606 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:48:06.606 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:48:06.619 [info] <0.269.0> Feature flag `user_limits`: mark as enabled=true 2021-03-22 13:48:06.629 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.629 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.629 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:06.629 [info] <0.269.0> Feature flags: [x] implicit_default_bindings 2021-03-22 13:48:06.629 [info] <0.269.0> Feature flags: [x] maintenance_mode_status 2021-03-22 13:48:06.629 [info] <0.269.0> Feature flags: [x] quorum_queue 2021-03-22 13:48:06.629 [info] <0.269.0> Feature flags: [x] user_limits 2021-03-22 13:48:06.629 [info] <0.269.0> Feature flags: [ ] virtual_host_metadata 2021-03-22 13:48:06.629 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.638 [info] <0.269.0> Feature flag `virtual_host_metadata`: supported, attempt to enable... 
2021-03-22 13:48:06.638 [info] <0.269.0> Feature flag `virtual_host_metadata`: mark as enabled=state_changing 2021-03-22 13:48:06.648 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.648 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.648 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:06.648 [info] <0.269.0> Feature flags: [x] implicit_default_bindings 2021-03-22 13:48:06.648 [info] <0.269.0> Feature flags: [x] maintenance_mode_status 2021-03-22 13:48:06.648 [info] <0.269.0> Feature flags: [x] quorum_queue 2021-03-22 13:48:06.648 [info] <0.269.0> Feature flags: [x] user_limits 2021-03-22 13:48:06.648 [info] <0.269.0> Feature flags: [~] virtual_host_metadata 2021-03-22 13:48:06.648 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.657 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:48:06.657 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:48:06.700 [info] <0.269.0> Feature flag `virtual_host_metadata`: mark as enabled=true 2021-03-22 13:48:06.710 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:06.710 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:06.710 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:06.710 [info] <0.269.0> Feature flags: [x] implicit_default_bindings 2021-03-22 13:48:06.710 [info] <0.269.0> Feature flags: [x] maintenance_mode_status 2021-03-22 13:48:06.711 [info] <0.269.0> Feature flags: [x] quorum_queue 2021-03-22 13:48:06.711 [info] <0.269.0> Feature flags: [x] user_limits 2021-03-22 13:48:06.711 [info] <0.269.0> Feature flags: [x] virtual_host_metadata 2021-03-22 13:48:06.711 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:06.720 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:48:06.720 
[info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:48:06.749 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:48:06.749 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:48:06.749 [info] <0.269.0> Will register with peer discovery backend rabbit_peer_discovery_k8s 2021-03-22 13:48:06.757 [info] <0.269.0> Running boot step database_sync defined by app rabbit 2021-03-22 13:48:06.757 [info] <0.269.0> Running boot step feature_flags defined by app rabbit 2021-03-22 13:48:06.757 [info] <0.269.0> Running boot step codec_correctness_check defined by app rabbit 2021-03-22 13:48:06.757 [info] <0.269.0> Running boot step external_infrastructure defined by app rabbit 2021-03-22 13:48:06.757 [info] <0.269.0> Running boot step rabbit_registry defined by app rabbit 2021-03-22 13:48:06.757 [info] <0.269.0> Running boot step rabbit_auth_mechanism_cr_demo defined by app rabbit 2021-03-22 13:48:06.758 [info] <0.269.0> Running boot step rabbit_queue_location_random defined by app rabbit 2021-03-22 13:48:06.758 [info] <0.269.0> Running boot step rabbit_event defined by app rabbit 2021-03-22 13:48:06.758 [info] <0.269.0> Running boot step rabbit_auth_mechanism_amqplain defined by app rabbit 2021-03-22 13:48:06.758 [info] <0.269.0> Running boot step rabbit_auth_mechanism_plain defined by app rabbit 2021-03-22 13:48:06.758 [info] <0.269.0> Running boot step rabbit_exchange_type_direct defined by app rabbit 2021-03-22 13:48:06.758 [info] <0.269.0> Running boot step rabbit_exchange_type_fanout defined by app rabbit 2021-03-22 13:48:06.758 [info] <0.269.0> Running boot step rabbit_exchange_type_headers defined by app rabbit 2021-03-22 13:48:06.759 [info] <0.269.0> Running boot step rabbit_exchange_type_topic defined by app rabbit 2021-03-22 13:48:06.759 [info] <0.269.0> Running boot step rabbit_mirror_queue_mode_all defined by app rabbit 2021-03-22 13:48:06.759 [info] <0.269.0> Running boot step 
rabbit_mirror_queue_mode_exactly defined by app rabbit 2021-03-22 13:48:06.759 [info] <0.269.0> Running boot step rabbit_mirror_queue_mode_nodes defined by app rabbit 2021-03-22 13:48:06.759 [info] <0.269.0> Running boot step rabbit_priority_queue defined by app rabbit 2021-03-22 13:48:06.759 [info] <0.269.0> Priority queues enabled, real BQ is rabbit_variable_queue 2021-03-22 13:48:06.759 [info] <0.269.0> Running boot step rabbit_queue_location_client_local defined by app rabbit 2021-03-22 13:48:06.759 [info] <0.269.0> Running boot step rabbit_queue_location_min_masters defined by app rabbit 2021-03-22 13:48:06.759 [info] <0.269.0> Running boot step kernel_ready defined by app rabbit 2021-03-22 13:48:06.759 [info] <0.269.0> Running boot step ldap_pool defined by app rabbitmq_auth_backend_ldap 2021-03-22 13:48:06.759 [info] <0.359.0> Starting worker pool 'ldap_pool' with 64 processes in it 2021-03-22 13:48:06.765 [info] <0.269.0> Running boot step rabbit_sysmon_minder defined by app rabbit 2021-03-22 13:48:06.765 [info] <0.269.0> Running boot step rabbit_epmd_monitor defined by app rabbit 2021-03-22 13:48:06.766 [info] <0.756.0> epmd monitor knows us, inter-node communication (distribution) port: 25672 2021-03-22 13:48:06.766 [info] <0.269.0> Running boot step guid_generator defined by app rabbit 2021-03-22 13:48:06.768 [info] <0.269.0> Running boot step rabbit_node_monitor defined by app rabbit 2021-03-22 13:48:06.768 [info] <0.760.0> Starting rabbit_node_monitor 2021-03-22 13:48:06.769 [info] <0.269.0> Running boot step delegate_sup defined by app rabbit 2021-03-22 13:48:06.770 [info] <0.269.0> Running boot step rabbit_memory_monitor defined by app rabbit 2021-03-22 13:48:06.770 [info] <0.269.0> Running boot step core_initialized defined by app rabbit 2021-03-22 13:48:06.770 [info] <0.269.0> Running boot step upgrade_queues defined by app rabbit 2021-03-22 13:48:06.804 [info] <0.269.0> message_store upgrades: 1 to apply 2021-03-22 13:48:06.804 [info] <0.269.0> 
message_store upgrades: Applying rabbit_variable_queue:move_messages_to_vhost_store 2021-03-22 13:48:06.805 [info] <0.269.0> message_store upgrades: No durable queues found. Skipping message store migration 2021-03-22 13:48:06.805 [info] <0.269.0> message_store upgrades: Removing the old message store data 2021-03-22 13:48:06.805 [info] <0.269.0> message_store upgrades: All upgrades applied successfully 2021-03-22 13:48:06.833 [info] <0.269.0> Running boot step channel_tracking defined by app rabbit 2021-03-22 13:48:06.838 [info] <0.269.0> Setting up a table for channel tracking on this node: 'tracked_channel_on_node_rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.841 [info] <0.269.0> Setting up a table for channel tracking on this node: 'tracked_channel_table_per_user_on_node_rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.841 [info] <0.269.0> Running boot step rabbit_channel_tracking_handler defined by app rabbit 2021-03-22 13:48:06.842 [info] <0.269.0> Running boot step connection_tracking defined by app rabbit 2021-03-22 13:48:06.845 [info] <0.269.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.849 [info] <0.269.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.852 [info] <0.269.0> Setting up a table for per-user connection counting on this node: 'tracked_connection_table_per_user_on_node_rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.853 [info] <0.269.0> Running boot step rabbit_connection_tracking_handler defined by app rabbit 2021-03-22 13:48:06.853 [info] <0.269.0> Running boot step rabbit_exchange_parameters defined by app rabbit 2021-03-22 13:48:06.853 [info] <0.269.0> Running boot step rabbit_mirror_queue_misc 
defined by app rabbit 2021-03-22 13:48:06.853 [info] <0.269.0> Running boot step rabbit_policies defined by app rabbit 2021-03-22 13:48:06.854 [info] <0.269.0> Running boot step rabbit_policy defined by app rabbit 2021-03-22 13:48:06.854 [info] <0.269.0> Running boot step rabbit_queue_location_validator defined by app rabbit 2021-03-22 13:48:06.854 [info] <0.269.0> Running boot step rabbit_quorum_memory_manager defined by app rabbit 2021-03-22 13:48:06.854 [info] <0.269.0> Running boot step rabbit_vhost_limit defined by app rabbit 2021-03-22 13:48:06.855 [info] <0.269.0> Running boot step rabbit_mgmt_reset_handler defined by app rabbitmq_management 2021-03-22 13:48:06.855 [info] <0.269.0> Running boot step rabbit_mgmt_db_handler defined by app rabbitmq_management_agent 2021-03-22 13:48:06.855 [info] <0.269.0> Management plugin: using rates mode 'basic' 2021-03-22 13:48:06.855 [info] <0.269.0> Running boot step recovery defined by app rabbit 2021-03-22 13:48:06.856 [info] <0.269.0> Running boot step empty_db_check defined by app rabbit 2021-03-22 13:48:06.856 [info] <0.269.0> Will not seed default virtual host and user: have definitions to load... 
2021-03-22 13:48:06.856 [info] <0.269.0> Running boot step rabbit_looking_glass defined by app rabbit 2021-03-22 13:48:06.857 [info] <0.269.0> Running boot step rabbit_core_metrics_gc defined by app rabbit 2021-03-22 13:48:06.857 [info] <0.269.0> Running boot step background_gc defined by app rabbit 2021-03-22 13:48:06.857 [info] <0.269.0> Running boot step routing_ready defined by app rabbit 2021-03-22 13:48:06.857 [info] <0.269.0> Running boot step pre_flight defined by app rabbit 2021-03-22 13:48:06.857 [info] <0.269.0> Running boot step notify_cluster defined by app rabbit 2021-03-22 13:48:06.857 [info] <0.269.0> Running boot step networking defined by app rabbit 2021-03-22 13:48:06.857 [info] <0.269.0> Running boot step definition_import_worker_pool defined by app rabbit 2021-03-22 13:48:06.857 [info] <0.359.0> Starting worker pool 'definition_import_pool' with 8 processes in it 2021-03-22 13:48:06.858 [info] <0.269.0> Running boot step cluster_name defined by app rabbit 2021-03-22 13:48:06.858 [info] <0.269.0> Initialising internal cluster ID to 'rabbitmq-cluster-id-OwODFKfFUT3Tko4IzA7NKQ' 2021-03-22 13:48:06.860 [info] <0.269.0> Running boot step direct_client defined by app rabbit 2021-03-22 13:48:06.860 [info] <0.269.0> Running boot step rabbit_management_load_definitions defined by app rabbitmq_management 2021-03-22 13:48:06.860 [info] <0.829.0> Resetting node maintenance status 2021-03-22 13:48:06.860 [info] <0.44.0> Application rabbit started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.865 [info] <0.44.0> Application rabbitmq_management_agent started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.865 [info] <0.44.0> Application cowlib started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.865 [info] <0.44.0> Application cowboy started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.865 
[info] <0.44.0> Application rabbitmq_web_dispatch started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.866 [info] <0.44.0> Application amqp_client started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.913 [info] <0.888.0> Management plugin: HTTP (non-TLS) listener started on port 15672 2021-03-22 13:48:06.913 [info] <0.995.0> Statistics database started. 2021-03-22 13:48:06.913 [info] <0.994.0> Starting worker pool 'management_worker_pool' with 3 processes in it 2021-03-22 13:48:06.914 [info] <0.44.0> Application rabbitmq_management started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.939 [info] <0.44.0> Application prometheus started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.939 [info] <0.44.0> Application eldap started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.939 [warning] <0.1009.0> LDAP plugin loaded, but rabbit_auth_backend_ldap is not in the list of auth_backends. LDAP auth will not work. 2021-03-22 13:48:06.939 [info] <0.44.0> Application rabbitmq_auth_backend_ldap started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:06.939 [info] <0.1015.0> Peer discovery: enabling node cleanup (will only log warnings). Check interval: 10 seconds. 
2021-03-22 13:48:06.940 [info] <0.44.0> Application rabbitmq_peer_discovery_common started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:48:06.940 [info] <0.44.0> Application rabbitmq_peer_discovery_k8s started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:48:06.944 [info] <0.1023.0> Prometheus metrics: HTTP (non-TLS) listener started on port 9419
2021-03-22 13:48:06.944 [info] <0.44.0> Application rabbitmq_prometheus started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:48:06.945 [info] <0.829.0> Applying definitions from file at '/app/load_definition.json'
2021-03-22 13:48:06.945 [info] <0.829.0> Asked to import definitions. Acting user: rmq-internal
2021-03-22 13:48:06.945 [info] <0.829.0> Importing concurrently 1 users...
2021-03-22 13:48:06.947 [info] <0.819.0> Created user 'user'
2021-03-22 13:48:06.949 [info] <0.819.0> Successfully set user tags for user 'user' to [administrator]
2021-03-22 13:48:06.949 [info] <0.829.0> Importing concurrently 1 vhosts...
2021-03-22 13:48:06.949 [info] <0.819.0> Adding vhost '/' without a description
2021-03-22 13:48:06.963 [info] <0.1132.0> Making sure data directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
2021-03-22 13:48:06.967 [info] <0.1132.0> Starting message stores for vhost '/'
2021-03-22 13:48:06.967 [info] <0.1136.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2021-03-22 13:48:06.969 [info] <0.1132.0> Started message store of type transient for vhost '/'
2021-03-22 13:48:06.969 [info] <0.1140.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2021-03-22 13:48:06.970 [warning] <0.1140.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": rebuilding indices from scratch
2021-03-22 13:48:06.971 [info] <0.1132.0> Started message store of type persistent for vhost '/'
2021-03-22 13:48:06.972 [info] <0.829.0> Importing sequentially 1 policies...
2021-03-22 13:48:06.977 [info] <0.829.0> Ready to start client connection listeners
2021-03-22 13:48:06.980 [info] <0.1174.0> started TCP listener on [::]:5672
2021-03-22 13:48:07.823 [info] <0.829.0> Server startup complete; 7 plugins started.
 * rabbitmq_prometheus
 * rabbitmq_peer_discovery_k8s
 * rabbitmq_peer_discovery_common
 * rabbitmq_auth_backend_ldap
 * rabbitmq_management
 * rabbitmq_web_dispatch
 * rabbitmq_management_agent
 completed with 7 plugins.
2021-03-22 13:48:07.823 [info] <0.829.0> Resetting node maintenance status
Applications 'rabbit_and_plugins' are running on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local'
rabbitmq 13:48:07.91 DEBUG ==> Changing password for user 'user'...
Changing password for user "user" ...
2021-03-22 13:48:08.619 [info] <0.1192.0> Successfully changed password for user 'user'
rabbitmq 13:48:08.62 INFO ==> Stopping RabbitMQ...
Stopping and halting node rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local ... 2021-03-22 13:48:09.316 [info] <0.1197.0> RabbitMQ is asked to stop... 2021-03-22 13:48:09.729 [info] <0.1197.0> Stopping RabbitMQ applications and their dependencies in the following order: rabbitmq_management amqp_client rabbitmq_prometheus rabbitmq_web_dispatch cowboy cowlib rabbitmq_management_agent rabbitmq_peer_discovery_k8s rabbitmq_peer_discovery_common rabbitmq_auth_backend_ldap rabbit rabbitmq_prelaunch rabbit_common prometheus sysmon_handler os_mon ra mnesia 2021-03-22 13:48:09.729 [info] <0.1197.0> Stopping application 'rabbitmq_management' 2021-03-22 13:48:09.733 [warning] <0.880.0> HTTP listener registry could not find context rabbitmq_management_tls 2021-03-22 13:48:09.735 [info] <0.1197.0> Stopping application 'amqp_client' 2021-03-22 13:48:09.735 [info] <0.44.0> Application rabbitmq_management exited with reason: stopped 2021-03-22 13:48:09.735 [info] <0.44.0> Application rabbitmq_management exited with reason: stopped 2021-03-22 13:48:09.736 [info] <0.1197.0> Stopping application 'rabbitmq_prometheus' 2021-03-22 13:48:09.736 [info] <0.44.0> Application amqp_client exited with reason: stopped 2021-03-22 13:48:09.736 [info] <0.44.0> Application amqp_client exited with reason: stopped 2021-03-22 13:48:09.739 [warning] <0.880.0> HTTP listener registry could not find context rabbitmq_prometheus_tls 2021-03-22 13:48:09.740 [info] <0.1197.0> Stopping application 'rabbitmq_web_dispatch' 2021-03-22 13:48:09.740 [info] <0.44.0> Application rabbitmq_prometheus exited with reason: stopped 2021-03-22 13:48:09.740 [info] <0.44.0> Application rabbitmq_prometheus exited with reason: stopped 2021-03-22 13:48:09.742 [info] <0.44.0> Application rabbitmq_web_dispatch exited with reason: stopped 2021-03-22 13:48:09.742 [info] <0.1197.0> Stopping application 'cowboy' 2021-03-22 13:48:09.742 [info] <0.44.0> Application rabbitmq_web_dispatch exited with reason: stopped 2021-03-22 
13:48:09.743 [info] <0.1197.0> Stopping application 'cowlib' 2021-03-22 13:48:09.743 [info] <0.44.0> Application cowboy exited with reason: stopped 2021-03-22 13:48:09.743 [info] <0.1197.0> Stopping application 'rabbitmq_management_agent' 2021-03-22 13:48:09.743 [info] <0.44.0> Application cowboy exited with reason: stopped 2021-03-22 13:48:09.743 [info] <0.44.0> Application cowlib exited with reason: stopped 2021-03-22 13:48:09.743 [info] <0.44.0> Application cowlib exited with reason: stopped 2021-03-22 13:48:09.745 [info] <0.1197.0> Stopping application 'rabbitmq_peer_discovery_k8s' 2021-03-22 13:48:09.745 [info] <0.44.0> Application rabbitmq_management_agent exited with reason: stopped 2021-03-22 13:48:09.745 [info] <0.44.0> Application rabbitmq_management_agent exited with reason: stopped 2021-03-22 13:48:09.746 [info] <0.44.0> Application rabbitmq_peer_discovery_k8s exited with reason: stopped 2021-03-22 13:48:09.746 [info] <0.1197.0> Stopping application 'rabbitmq_peer_discovery_common' 2021-03-22 13:48:09.746 [info] <0.44.0> Application rabbitmq_peer_discovery_k8s exited with reason: stopped 2021-03-22 13:48:09.747 [info] <0.1197.0> Stopping application 'rabbitmq_auth_backend_ldap' 2021-03-22 13:48:09.747 [info] <0.44.0> Application rabbitmq_peer_discovery_common exited with reason: stopped 2021-03-22 13:48:09.747 [info] <0.44.0> Application rabbitmq_peer_discovery_common exited with reason: stopped 2021-03-22 13:48:09.749 [info] <0.44.0> Application rabbitmq_auth_backend_ldap exited with reason: stopped 2021-03-22 13:48:09.749 [info] <0.1197.0> Stopping application 'rabbit' 2021-03-22 13:48:09.749 [info] <0.44.0> Application rabbitmq_auth_backend_ldap exited with reason: stopped 2021-03-22 13:48:09.749 [info] <0.269.0> Will unregister with peer discovery backend rabbit_peer_discovery_k8s 2021-03-22 13:48:09.749 [info] <0.1174.0> stopped TCP listener on [::]:5672 2021-03-22 13:48:09.750 [info] <0.1198.0> Closing all connections in vhost '/' on node 
'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' because the vhost is stopping 2021-03-22 13:48:09.750 [info] <0.1140.0> Stopping message store for directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent' 2021-03-22 13:48:09.754 [info] <0.1140.0> Message store for directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent' is stopped 2021-03-22 13:48:09.754 [info] <0.1136.0> Stopping message store for directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_transient' 2021-03-22 13:48:09.757 [info] <0.1136.0> Message store for directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L/msg_store_transient' is stopped 2021-03-22 13:48:09.762 [info] <0.44.0> Application rabbit exited with reason: stopped 2021-03-22 13:48:09.762 [info] <0.44.0> Application rabbit exited with reason: stopped 2021-03-22 13:48:09.762 [info] <0.1197.0> Stopping application 'rabbitmq_prelaunch' 2021-03-22 13:48:09.797 [info] <0.44.0> Application rabbitmq_prelaunch exited with reason: stopped 2021-03-22 13:48:09.797 [info] <0.1197.0> Stopping application 'rabbit_common' 2021-03-22 13:48:09.797 [info] <0.44.0> Application rabbitmq_prelaunch exited with reason: stopped 2021-03-22 13:48:09.798 [info] <0.44.0> Application rabbit_common exited with reason: stopped 2021-03-22 13:48:09.798 [info] <0.1197.0> Stopping application 'prometheus' 2021-03-22 13:48:09.798 [info] <0.44.0> Application rabbit_common exited with reason: stopped 2021-03-22 13:48:09.799 [info] <0.44.0> Application prometheus exited with reason: stopped 2021-03-22 13:48:09.799 [info] <0.1197.0> Stopping application 'sysmon_handler' 2021-03-22 
13:48:09.799 [info] <0.44.0> Application prometheus exited with reason: stopped 2021-03-22 13:48:09.801 [info] <0.44.0> Application sysmon_handler exited with reason: stopped 2021-03-22 13:48:09.801 [info] <0.1197.0> Stopping application 'os_mon' 2021-03-22 13:48:09.801 [info] <0.44.0> Application sysmon_handler exited with reason: stopped 2021-03-22 13:48:09.802 [info] <0.44.0> Application os_mon exited with reason: stopped 2021-03-22 13:48:09.802 [info] <0.1197.0> Stopping application 'ra' 2021-03-22 13:48:09.802 [info] <0.44.0> Application os_mon exited with reason: stopped 2021-03-22 13:48:09.804 [info] <0.44.0> Application ra exited with reason: stopped 2021-03-22 13:48:09.804 [info] <0.1197.0> Stopping application 'mnesia' 2021-03-22 13:48:09.804 [info] <0.44.0> Application ra exited with reason: stopped 2021-03-22 13:48:09.807 [info] <0.1197.0> Successfully stopped RabbitMQ and its dependencies 2021-03-22 13:48:09.807 [info] <0.44.0> Application mnesia exited with reason: stopped 2021-03-22 13:48:09.807 [info] <0.44.0> Application mnesia exited with reason: stopped 2021-03-22 13:48:09.807 [info] <0.1197.0> Halting Erlang VM with the following applications: eldap lager observer_cli Gracefully halting Erlang VM stdout_formatter gen_batch_server aten ranch cuttlefish inets credentials_obfuscation recon jsx goldrush xmerl tools syntax_tools ssl public_key asn1 crypto compiler sasl stdlib kernel rabbitmq 13:48:11.82 INFO ==> ** RabbitMQ setup finished! 
** rabbitmq 13:48:11.83 INFO ==> ** Starting RabbitMQ ** Configuring logger redirection 2021-03-22 13:48:21.307 [debug] <0.284.0> Lager installed handler error_logger_lager_h into error_logger 2021-03-22 13:48:21.316 [debug] <0.290.0> Lager installed handler lager_forwarder_backend into rabbit_log_lager_event 2021-03-22 13:48:21.316 [debug] <0.287.0> Lager installed handler lager_forwarder_backend into error_logger_lager_event 2021-03-22 13:48:21.316 [debug] <0.293.0> Lager installed handler lager_forwarder_backend into rabbit_log_channel_lager_event 2021-03-22 13:48:21.316 [debug] <0.299.0> Lager installed handler lager_forwarder_backend into rabbit_log_feature_flags_lager_event 2021-03-22 13:48:21.316 [debug] <0.296.0> Lager installed handler lager_forwarder_backend into rabbit_log_connection_lager_event 2021-03-22 13:48:21.316 [debug] <0.302.0> Lager installed handler lager_forwarder_backend into rabbit_log_federation_lager_event 2021-03-22 13:48:21.318 [debug] <0.305.0> Lager installed handler lager_forwarder_backend into rabbit_log_ldap_lager_event 2021-03-22 13:48:21.319 [debug] <0.308.0> Lager installed handler lager_forwarder_backend into rabbit_log_mirroring_lager_event 2021-03-22 13:48:21.320 [debug] <0.311.0> Lager installed handler lager_forwarder_backend into rabbit_log_prelaunch_lager_event 2021-03-22 13:48:21.322 [debug] <0.314.0> Lager installed handler lager_forwarder_backend into rabbit_log_queue_lager_event 2021-03-22 13:48:21.323 [debug] <0.317.0> Lager installed handler lager_forwarder_backend into rabbit_log_ra_lager_event 2021-03-22 13:48:21.325 [debug] <0.320.0> Lager installed handler lager_forwarder_backend into rabbit_log_shovel_lager_event 2021-03-22 13:48:21.395 [debug] <0.323.0> Lager installed handler lager_forwarder_backend into rabbit_log_upgrade_lager_event 2021-03-22 13:48:21.415 [info] <0.44.0> Application lager started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:21.807 [debug] <0.280.0> 
Lager installed handler lager_backend_throttle into lager_event 2021-03-22 13:48:24.366 [info] <0.44.0> Application mnesia started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:24.366 [info] <0.269.0> Starting RabbitMQ 3.8.14 on Erlang 22.3 Copyright (c) 2007-2021 VMware, Inc. or its affiliates. Licensed under the MPL 2.0. Website: https://rabbitmq.com ## ## RabbitMQ 3.8.14 ## ## ########## Copyright (c) 2007-2021 VMware, Inc. or its affiliates. ###### ## ########## Licensed under the MPL 2.0. Website: https://rabbitmq.com Doc guides: https://rabbitmq.com/documentation.html Support: https://rabbitmq.com/contact.html Tutorials: https://rabbitmq.com/getstarted.html Monitoring: https://rabbitmq.com/monitoring.html Logs: Config file(s): /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf Starting broker...2021-03-22 13:48:24.367 [info] <0.269.0> node : rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local home dir : /opt/bitnami/rabbitmq/.rabbitmq config file(s) : /opt/bitnami/rabbitmq/etc/rabbitmq/rabbitmq.conf cookie hash : RxahyJ80werAKhDjXt6lvA== log(s) : database dir : /bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local 2021-03-22 13:48:25.515 [info] <0.269.0> Feature flags: list of feature flags found: 2021-03-22 13:48:25.515 [info] <0.269.0> Feature flags: [x] drop_unroutable_metric 2021-03-22 13:48:25.515 [info] <0.269.0> Feature flags: [x] empty_basic_get_metric 2021-03-22 13:48:25.515 [info] <0.269.0> Feature flags: [x] implicit_default_bindings 2021-03-22 13:48:25.515 [info] <0.269.0> Feature flags: [x] maintenance_mode_status 2021-03-22 13:48:25.515 [info] <0.269.0> Feature flags: [x] quorum_queue 2021-03-22 13:48:25.515 [info] <0.269.0> Feature flags: [x] user_limits 2021-03-22 13:48:25.515 [info] <0.269.0> Feature flags: [x] virtual_host_metadata 2021-03-22 13:48:25.515 [info] <0.269.0> Feature flags: feature flag states written to disk: yes 2021-03-22 13:48:25.767 [info] <0.269.0> 
Running boot step pre_boot defined by app rabbit 2021-03-22 13:48:25.768 [info] <0.269.0> Running boot step rabbit_core_metrics defined by app rabbit 2021-03-22 13:48:25.768 [info] <0.269.0> Running boot step rabbit_alarm defined by app rabbit 2021-03-22 13:48:25.772 [info] <0.443.0> Memory high watermark set to 409 MiB (429496729 bytes) of 1024 MiB (1073741824 bytes) total 2021-03-22 13:48:25.776 [info] <0.445.0> Enabling free disk space monitoring 2021-03-22 13:48:25.776 [info] <0.445.0> Disk free limit set to 50MB 2021-03-22 13:48:25.780 [info] <0.269.0> Running boot step code_server_cache defined by app rabbit 2021-03-22 13:48:25.780 [info] <0.269.0> Running boot step file_handle_cache defined by app rabbit 2021-03-22 13:48:25.780 [info] <0.448.0> Limiting to approx 1048479 file handles (943629 sockets) 2021-03-22 13:48:25.780 [info] <0.449.0> FHC read buffering: OFF 2021-03-22 13:48:25.780 [info] <0.449.0> FHC write buffering: ON 2021-03-22 13:48:25.781 [info] <0.269.0> Running boot step worker_pool defined by app rabbit 2021-03-22 13:48:25.781 [info] <0.377.0> Will use 8 processes for default worker pool 2021-03-22 13:48:25.781 [info] <0.377.0> Starting worker pool 'worker_pool' with 8 processes in it 2021-03-22 13:48:25.782 [info] <0.269.0> Running boot step database defined by app rabbit 2021-03-22 13:48:25.797 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:48:25.800 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:48:25.800 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:48:25.800 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:48:25.831 [info] <0.269.0> Waiting for Mnesia tables for 30000 ms, 9 retries left 2021-03-22 13:48:25.831 [info] <0.269.0> Successfully synced tables from a peer 2021-03-22 13:48:25.831 [info] <0.269.0> Will register with peer discovery backend rabbit_peer_discovery_k8s 2021-03-22 13:48:25.860 [info] <0.269.0> 
Running boot step database_sync defined by app rabbit 2021-03-22 13:48:25.860 [info] <0.269.0> Running boot step feature_flags defined by app rabbit 2021-03-22 13:48:25.860 [info] <0.269.0> Running boot step codec_correctness_check defined by app rabbit 2021-03-22 13:48:25.860 [info] <0.269.0> Running boot step external_infrastructure defined by app rabbit 2021-03-22 13:48:25.860 [info] <0.269.0> Running boot step rabbit_registry defined by app rabbit 2021-03-22 13:48:25.860 [info] <0.269.0> Running boot step rabbit_auth_mechanism_cr_demo defined by app rabbit 2021-03-22 13:48:25.861 [info] <0.269.0> Running boot step rabbit_queue_location_random defined by app rabbit 2021-03-22 13:48:25.861 [info] <0.269.0> Running boot step rabbit_event defined by app rabbit 2021-03-22 13:48:25.861 [info] <0.269.0> Running boot step rabbit_auth_mechanism_amqplain defined by app rabbit 2021-03-22 13:48:25.861 [info] <0.269.0> Running boot step rabbit_auth_mechanism_plain defined by app rabbit 2021-03-22 13:48:25.861 [info] <0.269.0> Running boot step rabbit_exchange_type_direct defined by app rabbit 2021-03-22 13:48:25.861 [info] <0.269.0> Running boot step rabbit_exchange_type_fanout defined by app rabbit 2021-03-22 13:48:25.861 [info] <0.269.0> Running boot step rabbit_exchange_type_headers defined by app rabbit 2021-03-22 13:48:25.862 [info] <0.269.0> Running boot step rabbit_exchange_type_topic defined by app rabbit 2021-03-22 13:48:25.862 [info] <0.269.0> Running boot step rabbit_mirror_queue_mode_all defined by app rabbit 2021-03-22 13:48:25.862 [info] <0.269.0> Running boot step rabbit_mirror_queue_mode_exactly defined by app rabbit 2021-03-22 13:48:25.862 [info] <0.269.0> Running boot step rabbit_mirror_queue_mode_nodes defined by app rabbit 2021-03-22 13:48:25.862 [info] <0.269.0> Running boot step rabbit_priority_queue defined by app rabbit 2021-03-22 13:48:25.862 [info] <0.269.0> Priority queues enabled, real BQ is rabbit_variable_queue 2021-03-22 13:48:25.862 [info] 
<0.269.0> Running boot step rabbit_queue_location_client_local defined by app rabbit 2021-03-22 13:48:25.862 [info] <0.269.0> Running boot step rabbit_queue_location_min_masters defined by app rabbit 2021-03-22 13:48:25.862 [info] <0.269.0> Running boot step kernel_ready defined by app rabbit 2021-03-22 13:48:25.863 [info] <0.269.0> Running boot step ldap_pool defined by app rabbitmq_auth_backend_ldap 2021-03-22 13:48:25.863 [info] <0.377.0> Starting worker pool 'ldap_pool' with 64 processes in it 2021-03-22 13:48:25.868 [info] <0.269.0> Running boot step rabbit_sysmon_minder defined by app rabbit 2021-03-22 13:48:25.868 [info] <0.269.0> Running boot step rabbit_epmd_monitor defined by app rabbit 2021-03-22 13:48:25.869 [info] <0.544.0> epmd monitor knows us, inter-node communication (distribution) port: 25672 2021-03-22 13:48:25.869 [info] <0.269.0> Running boot step guid_generator defined by app rabbit 2021-03-22 13:48:25.871 [info] <0.269.0> Running boot step rabbit_node_monitor defined by app rabbit 2021-03-22 13:48:25.871 [info] <0.548.0> Starting rabbit_node_monitor 2021-03-22 13:48:25.871 [info] <0.269.0> Running boot step delegate_sup defined by app rabbit 2021-03-22 13:48:25.873 [info] <0.269.0> Running boot step rabbit_memory_monitor defined by app rabbit 2021-03-22 13:48:25.873 [info] <0.269.0> Running boot step core_initialized defined by app rabbit 2021-03-22 13:48:25.873 [info] <0.269.0> Running boot step upgrade_queues defined by app rabbit 2021-03-22 13:48:25.900 [info] <0.269.0> Running boot step channel_tracking defined by app rabbit 2021-03-22 13:48:25.900 [info] <0.269.0> Setting up a table for channel tracking on this node: 'tracked_channel_on_node_rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:25.900 [info] <0.269.0> Setting up a table for channel tracking on this node: 'tracked_channel_table_per_user_on_node_rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:25.901 [info] <0.269.0> Running 
boot step rabbit_channel_tracking_handler defined by app rabbit 2021-03-22 13:48:25.901 [info] <0.269.0> Running boot step connection_tracking defined by app rabbit 2021-03-22 13:48:25.901 [info] <0.269.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:25.901 [info] <0.269.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:25.901 [info] <0.269.0> Setting up a table for per-user connection counting on this node: 'tracked_connection_table_per_user_on_node_rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:25.902 [info] <0.269.0> Running boot step rabbit_connection_tracking_handler defined by app rabbit 2021-03-22 13:48:25.902 [info] <0.269.0> Running boot step rabbit_exchange_parameters defined by app rabbit 2021-03-22 13:48:25.902 [info] <0.269.0> Running boot step rabbit_mirror_queue_misc defined by app rabbit 2021-03-22 13:48:25.902 [info] <0.269.0> Running boot step rabbit_policies defined by app rabbit 2021-03-22 13:48:25.903 [info] <0.269.0> Running boot step rabbit_policy defined by app rabbit 2021-03-22 13:48:25.904 [info] <0.269.0> Running boot step rabbit_queue_location_validator defined by app rabbit 2021-03-22 13:48:25.904 [info] <0.269.0> Running boot step rabbit_quorum_memory_manager defined by app rabbit 2021-03-22 13:48:25.904 [info] <0.269.0> Running boot step rabbit_vhost_limit defined by app rabbit 2021-03-22 13:48:25.904 [info] <0.269.0> Running boot step rabbit_mgmt_reset_handler defined by app rabbitmq_management 2021-03-22 13:48:25.904 [info] <0.269.0> Running boot step rabbit_mgmt_db_handler defined by app rabbitmq_management_agent 2021-03-22 13:48:25.904 [info] <0.269.0> Management plugin: using rates mode 'basic' 2021-03-22 13:48:25.905 [info] <0.269.0> Running boot step 
recovery defined by app rabbit 2021-03-22 13:48:25.906 [info] <0.587.0> Making sure data directory '/bitnami/rabbitmq/mnesia/rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists 2021-03-22 13:48:25.909 [info] <0.587.0> Starting message stores for vhost '/' 2021-03-22 13:48:25.909 [info] <0.591.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index 2021-03-22 13:48:25.912 [info] <0.587.0> Started message store of type transient for vhost '/' 2021-03-22 13:48:25.912 [info] <0.595.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index 2021-03-22 13:48:25.914 [info] <0.587.0> Started message store of type persistent for vhost '/' 2021-03-22 13:48:25.916 [info] <0.269.0> Running boot step empty_db_check defined by app rabbit 2021-03-22 13:48:25.916 [info] <0.269.0> Will not seed default virtual host and user: have definitions to load... 
2021-03-22 13:48:25.916 [info] <0.269.0> Running boot step rabbit_looking_glass defined by app rabbit 2021-03-22 13:48:25.916 [info] <0.269.0> Running boot step rabbit_core_metrics_gc defined by app rabbit 2021-03-22 13:48:25.917 [info] <0.269.0> Running boot step background_gc defined by app rabbit 2021-03-22 13:48:25.917 [info] <0.269.0> Running boot step routing_ready defined by app rabbit 2021-03-22 13:48:25.917 [info] <0.269.0> Running boot step pre_flight defined by app rabbit 2021-03-22 13:48:25.917 [info] <0.269.0> Running boot step notify_cluster defined by app rabbit 2021-03-22 13:48:25.917 [info] <0.269.0> Running boot step networking defined by app rabbit 2021-03-22 13:48:25.917 [info] <0.269.0> Running boot step definition_import_worker_pool defined by app rabbit 2021-03-22 13:48:25.917 [info] <0.377.0> Starting worker pool 'definition_import_pool' with 8 processes in it 2021-03-22 13:48:25.918 [info] <0.269.0> Running boot step cluster_name defined by app rabbit 2021-03-22 13:48:25.918 [info] <0.269.0> Running boot step direct_client defined by app rabbit 2021-03-22 13:48:25.918 [info] <0.269.0> Running boot step rabbit_management_load_definitions defined by app rabbitmq_management 2021-03-22 13:48:25.918 [info] <0.627.0> Resetting node maintenance status 2021-03-22 13:48:25.918 [info] <0.44.0> Application rabbit started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:25.923 [info] <0.44.0> Application rabbitmq_management_agent started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:25.923 [info] <0.44.0> Application cowlib started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:25.923 [info] <0.44.0> Application cowboy started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:25.924 [info] <0.44.0> Application rabbitmq_web_dispatch started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 
2021-03-22 13:48:25.924 [info] <0.44.0> Application amqp_client started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:25.956 [info] <0.686.0> Management plugin: HTTP (non-TLS) listener started on port 15672 2021-03-22 13:48:25.956 [info] <0.792.0> Statistics database started. 2021-03-22 13:48:25.956 [info] <0.791.0> Starting worker pool 'management_worker_pool' with 3 processes in it 2021-03-22 13:48:25.957 [info] <0.44.0> Application rabbitmq_management started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:26.000 [info] <0.44.0> Application prometheus started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:26.000 [info] <0.44.0> Application eldap started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:26.000 [warning] <0.806.0> LDAP plugin loaded, but rabbit_auth_backend_ldap is not in the list of auth_backends. LDAP auth will not work. 2021-03-22 13:48:26.000 [info] <0.44.0> Application rabbitmq_auth_backend_ldap started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local' 2021-03-22 13:48:26.000 [info] <0.812.0> Peer discovery: enabling node cleanup (will only log warnings). Check interval: 10 seconds. 
2021-03-22 13:48:26.000 [info] <0.44.0> Application rabbitmq_peer_discovery_common started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:48:26.001 [info] <0.44.0> Application rabbitmq_peer_discovery_k8s started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:48:26.005 [info] <0.820.0> Prometheus metrics: HTTP (non-TLS) listener started on port 9419
2021-03-22 13:48:26.006 [info] <0.44.0> Application rabbitmq_prometheus started on node 'rabbit@rabbitmq-0.rabbitmq-headless.sai.svc.cluster.local'
2021-03-22 13:48:26.006 [info] <0.627.0> Applying definitions from file at '/app/load_definition.json'
2021-03-22 13:48:26.006 [info] <0.627.0> Asked to import definitions. Acting user: rmq-internal
2021-03-22 13:48:26.006 [info] <0.627.0> Importing concurrently 1 users...
2021-03-22 13:48:26.008 [info] <0.618.0> Successfully changed password for user 'user'
2021-03-22 13:48:26.010 [info] <0.618.0> Successfully set user tags for user 'user' to [administrator]
2021-03-22 13:48:26.010 [info] <0.627.0> Importing concurrently 1 vhosts...
2021-03-22 13:48:26.010 [info] <0.627.0> Importing sequentially 1 policies...
2021-03-22 13:48:26.012 [info] <0.627.0> Ready to start client connection listeners
2021-03-22 13:48:26.014 [info] <0.942.0> started TCP listener on [::]:5672
2021-03-22 13:48:26.821 [info] <0.627.0> Server startup complete; 7 plugins started.
 * rabbitmq_prometheus
 * rabbitmq_peer_discovery_k8s
 * rabbitmq_peer_discovery_common
 * rabbitmq_auth_backend_ldap
 * rabbitmq_management
 * rabbitmq_web_dispatch
 * rabbitmq_management_agent
 completed with 7 plugins.
2021-03-22 13:48:26.821 [info] <0.627.0> Resetting node maintenance status
2021-03-22 13:48:46.991 [info] <0.548.0> node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' up
2021-03-22 13:48:47.578 [info] <0.548.0> rabbit on node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' up
2021-03-22 13:48:50.480 [info] <0.548.0> rabbit on node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' down
2021-03-22 13:48:50.484 [info] <0.548.0> Keeping rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local listeners: the node is already back
2021-03-22 13:48:50.583 [info] <0.548.0> node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' down: connection_closed
2021-03-22 13:48:50.978 [info] <0.1166.0> accepting AMQP connection <0.1166.0> (10.3.186.4:41784 -> 10.3.98.12:5672)
2021-03-22 13:48:51.028 [error] <0.1166.0> Error on AMQP connection <0.1166.0> (10.3.186.4:41784 -> 10.3.98.12:5672, user: 'user', state: opening): access to vhost '/' refused for user 'user'
2021-03-22 13:48:51.105 [info] <0.1172.0> accepting AMQP connection <0.1172.0> (10.3.166.13:60180 -> 10.3.98.12:5672)
2021-03-22 13:48:51.156 [error] <0.1172.0> Error on AMQP connection <0.1172.0> (10.3.166.13:60180 -> 10.3.98.12:5672, user: 'user', state: opening): access to vhost '/' refused for user 'user'
2021-03-22 13:48:51.946 [info] <0.1179.0> accepting AMQP connection <0.1179.0> (10.3.218.25:52952 -> 10.3.98.12:5672)
2021-03-22 13:48:52.208 [error] <0.1179.0> Error on AMQP connection <0.1179.0> (10.3.218.25:52952 -> 10.3.98.12:5672, user: 'user', state: opening): access to vhost '/' refused for user 'user'
2021-03-22 13:48:55.054 [info] <0.1189.0> accepting AMQP connection <0.1189.0> (10.3.250.24:34066 -> 10.3.98.12:5672)
2021-03-22 13:48:55.300 [error] <0.1189.0> Error on AMQP connection <0.1189.0> (10.3.250.24:34066 -> 10.3.98.12:5672, user: 'user', state: opening): access to vhost '/' refused for user 'user'
2021-03-22 13:48:55.886 [info] <0.1166.0> closing AMQP connection <0.1166.0> (10.3.186.4:41784 -> 10.3.98.12:5672, vhost: 'none', user: 'user')
2021-03-22 13:48:55.886 [info] <0.1195.0> Closing all channels from connection '10.3.186.4:41784 -> 10.3.98.12:5672' because it has been closed
2021-03-22 13:48:55.953 [info] <0.1172.0> closing AMQP connection <0.1172.0> (10.3.166.13:60180 -> 10.3.98.12:5672, vhost: 'none', user: 'user')
2021-03-22 13:48:55.953 [info] <0.1197.0> Closing all channels from connection '10.3.166.13:60180 -> 10.3.98.12:5672' because it has been closed
2021-03-22 13:48:56.008 [info] <0.812.0> k8s endpoint listing returned nodes not yet ready: rabbitmq-1
2021-03-22 13:48:56.008 [warning] <0.812.0> Peer discovery: node rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local is unreachable
2021-03-22 13:48:56.795 [info] <0.1179.0> closing AMQP connection <0.1179.0> (10.3.218.25:52952 -> 10.3.98.12:5672, vhost: 'none', user: 'user')
2021-03-22 13:48:56.796 [info] <0.1202.0> Closing all channels from connection '10.3.218.25:52952 -> 10.3.98.12:5672' because it has been closed
2021-03-22 13:48:59.668 [info] <0.1209.0> accepting AMQP connection <0.1209.0> (10.3.208.5:37608 -> 10.3.98.12:5672)
2021-03-22 13:48:59.672 [error] <0.1209.0> Error on AMQP connection <0.1209.0> (10.3.208.5:37608 -> 10.3.98.12:5672, user: 'user', state: opening): access to vhost '/' refused for user 'user'
2021-03-22 13:48:59.673 [info] <0.1209.0> closing AMQP connection <0.1209.0> (10.3.208.5:37608 -> 10.3.98.12:5672, vhost: 'none', user: 'user')
2021-03-22 13:48:59.673 [info] <0.1214.0> Closing all channels from connection '10.3.208.5:37608 -> 10.3.98.12:5672' because it has been closed
2021-03-22 13:48:59.814 [info] <0.1189.0> closing AMQP connection <0.1189.0> (10.3.250.24:34066 -> 10.3.98.12:5672, vhost: 'none', user: 'user')
2021-03-22 13:48:59.814 [info] <0.1216.0> Closing all channels from connection '10.3.250.24:34066 -> 10.3.98.12:5672' because it has been closed
2021-03-22 13:49:05.724 [info] <0.548.0> node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' up
2021-03-22 13:49:07.481 [info] <0.548.0> rabbit on node 'rabbit@rabbitmq-1.rabbitmq-headless.sai.svc.cluster.local' up
2021-03-22 13:49:11.060 [info] <0.1284.0> accepting AMQP connection <0.1284.0> (10.3.186.4:41952 -> 10.3.98.12:5672)
2021-03-22 13:49:11.108 [error] <0.1284.0> Error on AMQP connection <0.1284.0> (10.3.186.4:41952 -> 10.3.98.12:5672, user: 'user', state: opening): access to vhost '/' refused for user 'user'
2021-03-22 13:49:11.820 [info] <0.1290.0> accepting AMQP connection <0.1290.0> (10.3.166.13:60378 -> 10.3.98.12:5672)
2021-03-22 13:49:11.868 [error] <0.1290.0> Error on AMQP connection <0.1290.0> (10.3.166.13:60378 -> 10.3.98.12:5672, user: 'user', state: opening): access to vhost '/' refused for user 'user'
2021-03-22 13:49:15.938 [info] <0.1284.0> closing AMQP connection <0.1284.0> (10.3.186.4:41952 -> 10.3.98.12:5672, vhost: 'none', user: 'user')
2021-03-22 13:49:15.939 [info] <0.1296.0> Closing all channels from connection '10.3.186.4:41952 -> 10.3.98.12:5672' because it has been closed
2021-03-22 13:49:16.733 [info] <0.1290.0> closing AMQP connection <0.1290.0> (10.3.166.13:60378 -> 10.3.98.12:5672, vhost: 'none', user: 'user')
2021-03-22 13:49:16.735 [info] <0.1300.0> Closing all channels from connection '10.3.166.13:60378 -> 10.3.98.12:5672' because it has been closed
2021-03-22 13:49:17.583 [info] <0.1303.0> accepting AMQP connection <0.1303.0> (10.3.218.25:53252 -> 10.3.98.12:5672)
2021-03-22 13:49:17.684 [error] <0.1303.0> Error on AMQP connection <0.1303.0> (10.3.218.25:53252 -> 10.3.98.12:5672, user: 'user', state: opening): access to vhost '/' refused for user 'user'
2021-03-22 13:49:22.479 [info] <0.1303.0> closing AMQP connection <0.1303.0> (10.3.218.25:53252 -> 10.3.98.12:5672, vhost: 'none', user: 'user')
2021-03-22 13:49:22.481 [info] <0.1309.0> Closing all channels from connection '10.3.218.25:53252 -> 10.3.98.12:5672' because it has been closed
2021-03-22 13:49:22.756 [info] <0.1312.0> accepting AMQP connection <0.1312.0> (10.3.250.24:34400 -> 10.3.98.12:5672)
2021-03-22 13:49:23.096 [error] <0.1312.0> Error on AMQP connection <0.1312.0> (10.3.250.24:34400 -> 10.3.98.12:5672, user: 'user', state: opening): access to vhost '/' refused for user 'user'
2021-03-22 13:49:23.327 [info] <0.1318.0> accepting AMQP connection <0.1318.0> (10.3.202.25:60146 -> 10.3.98.12:5672)
2021-03-22 13:49:23.656 [error] <0.1318.0> Error on AMQP connection <0.1318.0> (10.3.202.25:60146 -> 10.3.98.12:5672, user: 'user', state: opening): access to vhost '/' refused for user 'user'
2021-03-22 13:49:27.510 [info] <0.1312.0> closing AMQP connection <0.1312.0> (10.3.250.24:34400 -> 10.3.98.12:5672, vhost: 'none', user: 'user')
2021-03-22 13:49:27.511 [info] <0.1327.0> Closing all channels from connection '10.3.250.24:34400 -> 10.3.98.12:5672' because it has been closed
2021-03-22 13:49:28.081 [info] <0.1318.0> closing AMQP connection <0.1318.0> (10.3.202.25:60146 -> 10.3.98.12:5672, vhost: 'none', user: 'user')
2021-03-22 13:49:28.083 [info] <0.1329.0> Closing all channels from connection '10.3.202.25:60146 -> 10.3.98.12:5672' because it has been closed
```

Password changed :(

esteban1983cl commented 3 years ago

No answer about this?

javsalgar commented 3 years ago

Hi,

This is very strange because this time it doesn't show the "couldn't change password" error. It says that the password was changed without issues. I'm still unable to reproduce it: every time I delete the pods, the password keeps working.

It's true that I see this

2021-03-22 13:48:26.006 [info] <0.627.0> Applying definitions from file at '/app/load_definition.json'
2021-03-22 13:48:26.006 [info] <0.627.0> Asked to import definitions. Acting user: rmq-internal
2021-03-22 13:48:26.006 [info] <0.627.0> Importing concurrently 1 users...
2021-03-22 13:48:26.008 [info] <0.618.0> Successfully changed password for user 'user'
2021-03-22 13:48:26.010 [info] <0.618.0> Successfully set user tags for user 'user' to [administrator]

It seems that the password gets changed a second time. Is there anything in the definitions that could be causing this?
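
One way to make the credentials survive any definitions import is to declare the user inside the definitions file itself, with a pre-computed `password_hash`. This is only a sketch: the hash value below is a placeholder, and it assumes RabbitMQ's default `rabbit_password_hashing_sha256` scheme:

```json
{
  "vhosts": [{ "name": "/" }],
  "users": [
    {
      "name": "user",
      "password_hash": "<base64 of salt + SHA-256(salt + password)>",
      "hashing_algorithm": "rabbit_password_hashing_sha256",
      "tags": "administrator"
    }
  ],
  "permissions": [
    { "user": "user", "vhost": "/", "configure": ".*", "write": ".*", "read": ".*" }
  ]
}
```

With the user pinned in the definitions, a restart that re-imports the file re-applies the same credentials instead of leaving them in an inconsistent state.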

javsalgar commented 3 years ago

Just a note to let you know that @andresbono is working on fixing the issue. We will let you know any news on this

esteban1983cl commented 3 years ago

> Just a note to let you know that @andresbono is working on fixing the issue. We will let you know any news on this

Thank you very much for all your support. I'll wait for the update.

github-actions[bot] commented 3 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

javsalgar commented 3 years ago

Just a note to let you know that this should be fixed in the latest version of the chart.

esteban1983cl commented 3 years ago

Hello, it doesn't work: the user and password are not set when using load definitions.

Chart values file yaml ```yaml ## Global Docker image parameters ## Please, note that this will override the image parameters, including dependencies, configured to use the global value ## Current available global Docker image parameters: imageRegistry and imagePullSecrets ## global: # imageRegistry: myRegistryName imagePullSecrets: - docker-pull-secrets # storageClass: myStorageClass ## Bitnami RabbitMQ image version ## ref: https://hub.docker.com/r/bitnami/rabbitmq/tags/ ## image: registry: docker.io repository: bitnami/rabbitmq tag: 3.8.14-debian-10-r24 ## set to true if you would like to see extra information on logs ## It turns BASH and/or NAMI debugging in the image ## debug: true ## Specify a imagePullPolicy ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images ## pullPolicy: IfNotPresent ## Optionally specify an array of imagePullSecrets. ## Secrets must be manually created in the namespace. 
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ ## # pullSecrets: # - myRegistryKeySecretName ## String to partially override rabbitmq.fullname template (will maintain the release name) ## nameOverride: rabbitmq ## String to fully override rabbitmq.fullname template ## fullnameOverride: rabbitmq ## Force target Kubernetes version (using Helm capabilites if not set) ## kubeVersion: ## Kubernetes Cluster Domain ## clusterDomain: cluster.local ## Deployment pod host aliases ## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/ ## hostAliases: [] ## RabbitMQ Authentication parameters ## auth: ## RabbitMQ application username ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables ## username: user ## RabbitMQ application password ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables ## # password: existingPasswordSecret: rabbitmq-secrets ## Erlang cookie to determine whether different nodes are allowed to communicate with each other ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables ## # erlangCookie: existingErlangSecret: rabbitmq-secrets ## Enable encryption to rabbitmq ## ref: https://www.rabbitmq.com/ssl.html ## tls: enabled: false failIfNoPeerCert: true sslOptionsVerify: verify_peer caCertificate: |- serverCertificate: |- serverKey: |- # existingSecret: name-of-existing-secret-to-rabbitmq existingSecretFullChain: false ## Value for the RABBITMQ_LOGS environment variable ## ref: https://www.rabbitmq.com/logging.html#log-file-location ## logs: '-' ## RabbitMQ Max File Descriptors ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables ## ref: https://www.rabbitmq.com/install-debian.html#kernel-resource-limits ## ulimitNofiles: '65536' ## RabbitMQ maximum available scheduler threads and online scheduler threads. 
By default it will create a thread per CPU detected, with the following parameters you can tune it manually. ## ref: https://hamidreza-s.github.io/erlang/scheduling/real-time/preemptive/migration/2016/02/09/erlang-scheduler-details.html#scheduler-threads ## ref: https://github.com/bitnami/charts/issues/2189 ## # maxAvailableSchedulers: 2 # onlineSchedulers: 1 ## The memory threshold under which RabbitMQ will stop reading from client network sockets, in order to avoid being killed by the OS ## ref: https://www.rabbitmq.com/alarms.html ## ref: https://www.rabbitmq.com/memory.html#threshold ## memoryHighWatermark: enabled: true ## Memory high watermark type. Either absolute or relative ## type: 'relative' ## Memory high watermark value. ## The default value of 0.4 stands for 40% of available RAM ## Note: the memory relative limit is applied to the resource.limits.memory to calculate the memory threshold ## You can also use an absolute value, e.g.: 256MB ## value: 0.4 ## Plugins to enable ## plugins: 'rabbitmq_management rabbitmq_peer_discovery_k8s' ## Community plugins to download during container initialization. ## Combine it with extraPlugins to also enable them. ## # communityPlugins: ## Extra plugins to enable ## Use this instead of `plugins` to add new plugins ## extraPlugins: 'rabbitmq_auth_backend_ldap' ## Clustering settings ## clustering: enabled: true addressType: hostname ## Rebalance master for queues in cluster when new replica is created ## ref: https://www.rabbitmq.com/rabbitmq-queues.8.html#rebalance ## rebalance: true ## forceBoot: executes 'rabbitmqctl force_boot' to force boot cluster shut down unexpectedly in an ## unknown order. ## ref: https://www.rabbitmq.com/rabbitmqctl.8.html#force_boot ## forceBoot: false ## Loading a RabbitMQ definitions file to configure RabbitMQ ## loadDefinition: enabled: true ## Can be templated if needed, e.g. 
## existingSecret: "{{ .Release.Name }}-load-definition" ## existingSecret: rabbitmq-load-definitions ## Command and args for running the container (set to default if not set). Use array form ## # command: # args: ## Default duration in seconds k8s waits for container to exit before sending kill signal. Any time in excess of ## 10 seconds will be spent waiting for any synchronization necessary for cluster not to lose data. ## terminationGracePeriodSeconds: 120 ## Additional environment variables to set ## E.g: ## extraEnvVars: ## - name: FOO ## value: BAR ## extraEnvVars: [] ## ConfigMap with extra environment variables ## # extraEnvVarsCM: ## Secret with extra environment variables ## # extraEnvVarsSecret: ## Extra ports to be included in container spec, primarily informational ## E.g: ## extraContainerPorts: ## - name: new_port_name ## containerPort: 1234 ## extraContainerPorts: [] ## Configuration file content: required cluster configuration ## Do not override unless you know what you are doing. ## To add more configuration, use `extraConfiguration` of `advancedConfiguration` instead ## configuration: |- {{- if not .Values.loadDefinition.enabled -}} ## Username and password ## default_user = {{ .Values.auth.username }} default_pass = CHANGEME {{- end }} {{- if .Values.clustering.enabled }} ## Clustering ## cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s cluster_formation.k8s.host = kubernetes.default.svc.{{ .Values.clusterDomain }} cluster_formation.node_cleanup.interval = 10 cluster_formation.node_cleanup.only_log_warning = true cluster_partition_handling = autoheal {{- end }} # queue master locator queue_master_locator = min-masters # enable guest user loopback_users.guest = false {{ tpl .Values.extraConfiguration . 
}} {{- if .Values.auth.tls.enabled }} ssl_options.verify = {{ .Values.auth.tls.sslOptionsVerify }} listeners.ssl.default = {{ .Values.service.tlsPort }} ssl_options.fail_if_no_peer_cert = {{ .Values.auth.tls.failIfNoPeerCert }} ssl_options.cacertfile = /opt/bitnami/rabbitmq/certs/ca_certificate.pem ssl_options.certfile = /opt/bitnami/rabbitmq/certs/server_certificate.pem ssl_options.keyfile = /opt/bitnami/rabbitmq/certs/server_key.pem {{- end }} {{- if .Values.ldap.enabled }} auth_backends.1 = rabbit_auth_backend_ldap auth_backends.2 = internal {{- range $index, $server := .Values.ldap.servers }} auth_ldap.servers.{{ add $index 1 }} = {{ $server }} {{- end }} auth_ldap.port = {{ .Values.ldap.port }} auth_ldap.user_dn_pattern = {{ .Values.ldap.user_dn_pattern }} {{- if .Values.ldap.tls.enabled }} auth_ldap.use_ssl = true {{- end }} {{- end }} {{- if .Values.metrics.enabled }} ## Prometheus metrics ## prometheus.tcp.port = 9419 {{- end }} {{- if .Values.memoryHighWatermark.enabled }} ## Memory Threshold ## total_memory_available_override_value = {{ include "rabbitmq.toBytes" .Values.resources.limits.memory }} vm_memory_high_watermark.{{ .Values.memoryHighWatermark.type }} = {{ .Values.memoryHighWatermark.value }} {{- end }} ## Configuration file content: extra configuration ## Use this instead of `configuration` to add more configuration ## extraConfiguration: |- #default_vhost = {{ .Release.Namespace }}-vhost #disk_free_limit.absolute = 50MB load_definitions = /app/load_definition.json ## Configuration file content: advanced configuration ## Use this as additional configuration in classic config format (Erlang term configuration format) ## ## If you set LDAP with TLS/SSL enabled and you are using self-signed certificates, uncomment these lines. ## advancedConfiguration: |- ## [{ ## rabbitmq_auth_backend_ldap, ## [{ ## ssl_options, ## [{ ## verify, verify_none ## }, { ## fail_if_no_peer_cert, ## false ## }] ## ]} ## }]. 
## advancedConfiguration: |- ## LDAP configuration ## ldap: enabled: false ## List of LDAP servers hostnames ## servers: [] ## LDAP servers port ## port: '389' ## Pattern used to translate the provided username into a value to be used for the LDAP bind ## ref: https://www.rabbitmq.com/ldap.html#usernames-and-dns ## user_dn_pattern: cn=${username},dc=example,dc=org tls: ## If you enabled TLS/SSL you can set advaced options using the advancedConfiguration parameter. ## enabled: false ## extraVolumes and extraVolumeMounts allows you to mount other volumes ## Examples: ## extraVolumeMounts: ## - name: extras ## mountPath: /usr/share/extras ## readOnly: true ## extraVolumes: ## - name: extras ## emptyDir: {} ## extraVolumeMounts: [] extraVolumes: [] ## Optionally specify extra secrets to be created by the chart. ## This can be useful when combined with load_definitions to automatically create the secret containing the definitions to be loaded. ## Example: ## extraSecrets: ## load-definition: ## load_definition.json: | ## { ## ... ## } ## ## Set this flag to true if extraSecrets should be created with prepended. ## extraSecretsPrependReleaseName: false extraSecrets: rabbitmq-load-definitions: load_definition.json: | { "permissions": [ { "user": "user", "vhost": "/", "configure": ".*", "write": ".*", "read": ".*" } ], "vhosts": [ { "name": "/" } ] } ## Number of RabbitMQ replicas to deploy ## replicaCount: 3 ## Use an alternate scheduler, e.g. "stork". ## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/ ## # schedulerName: ## RabbitMQ should be initialized one by one when building cluster for the first time. ## Therefore, the default value of podManagementPolicy is 'OrderedReady' ## Once the RabbitMQ participates in the cluster, it waits for a response from another ## RabbitMQ in the same cluster at reboot, except the last RabbitMQ of the same cluster. 
## If the cluster exits gracefully, you do not need to change the podManagementPolicy ## because the first RabbitMQ of the statefulset always will be last of the cluster. ## However if the last RabbitMQ of the cluster is not the first RabbitMQ due to a failure, ## you must change podManagementPolicy to 'Parallel'. ## ref : https://www.rabbitmq.com/clustering.html#restarting ## podManagementPolicy: OrderedReady ## Pod labels. Evaluated as a template ## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ ## podLabels: k8s-app: rabbitmq-sai ## Pod annotations. Evaluated as a template ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ ## podAnnotations: {} ## updateStrategy for RabbitMQ statefulset ## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies ## updateStrategyType: RollingUpdate ## Statefulset labels. Evaluated as a template ## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/ ## statefulsetLabels: {} ## Name of the priority class to be used by RabbitMQ pods, priority class needs to be created beforehand ## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/ ## priorityClassName: '' ## Pod affinity preset ## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity ## Allowed values: soft, hard ## podAffinityPreset: "" ## Pod anti-affinity preset ## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity ## Allowed values: soft, hard ## podAntiAffinityPreset: "soft" ## Node affinity preset ## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity ## Allowed values: soft, hard ## nodeAffinityPreset: ## Node affinity type ## Allowed values: soft, hard ## type: "" ## Node label key to match ## E.g. 
## key: "kubernetes.io/e2e-az-name" ## key: "" ## Node label values to match ## E.g. ## values: ## - e2e-az1 ## - e2e-az2 ## values: [] ## Affinity for pod assignment. Evaluated as a template ## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity ## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set ## affinity: {} ## Node labels for pod assignment. Evaluated as a template ## ref: https://kubernetes.io/docs/user-guide/node-selection/ ## nodeSelector: node.kubernetes.io/role: "user" ## Tolerations for pod assignment. Evaluated as a template ## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/ ## tolerations: [] ## Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template ## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods ## topologySpreadConstraints: {} ## RabbitMQ pods' Security Context ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod ## podSecurityContext: enabled: true fsGroup: 1001 runAsUser: 1001 ## RabbitMQ containers' Security Context ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container ## Example: ## containerSecurityContext: ## capabilities: ## drop: ["NET_RAW"] ## readOnlyRootFilesystem: true ## containerSecurityContext: {} ## RabbitMQ containers' resource requests and limits ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## resources: # We usually recommend not to specify default resources and to leave this as a conscious # choice for the user. This also increases chances charts run on environments with little # resources, such as Minikube. 
If you do want to specify resources, uncomment the following # lines, adjust them as necessary, and remove the curly braces after 'resources:'. limits: cpu: 1000m memory: 1Gi requests: cpu: 1000m memory: 500Mi ## RabbitMQ containers' liveness and readiness probes. ## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes ## livenessProbe: enabled: true initialDelaySeconds: 120 timeoutSeconds: 20 periodSeconds: 30 failureThreshold: 6 successThreshold: 1 readinessProbe: enabled: true initialDelaySeconds: 10 timeoutSeconds: 20 periodSeconds: 30 failureThreshold: 3 successThreshold: 1 ## Custom Liveness probe ## customLivenessProbe: {} ## Custom Rediness probe ## customReadinessProbe: {} ## Custom Startup probe ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes ## customStartupProbe: {} ## Add init containers to the pod ## Example: ## initContainers: ## - name: your-image-name ## image: your-image ## imagePullPolicy: Always ## ports: ## - name: portname ## containerPort: 1234 ## initContainers: {} ## Add sidecars to the pod. ## Example: ## sidecars: ## - name: your-image-name ## image: your-image ## imagePullPolicy: Always ## ports: ## - name: portname ## containerPort: 1234 ## sidecars: {} ## RabbitMQ pods ServiceAccount ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/ ## serviceAccount: ## Specifies whether a ServiceAccount should be created ## create: true ## The name of the ServiceAccount to use. 
## If not set and create is true, a name is generated using the rabbitmq.fullname template ## # name: ## Role Based Access ## ref: https://kubernetes.io/docs/admin/authorization/rbac/ ## rbac: ## Whether RBAC rules should be created ## binding RabbitMQ ServiceAccount to a role ## that allows RabbitMQ pods querying the K8s API ## create: true persistence: ## this enables PVC templates that will create one per pod ## enabled: false ## rabbitmq data Persistent Volume Storage Class ## If defined, storageClassName: ## If set to "-", storageClassName: "", which disables dynamic provisioning ## If undefined (the default) or set to null, no storageClassName spec is ## set, choosing the default provisioner. (gp2 on AWS, standard on ## GKE, AWS & OpenStack) ## storageClass: "gp2" ## selector can be used to match an existing PersistentVolume ## selector: ## matchLabels: ## app: my-app selector: matchLabels: type: rabbitmq-pvc accessMode: ReadWriteOnce ## Existing PersistentVolumeClaims ## The value is evaluated as a template ## So, for example, the name can depend on .Release or .Chart # existingClaim: "" ## If you change this value, you might have to adjust `rabbitmq.diskFreeLimit` as well. ## size: 8Gi volumes: # - name: volume_name # emptyDir: {} ## Pod Disruption Budget configuration ## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/ ## pdb: create: false ## Min number of pods that must still be available after the eviction ## minAvailable: 1 ## Max number of pods that can be unavailable after the eviction ## # maxUnavailable: 1 ## Network Policy configuration ## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/ ## networkPolicy: ## Enable creation of NetworkPolicy resources ## enabled: false ## The Policy model to apply. When set to false, only pods with the correct ## client label will have network access to the ports RabbitMQ is listening ## on. 
When true, RabbitMQ will accept connections from any source ## (with the correct destination port). ## allowExternal: true ## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed. ## # additionalRules: # - matchLabels: # - role: frontend # - matchExpressions: # - key: role # operator: In # values: # - frontend ## Kubernetes service type ## service: type: ClusterIP ## Amqp port ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables ## port: 5672 ## Amqp service port name ## portName: amqp ## Amqp Tls port ## tlsPort: 5671 ## Amqp Tls service port name ## tlsPortName: amqp-ssl ## Node port ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables ## # nodePort: 30672 ## Node port Tls ## # tlsNodePort: 30671 ## Dist port ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables ## distPort: 25672 ## Dist service port name ## distPortName: dist ## Node port (Manager) ## # distNodePort: 30676 ## RabbitMQ Manager port ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables ## managerPortEnabled: true managerPort: 15672 ## RabbitMQ Manager service port name ## managerPortName: http-stats ## Node port (Manager) ## # managerNodePort: 30673 ## RabbitMQ Prometheues metrics port ## metricsPort: 9419 ## RabbitMQ Prometheues metrics service port name ## metricsPortName: metrics ## Node port for metrics ## # metricsNodePort: 30674 ## Node port for EPMD Discovery ## # epmdNodePort: 30675 ## Service port name for EPMD Discovery ## epmdPortName: epmd ## Extra ports to expose ## E.g.: ## extraPorts: ## - name: new_svc_name ## port: 1234 ## targetPort: 1234 ## extraPorts: [] ## Load Balancer sources ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service ## # loadBalancerSourceRanges: # - 10.10.10.0/24 ## Set the ExternalIPs ## # externalIPs: ## Enable client source IP 
preservation ## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip ## externalTrafficPolicy: Cluster ## Set the LoadBalancerIP ## # loadBalancerIP: ## Service labels. Evaluated as a template ## labels: {} ## Service annotations. Evaluated as a template ## Example: ## annotations: ## service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0 ## annotations: {} ## Headless Service annotations. Evaluated as a template ## Example: ## annotations: ## external-dns.alpha.kubernetes.io/internal-hostname: rabbitmq.example.com ## annotationsHeadless: {} ## Configure the ingress resource that allows you to access the ## RabbitMQ installation. Set up the URL ## ref: http://kubernetes.io/docs/user-guide/ingress/ ## ingress: ## Set to true to enable ingress record generation ## enabled: true ## Path for the default host. You may need to set this to '/*' in order to use this ## with ALB ingress controllers. ## path: / ## Ingress Path type ## pathType: ImplementationSpecific ## Set this to true in order to add the corresponding annotations for cert-manager ## certManager: false ## When the ingress is enabled, a host pointing to this will be created ## hostname: rabbitmq.example.com ## Ingress annotations done as key:value pairs ## For a full list of possible ingress annotations, please see ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md ## ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set ## annotations: kubernetes.io/ingress.class: nginx-http-private external-dns.alpha.kubernetes.io/set-identifier: green external-dns.alpha.kubernetes.io/aws-weight: "1" ## Enable TLS configuration for the hostname defined at ingress.hostname parameter ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }} ## or a custom one if you use the 
tls.existingSecret parameter ## You can use the ingress.secrets parameter to create this TLS secret or relay on cert-manager to create it ## tls: false ## existingSecret: name-of-existing-secret ## ## The list of additional hostnames to be covered with this ingress record. ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array ## extraHosts: ## - name: rabbitmq.local ## path: / ## ## The tls configuration for additional hostnames to be covered with this ingress record. ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls ## extraTls: ## - hosts: ## - rabbitmq.local ## secretName: rabbitmq.local-tls ## ## If you're providing your own certificates, please use this to add the certificates as secrets ## key and certificate should start with -----BEGIN CERTIFICATE----- or ## -----BEGIN RSA PRIVATE KEY----- ## ## name should line up with a tlsSecret set further up ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set ## ## It is also possible to create and manage the certificates outside of this helm chart ## Please see README.md for more information ## secrets: [] ## - name: rabbitmq.local-tls ## key: ## certificate: ## ## Prometheus Metrics ## metrics: enabled: true plugins: 'rabbitmq_prometheus' ## Prometheus pod annotations ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/ ## podAnnotations: prometheus.io/scrape: 'true' prometheus.io/port: '{{ .Values.service.metricsPort }}' ## Prometheus Service Monitor ## ref: https://github.com/coreos/prometheus-operator ## serviceMonitor: ## If the operator is installed in your cluster, set to true to create a Service Monitor Entry ## enabled: true ## Specify the namespace in which the serviceMonitor resource will be created ## namespace: "monitoring" ## Specify the interval at which metrics should be scraped ## interval: 30s ## Specify the timeout after which the 
scrape is ended ## # scrapeTimeout: 30s ## Specify Metric Relabellings to add to the scrape endpoint ## # relabellings: ## Specify honorLabels parameter to add the scrape endpoint ## honorLabels: false ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec ## additionalLabels: release: kube-prometheus-stack ## Custom PrometheusRule to be defined ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions ## prometheusRule: enabled: true additionalLabels: release: kube-prometheus-stack namespace: 'monitoring' ## List of rules, used as template by Helm. ## These are just examples rules inspired from https://awesome-prometheus-alerts.grep.to/rules.html rules: - alert: RabbitmqDown expr: rabbitmq_up{service="{{ template "rabbitmq.fullname" . }}"} == 0 for: 5m labels: severity: error annotations: summary: Rabbitmq down (instance {{ "{{ $labels.instance }}" }}) description: RabbitMQ node down - alert: ClusterDown expr: | sum(rabbitmq_running{service="{{ template "rabbitmq.fullname" . }}"}) < {{ .Values.replicaCount }} for: 5m labels: severity: error annotations: summary: Cluster down (instance {{ "{{ $labels.instance }}" }}) description: | Less than {{ .Values.replicaCount }} nodes running in RabbitMQ cluster VALUE = {{ "{{ $value }}" }} - alert: ClusterPartition expr: rabbitmq_partitions{service="{{ template "rabbitmq.fullname" . }}"} > 0 for: 5m labels: severity: error annotations: summary: Cluster partition (instance {{ "{{ $labels.instance }}" }}) description: | Cluster partition VALUE = {{ "{{ $value }}" }} - alert: OutOfMemory expr: | rabbitmq_node_mem_used{service="{{ template "rabbitmq.fullname" . }}"} / rabbitmq_node_mem_limit{service="{{ template "rabbitmq.fullname" . 
}}"} * 100 > 90 for: 5m labels: severity: warning annotations: summary: Out of memory (instance {{ "{{ $labels.instance }}" }}) description: | Memory available for RabbmitMQ is low (< 10%)\n VALUE = {{ "{{ $value }}" }} LABELS: {{ "{{ $labels }}" }} - alert: TooManyConnections expr: rabbitmq_connectionsTotal{service="{{ template "rabbitmq.fullname" . }}"} > 1000 for: 5m labels: severity: warning annotations: summary: Too many connections (instance {{ "{{ $labels.instance }}" }}) description: | RabbitMQ instance has too many connections (> 1000) VALUE = {{ "{{ $value }}" }}\n LABELS: {{ "{{ $labels }}" }} #rules: [] ## Init Container parameters ## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each component ## values from the securityContext section of the component ## volumePermissions: enabled: false image: registry: docker.io repository: bitnami/bitnami-shell tag: "10" ## Specify a imagePullPolicy ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent' ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images ## pullPolicy: IfNotPresent ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace) ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/ ## Example: ## pullSecrets: ## - myRegistryKeySecretName ## pullSecrets: - docker-pull-secrets ## Init Container resource requests and limits ## ref: http://kubernetes.io/docs/user-guide/compute-resources/ ## resources: # We usually recommend not to specify default resources and to leave this as a conscious # choice for the user. This also increases chances charts run on environments with little # resources, such as Minikube. If you do want to specify resources, uncomment the following # lines, adjust them as necessary, and remove the curly braces after 'resources:'. limits: {} # cpu: 100m # memory: 128Mi requests: {} # cpu: 100m # memory: 128Mi ```
javsalgar commented 3 years ago

Hi,

In that case you should set the user and password in the load definitions file. Did you try that?

github-actions[bot] commented 3 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

esteban1983cl commented 3 years ago

Hello everyone, I solved this issue with the following configuration:

I specified existing secrets to manage the passwords and the load definitions:

```yaml
auth:
  username: user
  existingPasswordSecret: rabbitmq-secrets
  existingErlangSecret: rabbitmq-secrets
loadDefinition:
  enabled: true
  existingSecret: rabbitmq-load-definitions
```
load_definition.json:

```json
{
  "vhosts": [
    {
      "name": "/"
    }
  ],
  "users": [
    {
      "name": "user",
      "password": "xxxxxxxxxxx",
      "tags": "administrator"
    }
  ],
  "policies": [
    {
      "name": "ha-all",
      "pattern": ".*\\..*",
      "vhost": "/",
      "definition": {
        "ha-mode": "all"
      }
    }
  ],
  "permissions": [
    {
      "user": "user",
      "vhost": "/",
      "configure": ".*",
      "write": ".*",
      "read": ".*"
    }
  ]
}
```
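One subtlety with the policy pattern: in JSON a backslash must itself be escaped, so the regex `.*\..*` has to be written as `".*\\..*"` in the definitions file. A quick way to sanity-check the file before putting it into the secret (a sketch, not part of the chart, using stripped-down example snippets):

```python
import json

# The text a load_definition.json would contain. Note the doubled
# backslash: JSON "\\" decodes to a single backslash, giving the regex .*\..*
good = '{"pattern": ".*\\\\..*"}'
print(json.loads(good)["pattern"])  # -> .*\..*

# A single backslash before '.' is not a valid JSON escape, so a strict
# parser rejects the whole document.
bad = '{"pattern": ".*\\..*"}'
try:
    json.loads(bad)
except json.JSONDecodeError as err:
    print("rejected:", err)
```

Running the check against the real file (`json.load(open("load_definition.json"))`) before deploying catches this class of mistake early.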

Here is my values.yaml for chart version 8.11.9:

rabbitmq chart value file

```yaml
## Global Docker image parameters
## Please, note that this will override the image parameters, including dependencies, configured to use the global value
## Current available global Docker image parameters: imageRegistry and imagePullSecrets
##
global:
  # imageRegistry: myRegistryName
  # imagePullSecrets:
  #   - docker-pull-secrets
  # storageClass: myStorageClass

## Bitnami RabbitMQ image version
## ref: https://hub.docker.com/r/bitnami/rabbitmq/tags/
##
image:
  registry: docker.io
  repository: bitnami/rabbitmq
  tag: 3.8.14-debian-10-r24
  ## set to true if you would like to see extra information on logs
  ## It turns BASH and/or NAMI debugging in the image
  ##
  debug: true
  ## Specify a imagePullPolicy
  ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
  ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
  ##
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistryKeySecretName

## String to partially override rabbitmq.fullname template (will maintain the release name)
##
nameOverride: rabbitmq

## String to fully override rabbitmq.fullname template
##
fullnameOverride: rabbitmq

## Force target Kubernetes version (using Helm capabilites if not set)
##
kubeVersion:

## Kubernetes Cluster Domain
##
clusterDomain: cluster.local

## Deployment pod host aliases
## https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/
##
hostAliases: []

## RabbitMQ Authentication parameters
##
auth:
  ## RabbitMQ application username
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  username: user
  ## RabbitMQ application password
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # password:
  existingPasswordSecret: rabbitmq-secrets
  ## Erlang cookie to determine whether different nodes are allowed to communicate with each other
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # erlangCookie:
  existingErlangSecret: rabbitmq-secrets
  ## Enable encryption to rabbitmq
  ## ref: https://www.rabbitmq.com/ssl.html
  ##
  tls:
    enabled: false
    failIfNoPeerCert: true
    sslOptionsVerify: verify_peer
    caCertificate: |-
    serverCertificate: |-
    serverKey: |-
    # existingSecret: name-of-existing-secret-to-rabbitmq
    existingSecretFullChain: false

## Value for the RABBITMQ_LOGS environment variable
## ref: https://www.rabbitmq.com/logging.html#log-file-location
##
logs: '-'

## RabbitMQ Max File Descriptors
## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
## ref: https://www.rabbitmq.com/install-debian.html#kernel-resource-limits
##
ulimitNofiles: '65536'

## RabbitMQ maximum available scheduler threads and online scheduler threads.
## By default it will create a thread per CPU detected, with the following parameters you can tune it manually.
## ref: https://hamidreza-s.github.io/erlang/scheduling/real-time/preemptive/migration/2016/02/09/erlang-scheduler-details.html#scheduler-threads
## ref: https://github.com/bitnami/charts/issues/2189
##
# maxAvailableSchedulers: 2
# onlineSchedulers: 1

## The memory threshold under which RabbitMQ will stop reading from client network sockets, in order to avoid being killed by the OS
## ref: https://www.rabbitmq.com/alarms.html
## ref: https://www.rabbitmq.com/memory.html#threshold
##
memoryHighWatermark:
  enabled: true
  ## Memory high watermark type. Either absolute or relative
  ##
  type: 'relative'
  ## Memory high watermark value.
  ## The default value of 0.4 stands for 40% of available RAM
  ## Note: the memory relative limit is applied to the resource.limits.memory to calculate the memory threshold
  ## You can also use an absolute value, e.g.: 256MB
  ##
  value: 0.4

## Plugins to enable
##
plugins: 'rabbitmq_management rabbitmq_peer_discovery_k8s'

## Community plugins to download during container initialization.
## Combine it with extraPlugins to also enable them.
##
# communityPlugins:

## Extra plugins to enable
## Use this instead of `plugins` to add new plugins
##
extraPlugins: 'rabbitmq_auth_backend_ldap'

## Clustering settings
##
clustering:
  enabled: true
  addressType: hostname
  ## Rebalance master for queues in cluster when new replica is created
  ## ref: https://www.rabbitmq.com/rabbitmq-queues.8.html#rebalance
  ##
  rebalance: true
  ## forceBoot: executes 'rabbitmqctl force_boot' to force boot cluster shut down unexpectedly in an
  ## unknown order.
  ## ref: https://www.rabbitmq.com/rabbitmqctl.8.html#force_boot
  ##
  forceBoot: false

## Loading a RabbitMQ definitions file to configure RabbitMQ
##
loadDefinition:
  enabled: true
  ## Can be templated if needed, e.g.
  ## existingSecret: "{{ .Release.Name }}-load-definition"
  ##
  existingSecret: rabbitmq-load-definitions

## Command and args for running the container (set to default if not set). Use array form
##
# command:
# args:

## Default duration in seconds k8s waits for container to exit before sending kill signal. Any time in excess of
## 10 seconds will be spent waiting for any synchronization necessary for cluster not to lose data.
##
terminationGracePeriodSeconds: 120

## Additional environment variables to set
## E.g:
## extraEnvVars:
##   - name: FOO
##     value: BAR
##
extraEnvVars: []

## ConfigMap with extra environment variables
##
# extraEnvVarsCM:

## Secret with extra environment variables
##
# extraEnvVarsSecret:

## Extra ports to be included in container spec, primarily informational
## E.g:
## extraContainerPorts:
##   - name: new_port_name
##     containerPort: 1234
##
extraContainerPorts: []

## Configuration file content: required cluster configuration
## Do not override unless you know what you are doing.
## To add more configuration, use `extraConfiguration` of `advancedConfiguration` instead
##
configuration: |-
  {{- if not .Values.loadDefinition.enabled -}}
  ## Username and password
  ##
  default_user = {{ .Values.auth.username }}
  default_pass = CHANGEME
  {{- end }}
  {{- if .Values.clustering.enabled }}
  ## Clustering
  ##
  cluster_formation.peer_discovery_backend = rabbit_peer_discovery_k8s
  cluster_formation.k8s.host = kubernetes.default.svc.{{ .Values.clusterDomain }}
  cluster_formation.node_cleanup.interval = 10
  cluster_formation.node_cleanup.only_log_warning = true
  cluster_partition_handling = autoheal
  {{- end }}
  # queue master locator
  queue_master_locator = min-masters
  # enable guest user
  loopback_users.guest = false
  {{ tpl .Values.extraConfiguration . }}
  {{- if .Values.auth.tls.enabled }}
  ssl_options.verify = {{ .Values.auth.tls.sslOptionsVerify }}
  listeners.ssl.default = {{ .Values.service.tlsPort }}
  ssl_options.fail_if_no_peer_cert = {{ .Values.auth.tls.failIfNoPeerCert }}
  ssl_options.cacertfile = /opt/bitnami/rabbitmq/certs/ca_certificate.pem
  ssl_options.certfile = /opt/bitnami/rabbitmq/certs/server_certificate.pem
  ssl_options.keyfile = /opt/bitnami/rabbitmq/certs/server_key.pem
  {{- end }}
  {{- if .Values.ldap.enabled }}
  auth_backends.1 = rabbit_auth_backend_ldap
  auth_backends.2 = internal
  {{- range $index, $server := .Values.ldap.servers }}
  auth_ldap.servers.{{ add $index 1 }} = {{ $server }}
  {{- end }}
  auth_ldap.port = {{ .Values.ldap.port }}
  auth_ldap.user_dn_pattern = {{ .Values.ldap.user_dn_pattern }}
  {{- if .Values.ldap.tls.enabled }}
  auth_ldap.use_ssl = true
  {{- end }}
  {{- end }}
  {{- if .Values.metrics.enabled }}
  ## Prometheus metrics
  ##
  prometheus.tcp.port = 9419
  {{- end }}
  {{- if .Values.memoryHighWatermark.enabled }}
  ## Memory Threshold
  ##
  total_memory_available_override_value = {{ include "rabbitmq.toBytes" .Values.resources.limits.memory }}
  vm_memory_high_watermark.{{ .Values.memoryHighWatermark.type }} = {{ .Values.memoryHighWatermark.value }}
  {{- end }}

## Configuration file content: extra configuration
## Use this instead of `configuration` to add more configuration
##
extraConfiguration: |-
  #default_vhost = {{ .Release.Namespace }}-vhost
  #disk_free_limit.absolute = 50MB
  load_definitions = /app/load_definition.json

## Configuration file content: advanced configuration
## Use this as additional configuration in classic config format (Erlang term configuration format)
##
## If you set LDAP with TLS/SSL enabled and you are using self-signed certificates, uncomment these lines.
## advancedConfiguration: |-
##   [{
##     rabbitmq_auth_backend_ldap,
##     [{
##         ssl_options,
##         [{
##             verify, verify_none
##         }, {
##             fail_if_no_peer_cert,
##             false
##         }]
##     ]}
##   }].
##
advancedConfiguration: |-

## LDAP configuration
##
ldap:
  enabled: false
  ## List of LDAP servers hostnames
  ##
  servers: []
  ## LDAP servers port
  ##
  port: '389'
  ## Pattern used to translate the provided username into a value to be used for the LDAP bind
  ## ref: https://www.rabbitmq.com/ldap.html#usernames-and-dns
  ##
  user_dn_pattern: cn=${username},dc=example,dc=org
  tls:
    ## If you enabled TLS/SSL you can set advaced options using the advancedConfiguration parameter.
    ##
    enabled: false

## extraVolumes and extraVolumeMounts allows you to mount other volumes
## Examples:
## extraVolumeMounts:
##   - name: extras
##     mountPath: /usr/share/extras
##     readOnly: true
## extraVolumes:
##   - name: extras
##     emptyDir: {}
##
#extraVolumeMounts:
#  - name: load-definitions-volume-test
#    mountPath: /app
#
#extraVolumes:
#  - name: load-definitions-volume-test
#    secret:
#      secretName: rabbitmq-load-definitions

## Optionally specify extra secrets to be created by the chart.
## This can be useful when combined with load_definitions to automatically create the secret containing the definitions to be loaded.
## Example:
## extraSecrets:
##   load-definition:
##     load_definition.json: |
##       {
##         ...
##       }
##
## Set this flag to true if extraSecrets should be created with prepended.
##
extraSecretsPrependReleaseName: false
extraSecrets: {}
#  rabbitmq-load-definitions:
#    load_definition.json: |
#      {
#        "permissions": [
#          {
#            "user": "user",
#            "vhost": "/",
#            "configure": ".*",
#            "write": ".*",
#            "read": ".*"
#          }
#        ],
#        "vhosts": [
#          {
#            "name": "/"
#          }
#        ]
#      }

## Number of RabbitMQ replicas to deploy
##
replicaCount: 3

## Use an alternate scheduler, e.g. "stork".
## ref: https://kubernetes.io/docs/tasks/administer-cluster/configure-multiple-schedulers/
##
# schedulerName:

## RabbitMQ should be initialized one by one when building cluster for the first time.
## Therefore, the default value of podManagementPolicy is 'OrderedReady'
## Once the RabbitMQ participates in the cluster, it waits for a response from another
## RabbitMQ in the same cluster at reboot, except the last RabbitMQ of the same cluster.
## If the cluster exits gracefully, you do not need to change the podManagementPolicy
## because the first RabbitMQ of the statefulset always will be last of the cluster.
## However if the last RabbitMQ of the cluster is not the first RabbitMQ due to a failure,
## you must change podManagementPolicy to 'Parallel'.
## ref : https://www.rabbitmq.com/clustering.html#restarting
##
podManagementPolicy: OrderedReady

## Pod labels. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
podLabels:
  k8s-app: rabbitmq-sai

## Pod annotations. Evaluated as a template
## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
##
podAnnotations: {}

## updateStrategy for RabbitMQ statefulset
## ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#update-strategies
##
updateStrategyType: RollingUpdate

## Statefulset labels. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/
##
statefulsetLabels: {}

## Name of the priority class to be used by RabbitMQ pods, priority class needs to be created beforehand
## Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/
##
priorityClassName: ''

## Pod affinity preset
## ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAffinityPreset: ""

## Pod anti-affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity
## Allowed values: soft, hard
##
podAntiAffinityPreset: "soft"

## Node affinity preset
## Ref: https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity
## Allowed values: soft, hard
##
nodeAffinityPreset:
  ## Node affinity type
  ## Allowed values: soft, hard
  ##
  type: ""
  ## Node label key to match
  ## E.g.
  ## key: "kubernetes.io/e2e-az-name"
  ##
  key: ""
  ## Node label values to match
  ## E.g.
  ## values:
  ##   - e2e-az1
  ##   - e2e-az2
  ##
  values: []

## Affinity for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
## Note: podAffinityPreset, podAntiAffinityPreset, and nodeAffinityPreset will be ignored when it's set
##
affinity: {}

## Node labels for pod assignment. Evaluated as a template
## ref: https://kubernetes.io/docs/user-guide/node-selection/
##
nodeSelector:
  node.kubernetes.io/role: "user"

## Tolerations for pod assignment. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/
##
tolerations: []

## Topology Spread Constraints for pod assignment spread across your cluster among failure-domains. Evaluated as a template
## Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/#spread-constraints-for-pods
##
topologySpreadConstraints: {}

## RabbitMQ pods' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-pod
##
podSecurityContext:
  enabled: true
  fsGroup: 1001
  runAsUser: 1001

## RabbitMQ containers' Security Context
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/#set-the-security-context-for-a-container
## Example:
## containerSecurityContext:
##   capabilities:
##     drop: ["NET_RAW"]
##   readOnlyRootFilesystem: true
##
containerSecurityContext: {}

## RabbitMQ containers' resource requests and limits
## ref: http://kubernetes.io/docs/user-guide/compute-resources/
##
resources:
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  limits:
    cpu: 1000m
    memory: 1Gi
  requests:
    cpu: 1000m
    memory: 500Mi

## RabbitMQ containers' liveness and readiness probes.
## ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#container-probes
##
livenessProbe:
  enabled: true
  initialDelaySeconds: 120
  timeoutSeconds: 20
  periodSeconds: 30
  failureThreshold: 6
  successThreshold: 1
readinessProbe:
  enabled: true
  initialDelaySeconds: 10
  timeoutSeconds: 20
  periodSeconds: 30
  failureThreshold: 3
  successThreshold: 1

## Custom Liveness probe
##
customLivenessProbe: {}

## Custom Rediness probe
##
customReadinessProbe: {}

## Custom Startup probe
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#define-startup-probes
##
customStartupProbe: {}

## Add init containers to the pod
## Example:
## initContainers:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
initContainers: {}

## Add sidecars to the pod.
## Example:
## sidecars:
##   - name: your-image-name
##     image: your-image
##     imagePullPolicy: Always
##     ports:
##       - name: portname
##         containerPort: 1234
##
sidecars: {}

## RabbitMQ pods ServiceAccount
## ref: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/
##
serviceAccount:
  ## Specifies whether a ServiceAccount should be created
  ##
  create: true
  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the rabbitmq.fullname template
  ##
  # name:

## Role Based Access
## ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
  ## Whether RBAC rules should be created
  ## binding RabbitMQ ServiceAccount to a role
  ## that allows RabbitMQ pods querying the K8s API
  ##
  create: true

persistence:
  ## this enables PVC templates that will create one per pod
  ##
  enabled: false
  ## rabbitmq data Persistent Volume Storage Class
  ## If defined, storageClassName:
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ## set, choosing the default provisioner. (gp2 on AWS, standard on
  ## GKE, AWS & OpenStack)
  ##
  storageClass: "gp2"
  ## selector can be used to match an existing PersistentVolume
  ## selector:
  ##   matchLabels:
  ##     app: my-app
  selector:
    matchLabels:
      type: rabbitmq-pvc
  accessMode: ReadWriteOnce
  ## Existing PersistentVolumeClaims
  ## The value is evaluated as a template
  ## So, for example, the name can depend on .Release or .Chart
  # existingClaim: ""
  ## If you change this value, you might have to adjust `rabbitmq.diskFreeLimit` as well.
  ##
  size: 8Gi
  volumes:
  #  - name: volume_name
  #    emptyDir: {}

## Pod Disruption Budget configuration
## ref: https://kubernetes.io/docs/tasks/run-application/configure-pdb/
##
pdb:
  create: false
  ## Min number of pods that must still be available after the eviction
  ##
  minAvailable: 1
  ## Max number of pods that can be unavailable after the eviction
  ##
  # maxUnavailable: 1

## Network Policy configuration
## ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/
##
networkPolicy:
  ## Enable creation of NetworkPolicy resources
  ##
  enabled: false
  ## The Policy model to apply. When set to false, only pods with the correct
  ## client label will have network access to the ports RabbitMQ is listening
  ## on. When true, RabbitMQ will accept connections from any source
  ## (with the correct destination port).
  ##
  allowExternal: true
  ## Additional NetworkPolicy Ingress "from" rules to set. Note that all rules are OR-ed.
  ##
  # additionalRules:
  #  - matchLabels:
  #    - role: frontend
  #  - matchExpressions:
  #    - key: role
  #      operator: In
  #      values:
  #        - frontend

## Kubernetes service type
##
service:
  type: ClusterIP
  ## Amqp port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  port: 5672
  ## Amqp service port name
  ##
  portName: amqp
  ## Amqp Tls port
  ##
  tlsPort: 5671
  ## Amqp Tls service port name
  ##
  tlsPortName: amqp-ssl
  ## Node port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  # nodePort: 30672
  ## Node port Tls
  ##
  # tlsNodePort: 30671
  ## Dist port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  distPort: 25672
  ## Dist service port name
  ##
  distPortName: dist
  ## Node port (Manager)
  ##
  # distNodePort: 30676
  ## RabbitMQ Manager port
  ## ref: https://github.com/bitnami/bitnami-docker-rabbitmq#environment-variables
  ##
  managerPortEnabled: true
  managerPort: 15672
  ## RabbitMQ Manager service port name
  ##
  managerPortName: http-stats
  ## Node port (Manager)
  ##
  # managerNodePort: 30673
  ## RabbitMQ Prometheues metrics port
  ##
  metricsPort: 9419
  ## RabbitMQ Prometheues metrics service port name
  ##
  metricsPortName: metrics
  ## Node port for metrics
  ##
  # metricsNodePort: 30674
  ## Node port for EPMD Discovery
  ##
  # epmdNodePort: 30675
  ## Service port name for EPMD Discovery
  ##
  epmdPortName: epmd
  ## Extra ports to expose
  ## E.g.:
  ## extraPorts:
  ##   - name: new_svc_name
  ##     port: 1234
  ##     targetPort: 1234
  ##
  extraPorts: []
  ## Load Balancer sources
  ## https://kubernetes.io/docs/tasks/access-application-cluster/configure-cloud-provider-firewall/#restrict-access-for-loadbalancer-service
  ##
  # loadBalancerSourceRanges:
  #   - 10.10.10.0/24
  ## Set the ExternalIPs
  ##
  # externalIPs:
  ## Enable client source IP preservation
  ## ref http://kubernetes.io/docs/tasks/access-application-cluster/create-external-load-balancer/#preserving-the-client-source-ip
  ##
  externalTrafficPolicy: Cluster
  ## Set the LoadBalancerIP
  ##
  # loadBalancerIP:
  ## Service labels. Evaluated as a template
  ##
  labels: {}
  ## Service annotations. Evaluated as a template
  ## Example:
  ## annotations:
  ##   service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  ##
  annotations: {}
  ## Headless Service annotations. Evaluated as a template
  ## Example:
  ## annotations:
  ##   external-dns.alpha.kubernetes.io/internal-hostname: rabbitmq.example.com
  ##
  annotationsHeadless: {}

## Configure the ingress resource that allows you to access the
## RabbitMQ installation. Set up the URL
## ref: http://kubernetes.io/docs/user-guide/ingress/
##
ingress:
  ## Set to true to enable ingress record generation
  ##
  enabled: true
  ## Path for the default host. You may need to set this to '/*' in order to use this
  ## with ALB ingress controllers.
  ##
  path: /
  ## Ingress Path type
  ##
  pathType: ImplementationSpecific
  ## Set this to true in order to add the corresponding annotations for cert-manager
  ##
  certManager: false
  ## When the ingress is enabled, a host pointing to this will be created
  ##
  hostname: rabbitmq.local
  ## Ingress annotations done as key:value pairs
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md
  ##
  ## If certManager is set to true, annotation kubernetes.io/tls-acme: "true" will automatically be set
  ##
  annotations:
    kubernetes.io/ingress.class: nginx
  ## Enable TLS configuration for the hostname defined at ingress.hostname parameter
  ## TLS certificates will be retrieved from a TLS secret with name: {{- printf "%s-tls" .Values.ingress.hostname }}
  ## or a custom one if you use the tls.existingSecret parameter
  ## You can use the ingress.secrets parameter to create this TLS secret or relay on cert-manager to create it
  ##
  tls: false
  ## existingSecret: name-of-existing-secret
  ##
  ## The list of additional hostnames to be covered with this ingress record.
  ## Most likely the hostname above will be enough, but in the event more hosts are needed, this is an array
  ## extraHosts:
  ##   - name: rabbitmq.local
  ##     path: /
  ##
  ## The tls configuration for additional hostnames to be covered with this ingress record.
  ## see: https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
  ## extraTls:
  ##   - hosts:
  ##       - rabbitmq.local
  ##     secretName: rabbitmq.local-tls
  ##
  ## If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
  ## -----BEGIN RSA PRIVATE KEY-----
  ##
  ## name should line up with a tlsSecret set further up
  ## If you're using cert-manager, this is unneeded, as it will create the secret for you if it is not set
  ##
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  ##
  secrets: []
  ## - name: rabbitmq.local-tls
  ##   key:
  ##   certificate:
  ##

## Prometheus Metrics
##
metrics:
  enabled: true
  plugins: 'rabbitmq_prometheus'
  ## Prometheus pod annotations
  ## ref: https://kubernetes.io/docs/concepts/overview/working-with-objects/annotations/
  ##
  podAnnotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '{{ .Values.service.metricsPort }}'
  ## Prometheus Service Monitor
  ## ref: https://github.com/coreos/prometheus-operator
  ##
  serviceMonitor:
    ## If the operator is installed in your cluster, set to true to create a Service Monitor Entry
    ##
    enabled: true
    ## Specify the namespace in which the serviceMonitor resource will be created
    ##
    namespace: "monitoring"
    ## Specify the interval at which metrics should be scraped
    ##
    interval: 30s
    ## Specify the timeout after which the scrape is ended
    ##
    # scrapeTimeout: 30s
    ## Specify Metric Relabellings to add to the scrape endpoint
    ##
    # relabellings:
    ## Specify honorLabels parameter to add the scrape endpoint
    ##
    honorLabels: false
    ## Used to pass Labels that are used by the Prometheus installed in your cluster to select Service Monitors to work with
    ## ref: https://github.com/coreos/prometheus-operator/blob/master/Documentation/api.md#prometheusspec
    ##
    additionalLabels:
      release: kube-prometheus-stack
  ## Custom PrometheusRule to be defined
  ## The value is evaluated as a template, so, for example, the value can depend on .Release or .Chart
  ## ref: https://github.com/coreos/prometheus-operator#customresourcedefinitions
  ##
  prometheusRule:
    enabled: true
    additionalLabels:
      release: kube-prometheus-stack
    namespace: 'monitoring'
    ## List of rules, used as template by Helm.
    ## These are just examples rules inspired from https://awesome-prometheus-alerts.grep.to/rules.html
    rules:
      - alert: RabbitmqDown
        expr: rabbitmq_up{service="{{ template "rabbitmq.fullname" . }}"} == 0
        for: 5m
        labels:
          severity: error
        annotations:
          summary: Rabbitmq down (instance {{ "{{ $labels.instance }}" }})
          description: RabbitMQ node down
      - alert: ClusterDown
        expr: |
          sum(rabbitmq_running{service="{{ template "rabbitmq.fullname" . }}"})
          < {{ .Values.replicaCount }}
        for: 5m
        labels:
          severity: error
        annotations:
          summary: Cluster down (instance {{ "{{ $labels.instance }}" }})
          description: |
            Less than {{ .Values.replicaCount }} nodes running in RabbitMQ cluster
            VALUE = {{ "{{ $value }}" }}
      - alert: ClusterPartition
        expr: rabbitmq_partitions{service="{{ template "rabbitmq.fullname" . }}"} > 0
        for: 5m
        labels:
          severity: error
        annotations:
          summary: Cluster partition (instance {{ "{{ $labels.instance }}" }})
          description: |
            Cluster partition
            VALUE = {{ "{{ $value }}" }}
      - alert: OutOfMemory
        expr: |
          rabbitmq_node_mem_used{service="{{ template "rabbitmq.fullname" . }}"}
          / rabbitmq_node_mem_limit{service="{{ template "rabbitmq.fullname" . }}"}
          * 100 > 90
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: Out of memory (instance {{ "{{ $labels.instance }}" }})
          description: |
            Memory available for RabbmitMQ is low (< 10%)\n  VALUE = {{ "{{ $value }}" }}
            LABELS: {{ "{{ $labels }}" }}
      - alert: TooManyConnections
        expr: rabbitmq_connectionsTotal{service="{{ template "rabbitmq.fullname" . }}"} > 1000
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: Too many connections (instance {{ "{{ $labels.instance }}" }})
          description: |
            RabbitMQ instance has too many connections (> 1000)
            VALUE = {{ "{{ $value }}" }}\n  LABELS: {{ "{{ $labels }}" }}
    #rules: []

## Init Container parameters
## Change the owner and group of the persistent volume(s) mountpoint(s) to 'runAsUser:fsGroup' on each component
## values from the securityContext section of the component
##
volumePermissions:
  enabled: false
  image:
    registry: docker.io
    repository: bitnami/shell
    tag: "10"
    ## Specify a imagePullPolicy
    ## Defaults to 'Always' if image tag is 'latest', else set to 'IfNotPresent'
    ## ref: http://kubernetes.io/docs/user-guide/images/#pre-pulling-images
    ##
    pullPolicy: IfNotPresent
    ## Optionally specify an array of imagePullSecrets (secrets must be manually created in the namespace)
    ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
    ## Example:
    ## pullSecrets:
    ##   - myRegistryKeySecretName
    ##
    pullSecrets:
      - docker-pull-secrets
  ## Init Container resource requests and limits
  ## ref: http://kubernetes.io/docs/user-guide/compute-resources/
  ##
  resources:
    # We usually recommend not to specify default resources and to leave this as a conscious
    # choice for the user. This also increases chances charts run on environments with little
    # resources, such as Minikube. If you do want to specify resources, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
    limits: {}
    #   cpu: 100m
    #   memory: 128Mi
    requests: {}
    #   cpu: 100m
    #   memory: 128Mi
```
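For completeness, the two pre-existing secrets that the values above reference (`rabbitmq-secrets` and `rabbitmq-load-definitions`) have to be created before installing the chart. A sketch of the manifests with placeholder values follows; the key names (`rabbitmq-password`, `rabbitmq-erlang-cookie`, `load_definition.json`) are what the chart's templates look up by default, so verify them against your chart version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-secrets
type: Opaque
stringData:
  # Placeholder; must match the password of "user" in load_definition.json,
  # otherwise clients authenticating with the chart-managed password will fail.
  rabbitmq-password: "xxxxxxxxxxx"
  rabbitmq-erlang-cookie: "a-long-random-string"
---
apiVersion: v1
kind: Secret
metadata:
  name: rabbitmq-load-definitions
type: Opaque
stringData:
  load_definition.json: |
    {
      "vhosts": [{"name": "/"}],
      "users": [
        {"name": "user", "password": "xxxxxxxxxxx", "tags": "administrator"}
      ],
      "policies": [
        {"name": "ha-all", "pattern": ".*\\..*", "vhost": "/", "definition": {"ha-mode": "all"}}
      ],
      "permissions": [
        {"user": "user", "vhost": "/", "configure": ".*", "write": ".*", "read": ".*"}
      ]
    }
```

Keeping the definitions file and the password secret in sync is the crux of the original problem: with `loadDefinition.enabled`, RabbitMQ takes the user credentials from the definitions file, so the password stored in `rabbitmq-secrets` only stays valid across pod restarts if both secrets carry the same value.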
javsalgar commented 3 years ago

Awesome! Thanks for sharing!

github-actions[bot] commented 3 years ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 3 years ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.