helm / charts

⚠️(OBSOLETE) Curated applications for Kubernetes

[stable/rabbitmq-ha] "HTTP access denied: user ‘guest’ - invalid credentials" after adding PV functionality #10065

Closed: hjartardottir closed this issue 5 years ago

hjartardottir commented 5 years ago

Is this a request for help?: Request for help

Is this a BUG REPORT or FEATURE REQUEST? (choose one): Bug report

Version of Helm and Kubernetes: Kubernetes 1.10.4 Helm 2.11.0

Which chart: stable/rabbitmq-ha

What happened: Upgraded the rabbitmq-ha chart to use the PV option on a running cluster where we weren't using PVs before. After deleting the pods, deleting the statefulset, and upgrading the Helm chart, the rabbitmq-ha pods error out with: HTTP access denied: user 'guest' - invalid credentials / PLAIN login refused: user 'guest' - invalid credentials

Tried using the RabbitMQ CLI to set the password for the guest user manually (roughly as below), but no luck.
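Along these lines (pod name from the logs further down; the new password is a placeholder):

# exec into one of the running pods and reset the guest password
kubectl exec -it nfsaas-rabbitmq-0 -- rabbitmqctl change_password guest <new-password>
# confirm the user is actually there
kubectl exec -it nfsaas-rabbitmq-0 -- rabbitmqctl list_users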

What you expected to happen: Pods should be running using correct password.

How to reproduce it (as minimally and precisely as possible): Install the rabbitmq-ha Helm chart without the PV option. After the cluster has started, delete the RabbitMQ pods and the statefulset, then upgrade the Helm chart, this time using PVs (sketched below).
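In Helm 2 terms the reproduction amounts to roughly the following (a sketch only; the release name, resource names, and label selector are placeholders that depend on the chart's naming templates):

# 1. install without persistence
helm install stable/rabbitmq-ha --name my-rabbit --set persistentVolume.enabled=false

# 2. once the cluster is up, delete the statefulset and its pods
kubectl delete statefulset my-rabbit-rabbitmq-ha
kubectl delete pod -l release=my-rabbit

# 3. upgrade the same release with persistence enabled
helm upgrade my-rabbit stable/rabbitmq-ha --set persistentVolume.enabled=true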

Anything else we need to know:

steven-sheehy commented 5 years ago

Please provide your custom values.yaml, chart version and logs.

hjartardottir commented 5 years ago

@steven-sheehy We're using a slightly customized version of the stable/rabbitmq-ha Helm chart from July 4.

values.yaml:


## RabbitMQ application credentials
## Ref: http://rabbitmq.com/access-control.html
##
# rabbitmqPassword:

## RabbitMQ default VirtualHost
## Ref: https://www.rabbitmq.com/vhosts.html
##
rabbitmqVhost: "/"

## Erlang cookie to determine whether different nodes are allowed to communicate with each other
## Ref: https://www.rabbitmq.com/clustering.html
##
# rabbitmqErlangCookie:

## RabbitMQ Memory high watermark
## Ref: http://www.rabbitmq.com/memory.html
##
rabbitmqMemoryHighWatermark: 256MB

## EPMD port for peer discovery service used by RabbitMQ nodes and CLI tools
## Ref: https://www.rabbitmq.com/clustering.html
##
rabbitmqEpmdPort: 4369

## Node port
rabbitmqNodePort: 5672

## Manager port
rabbitmqManagerPort: 15672

## Set to true to precompile parts of RabbitMQ with HiPE, a just-in-time
## compiler for Erlang. This will increase server throughput at the cost of
## increased startup time. You might see 20-50% better performance at the cost
## of a few minutes delay at startup.
rabbitmqHipeCompile: false

## SSL certificates
## Ref: http://www.rabbitmq.com/ssl.html
rabbitmqCert:
  enabled: false

  ## Specify an existing secret to use
  existingSecret: |

  ## Create a new secret using these values
  cacertfile: |
  certfile: |
  keyfile: |

## Authentication mechanism
## Ref: http://www.rabbitmq.com/authentication.html
rabbitmqAuth:
  enabled: true

  config: |
    auth_mechanisms.1 = PLAIN
    # auth_mechanisms.2 = AMQPLAIN
    # auth_mechanisms.3 = EXTERNAL

## LDAP Plugin
## Ref: http://www.rabbitmq.com/ldap.html
rabbitmqLDAPPlugin:
  enabled: false

  ## LDAP configuration:
  config: |
    # auth_backends.1 = ldap
    # auth_ldap.servers.1  = my-ldap-server
    # auth_ldap.user_dn_pattern = cn=${username},ou=People,dc=example,dc=com
    # auth_ldap.use_ssl    = false
    # auth_ldap.port       = 389
    # auth_ldap.log        = false

## MQTT Plugin
## Ref: http://www.rabbitmq.com/mqtt.html
rabbitmqMQTTPlugin:
  enabled: false

  ## MQTT configuration:
  config: |
    # mqtt.default_user     = guest
    # mqtt.default_pass     = guest
    # mqtt.allow_anonymous  = true

## Web MQTT Plugin
## Ref: http://www.rabbitmq.com/web-mqtt.html
rabbitmqWebMQTTPlugin:
  enabled: false

  ## Web MQTT configuration:
  config: |
    # web_mqtt.ssl.port       = 12345
    # web_mqtt.ssl.backlog    = 1024
    # web_mqtt.ssl.certfile   = /etc/cert/cacert.pem
    # web_mqtt.ssl.keyfile    = /etc/cert/cert.pem
    # web_mqtt.ssl.cacertfile = /etc/cert/key.pem
    # web_mqtt.ssl.password   = changeme

## STOMP Plugin
## Ref: http://www.rabbitmq.com/stomp.html
rabbitmqSTOMPPlugin:
  enabled: false

  ## STOMP configuration:
  config: |
    # stomp.default_user = guest
    # stomp.default_pass = guest

## Web STOMP Plugin
## Ref: http://www.rabbitmq.com/web-stomp.html
rabbitmqWebSTOMPPlugin:
  enabled: false

  ## Web STOMP configuration:
  config: |
    # web_stomp.ws_frame = binary
    # web_stomp.cowboy_opts.max_keepalive = 10

## AMQPS support
## Ref: http://www.rabbitmq.com/ssl.html
rabbitmqAmqpsSupport:
  enabled: false

  # NodePort
  amqpsNodePort: 5671

  # SSL configuration
  config: |
    # listeners.ssl.default             = 5671
    # ssl_options.cacertfile            = /etc/cert/cacert.pem
    # ssl_options.certfile              = /etc/cert/cert.pem
    # ssl_options.keyfile               = /etc/cert/key.pem
    # ssl_options.verify                = verify_peer
    # ssl_options.fail_if_no_peer_cert  = false

## Number of replicas
replicaCount: 3

image:
  repository: bitnami/rabbitmq
  tag: 3.7-alpine-0.0.6
  pullPolicy: IfNotPresent

## Duration in seconds the pod needs to terminate gracefully
terminationGracePeriodSeconds: 10

service:
  annotations: {}
  clusterIP: None

  ## List of IP addresses at which the service is available
  ## Ref: https://kubernetes.io/docs/user-guide/services/#external-ips
  ##
  externalIPs: []

  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  type: ClusterIP

## StatefulSet rolling update strategy
## Ref: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#rolling-update
##
updateStrategy: RollingUpdate

## We usually recommend not to specify default resources and to leave this as
## a conscious choice for the user. This also increases chances charts run on
## environments with little resources, such as Minikube. If you do want to
## specify resources, uncomment the following lines, adjust them as necessary,
## and remove the curly braces after 'resources:'.
## If you decide to set the memory limit, make sure to also change the
## rabbitmqMemoryHighWatermark following the formula:
##   rabbitmqMemoryHighWatermark = 0.4 * resources.limits.memory
##
resources: {}
# limits:
#  cpu: 100m
#  memory: 1Gi
# requests:
#  cpu: 100m
#  memory: 1Gi
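## Worked example (illustration only, not part of our config): with
## resources.limits.memory = 1Gi, the watermark should be roughly
## 0.4 * 1024MB ≈ 410MB.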

## Data Persistency
persistentVolume:
  enabled: true
  ## If defined, storageClassName: <storageClass>
  ## If set to "-", storageClassName: "", which disables dynamic provisioning
  ## If undefined (the default) or set to null, no storageClassName spec is
  ##   set, choosing the default provisioner.  (gp2 on AWS, standard on
  ##   GKE, AWS & OpenStack)
  ##
  # storageClass: "-"
  name: data
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  annotations: {}

## Node labels for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
##
nodeSelector: {}

## Node tolerations for pod assignment
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#taints-and-tolerations-beta-feature
##
tolerations: []

## Pod affinity
## Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity
podAntiAffinity: soft

## Create default configMap
##
customConfigMap: false

## Add additional labels to all resources
##
extraLabels: {}

## Create default Secret
##
customSecret: false

## Role Based Access
## Ref: https://kubernetes.io/docs/admin/authorization/rbac/
##
rbac:
  create: true

## Service Account
## Ref: https://kubernetes.io/docs/admin/service-accounts-admin/
##
serviceAccount:
  create: true

  ## The name of the ServiceAccount to use.
  ## If not set and create is true, a name is generated using the fullname template
  # name:

ingress:
  ## Set to true to enable ingress record generation
  enabled: false

  path: /

  ## The list of hostnames to be covered with this ingress record.
  ## Most likely this will be just one host, but in the event more hosts are needed, this is an array
  ## hostName: foo.bar.com

  ## Set this to true in order to enable TLS on the ingress record
  tls: false

  ## If TLS is set to true, you must declare what secret will store the key/certificate for TLS
  tlsSecret: myTlsSecret

  ## Ingress annotations done as key:value pairs
  annotations:
  #  kubernetes.io/ingress.class: nginx

livenessProbe:
  initialDelaySeconds: 120
  timeoutSeconds: 5
  failureThreshold: 6

readinessProbe:
  initialDelaySeconds: 10
  timeoutSeconds: 3
  periodSeconds: 5

logs:

2018-12-17 13:22:43.208 [info] <0.33.0> Application lager started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.355 [info] <0.33.0> Application xmerl started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.356 [info] <0.33.0> Application os_mon started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.356 [info] <0.33.0> Application jsx started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.357 [info] <0.33.0> Application inets started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.492 [info] <0.33.0> Application mnesia started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.492 [info] <0.33.0> Application amqp10_common started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.493 [info] <0.33.0> Application crypto started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.493 [info] <0.33.0> Application cowlib started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.493 [info] <0.33.0> Application asn1 started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.493 [info] <0.33.0> Application public_key started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.494 [info] <0.33.0> Application ssl started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.499 [info] <0.33.0> Application amqp10_client started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.500 [info] <0.33.0> Application ranch started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.504 [info] <0.33.0> Application cowboy started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.504 [info] <0.33.0> Application ranch_proxy_protocol started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.504 [info] <0.33.0> Application recon started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.504 [info] <0.33.0> Application rabbit_common started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.508 [info] <0.33.0> Application amqp_client started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:22:46.508 [info] <0.238.0>
 Starting RabbitMQ 3.7.3 on Erlang 20.1.7
 Copyright (C) 2007-2018 Pivotal Software, Inc.
 Licensed under the MPL.  See http://www.rabbitmq.com/

  ##  ##
  ##  ##      RabbitMQ 3.7.3. Copyright (C) 2007-2018 Pivotal Software, Inc.
  ##########  Licensed under the MPL.  See http://www.rabbitmq.com/
  ######  ##
  ##########  Logs: <stdout>

              Starting broker...
2018-12-17 13:22:46.509 [info] <0.238.0>
 node           : rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local
 home dir       : /var/lib/rabbitmq
 config file(s) : /etc/rabbitmq/rabbitmq.conf
 cookie hash    : +x9aVCiMKRbPJIkjBXi/JQ==
 log(s)         : <stdout>
 database dir   : /var/lib/rabbitmq/mnesia/rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local
2018-12-17 13:22:46.925 [info] <0.244.0> Memory high watermark set to 244 MiB (256000000 bytes) of 32168 MiB (33731403776 bytes) total
2018-12-17 13:22:46.930 [info] <0.246.0> Enabling free disk space monitoring
2018-12-17 13:22:46.930 [info] <0.246.0> Disk free limit set to 50MB
2018-12-17 13:22:46.934 [info] <0.248.0> Limiting to approx 1048476 file handles (943626 sockets)
2018-12-17 13:22:46.934 [info] <0.249.0> FHC read buffering:  OFF
2018-12-17 13:22:46.934 [info] <0.249.0> FHC write buffering: ON
2018-12-17 13:22:46.952 [info] <0.238.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2018-12-17 13:23:16.953 [warning] <0.238.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,[rabbit_user,rabbit_user_permission,rabbit_topic_permission,rabbit_vhost,rabbit_durable_route,rabbit_durable_exchange,rabbit_runtime_parameters,rabbit_durable_queue]}
2018-12-17 13:23:16.953 [info] <0.238.0> Waiting for Mnesia tables for 30000 ms, 8 retries left
2018-12-17 13:23:46.954 [warning] <0.238.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,[rabbit_user,rabbit_user_permission,rabbit_topic_permission,rabbit_vhost,rabbit_durable_route,rabbit_durable_exchange,rabbit_runtime_parameters,rabbit_durable_queue]}
2018-12-17 13:23:46.954 [info] <0.238.0> Waiting for Mnesia tables for 30000 ms, 7 retries left
2018-12-17 13:24:16.955 [warning] <0.238.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,[rabbit_user,rabbit_user_permission,rabbit_topic_permission,rabbit_vhost,rabbit_durable_route,rabbit_durable_exchange,rabbit_runtime_parameters,rabbit_durable_queue]}
2018-12-17 13:24:16.955 [info] <0.238.0> Waiting for Mnesia tables for 30000 ms, 6 retries left
2018-12-17 13:24:46.956 [warning] <0.238.0> Error while waiting for Mnesia tables: {timeout_waiting_for_tables,[rabbit_user,rabbit_user_permission,rabbit_topic_permission,rabbit_vhost,rabbit_durable_route,rabbit_durable_exchange,rabbit_runtime_parameters,rabbit_durable_queue]}
2018-12-17 13:24:46.956 [info] <0.238.0> Waiting for Mnesia tables for 30000 ms, 5 retries left
2018-12-17 13:24:58.918 [info] <0.238.0> Waiting for Mnesia tables for 30000 ms, 9 retries left
2018-12-17 13:24:58.918 [info] <0.238.0> Peer discovery backend rabbit_peer_discovery_k8s does not support registration, skipping registration.
2018-12-17 13:24:58.937 [info] <0.238.0> Priority queues enabled, real BQ is rabbit_variable_queue
2018-12-17 13:24:58.979 [info] <0.501.0> Starting rabbit_node_monitor
2018-12-17 13:24:59.074 [info] <0.238.0> Management plugin: using rates mode 'basic'
2018-12-17 13:24:59.075 [info] <0.535.0> Making sure data directory '/var/lib/rabbitmq/mnesia/rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local/msg_stores/vhosts/628WB79CIFDYO9LJI6DKMI09L' for vhost '/' exists
2018-12-17 13:24:59.091 [info] <0.535.0> Starting message stores for vhost '/'
2018-12-17 13:24:59.091 [info] <0.539.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_transient": using rabbit_msg_store_ets_index to provide index
2018-12-17 13:24:59.094 [info] <0.535.0> Started message store of type transient for vhost '/'
2018-12-17 13:24:59.095 [info] <0.542.0> Message store "628WB79CIFDYO9LJI6DKMI09L/msg_store_persistent": using rabbit_msg_store_ets_index to provide index
2018-12-17 13:24:59.121 [info] <0.535.0> Started message store of type persistent for vhost '/'
2018-12-17 13:24:59.127 [info] <0.577.0> started TCP Listener on [::]:5672
2018-12-17 13:24:59.129 [info] <0.501.0> rabbit on node 'rabbit@nfsaas-rabbitmq-1.nfsaas-rabbitmq.default.svc.cluster.local' up
2018-12-17 13:24:59.138 [info] <0.238.0> Setting up a table for connection tracking on this node: 'tracked_connection_on_node_rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.140 [info] <0.238.0> Setting up a table for per-vhost connection counting on this node: 'tracked_connection_per_vhost_on_node_rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.140 [info] <0.33.0> Application rabbit started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.141 [info] <0.585.0> Peer discovery: enabling node cleanup (will only log warnings). Check interval: 10 seconds.
2018-12-17 13:24:59.141 [info] <0.33.0> Application rabbitmq_peer_discovery_common started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.141 [info] <0.33.0> Application rabbitmq_peer_discovery_k8s started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.141 [info] <0.33.0> Application rabbitmq_consistent_hash_exchange started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.141 [info] <0.33.0> Application rabbitmq_federation started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.172 [info] <0.501.0> rabbit on node 'rabbit@nfsaas-rabbitmq-2.nfsaas-rabbitmq.default.svc.cluster.local' up
2018-12-17 13:24:59.222 [info] <0.501.0> rabbit on node 'rabbit@nfsaas-rabbitmq-1.nfsaas-rabbitmq.default.svc.cluster.local' up
2018-12-17 13:24:59.249 [info] <0.33.0> Application rabbitmq_shovel started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.288 [info] <0.33.0> Application rabbitmq_management_agent started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.288 [info] <0.33.0> Application rabbitmq_amqp1_0 started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.289 [info] <0.33.0> Application rabbitmq_web_dispatch started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.335 [info] <0.650.0> Management plugin started. Port: 15672
2018-12-17 13:24:59.335 [info] <0.756.0> Statistics database started.
2018-12-17 13:24:59.335 [info] <0.33.0> Application rabbitmq_management started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.335 [info] <0.33.0> Application rabbitmq_shovel_management started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.336 [info] <0.33.0> Application rabbitmq_federation_management started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.336 [info] <0.33.0> Application rabbitmq_auth_mechanism_ssl started on node 'rabbit@nfsaas-rabbitmq-0.nfsaas-rabbitmq.default.svc.cluster.local'
2018-12-17 13:24:59.656 [info] <0.5.0> Server startup complete; 12 plugins started.
 * rabbitmq_auth_mechanism_ssl
 * rabbitmq_federation_management
 * rabbitmq_shovel_management
 * rabbitmq_management
 * rabbitmq_web_dispatch
 * rabbitmq_amqp1_0
 * rabbitmq_management_agent
 * rabbitmq_shovel
 * rabbitmq_federation
 * rabbitmq_consistent_hash_exchange
 * rabbitmq_peer_discovery_k8s
 * rabbitmq_peer_discovery_common
 completed with 12 plugins.
2018-12-17 13:25:02.490 [warning] <0.772.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:25:02.507 [warning] <0.774.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:25:20.570 [info] <0.501.0> rabbit on node 'rabbit@nfsaas-rabbitmq-2.nfsaas-rabbitmq.default.svc.cluster.local' up
2018-12-17 13:25:27.050 [warning] <0.809.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:25:27.067 [warning] <0.811.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:25:32.494 [warning] <0.822.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:25:57.051 [warning] <0.859.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:25:57.065 [warning] <0.861.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:26:02.488 [warning] <0.872.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:26:27.086 [warning] <0.904.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:26:32.496 [warning] <0.915.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:26:57.065 [warning] <0.948.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:27:02.485 [warning] <0.959.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:27:02.506 [warning] <0.961.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:27:27.061 [warning] <0.993.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:27:32.481 [warning] <0.1004.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:27:32.506 [warning] <0.1006.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:27:57.058 [warning] <0.1039.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:28:02.499 [warning] <0.1050.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:28:27.057 [warning] <0.1082.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:28:32.482 [warning] <0.1093.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:28:32.504 [warning] <0.1095.0> HTTP access denied: user 'guest' - invalid credentials
2018-12-17 13:28:57.058 [warning] <0.1137.0> HTTP access denied: user 'guest' - invalid credentials
steven-sheehy commented 5 years ago

I can't tell what you changed from the defaults in that values.yaml. Please provide only your custom values.yaml; you can get it from helm get values [release-name]. Also, that chart version is very old and customized. Please try upgrading to the latest version with no customizations to verify you still have the issue.
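For example (Helm 2; release name is a placeholder):

# only the user-supplied overrides for the release
helm get values my-rabbit
# or all computed values, including chart defaults
helm get values my-rabbit --all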

kchugalinskiy commented 5 years ago

The problem is not related to configuration. It seems the environment variable should be renamed from RABBIT_USER to RABBIT_DEFAULT_USER, and the same applies to the password variable. The root of the problem seems to be bad rabbitmq Docker image tagging, which is tied to the OS version instead of the RabbitMQ version.

PS: my fault, it's not a fix
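Either way, a quick way to check which credential-related environment variables the container actually receives (generic kubectl; pod name taken from the logs above, secret name is a placeholder):

# list RabbitMQ-related env vars inside the container
kubectl exec nfsaas-rabbitmq-0 -- env | grep -i rabbitmq
# and compare against what the chart's secret contains
kubectl get secret my-rabbit-rabbitmq-ha -o yaml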

stale[bot] commented 5 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.

stale[bot] commented 5 years ago

This issue is being automatically closed due to inactivity.

tschirmer commented 4 years ago

Getting this too.

tschirmer commented 4 years ago

Found in my case that Sysdig was trying to fetch metric data directly, so this wasn't a problem with Helm. I was able to set up the metrics username and password with this config: https://docs.sysdig.com/en/rabbitmq.html