
[bitnami/rabbitmq] How do I obtain the root password for the rabbitmq image? #13017

Closed · myysophia closed this issue 1 year ago

myysophia commented 1 year ago

Name and Version

bitnami/rabbitmq:3.10.8-debian-11-r4

What steps will reproduce the bug?

My question: I need to modify /etc/profile inside the container, but the current user's permissions are insufficient. (screenshot of my attempt attached)

How do I change the root password? Could you tell me if you know? Is this safe? Thank you.

Are you using any custom parameters or values?

1

What is the expected behavior?

2

What do you see instead?

1

Additional information

No response

myysophia commented 1 year ago

I configured this in values.yaml:

```
podSecurityContext:
  enabled: true
  fsGroup: 0 # default 1001
containerSecurityContext:
  enabled: true
  runAsUser: 0
  runAsNonRoot: false
```
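For reference, overrides like these are applied with an ordinary Helm upgrade; the release name `chot-rabbitmq` and namespace `rabbitmq` in this sketch are inferred from the node names in the error output below and may differ in your environment:

```
helm upgrade --install chot-rabbitmq bitnami/rabbitmq \
  --namespace rabbitmq -f values.yaml
```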

The pod then fails to start with this error:

```
Readiness probe failed: Error: unable to perform an operation on node 'rabbit@chot-rabbitmq-0.chot-rabbitmq-headless.rabbitmq.svc.cluster.local'. Please see diagnostics information and suggestions below.

Most common reasons for this are:

 * Target node is unreachable (e.g. due to hostname resolution, TCP connection or firewall issues)
 * CLI tool fails to authenticate with the server (e.g. due to CLI tool's Erlang cookie not matching that of the server)
 * Target node is not running

In addition to the diagnostics info below:

 * See the CLI, clustering and networking guides on https://rabbitmq.com/documentation.html to learn more
 * Consult server logs on node rabbit@chot-rabbitmq-0.chot-rabbitmq-headless.rabbitmq.svc.cluster.local
 * If target node is configured to use long node names, don't forget to use --longnames with CLI tools

DIAGNOSTICS
===========

attempted to contact: ['rabbit@chot-rabbitmq-0.chot-rabbitmq-headless.rabbitmq.svc.cluster.local']

rabbit@chot-rabbitmq-0.chot-rabbitmq-headless.rabbitmq.svc.cluster.local:
 * connected to epmd (port 4369) on chot-rabbitmq-0.chot-rabbitmq-headless.rabbitmq.svc.cluster.local
 * epmd reports: node 'rabbit' not running at all
                 no other nodes on chot-rabbitmq-0.chot-rabbitmq-headless.rabbitmq.svc.cluster.local
 * suggestion: start the node

Current node details:
 * node name: 'rabbitmqcli-216-rabbit@chot-rabbitmq-0.chot-rabbitmq-headless.rabbitmq.svc.cluster.local'
 * effective user's home directory: /opt/bitnami/rabbitmq/.rabbitmq
 * Erlang cookie hash: 5w3a
```
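The probe output's own suggestion, consulting the server logs on the node, translates in Kubernetes to something like the following (pod and namespace names taken from the node name above):

```
kubectl logs chot-rabbitmq-0 --namespace rabbitmq
kubectl describe pod chot-rabbitmq-0 --namespace rabbitmq   # events often show why probes fail
```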
javsalgar commented 1 year ago

Hi,

Could you add more details about your case? What is the reason for modifying the /etc/profile file? Normally our containers are meant to run as non-root, so that they are compatible with environments like OpenShift.

myysophia commented 1 year ago

Hi, what I want to do is run a RabbitMQ load test in my Kubernetes cluster, e.g. https://rabbitmq.github.io/rabbitmq-perf-test/stable/htmlsingle/#basic-usage

Looking forward to your reply.
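For the load-test goal itself, PerfTest can also run as a separate pod pointed at the chart's service, which sidesteps the need for root inside the RabbitMQ container. A minimal sketch, assuming the `pivotalrabbitmq/perf-test` image and a service name and credentials matching this release (adjust the URI to your deployment):

```
kubectl run perf-test -it --rm --restart=Never \
  --image=pivotalrabbitmq/perf-test -- \
  --uri amqp://user:password@chot-rabbitmq.rabbitmq.svc.cluster.local:5672
```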

javsalgar commented 1 year ago

If you just want the container to run as root, then apart from setting the user to 0 you need to enable the containerSecurityContext and podSecurityContext, because otherwise the chart will use the default values, which are non-root.
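In values.yaml terms, this advice amounts to something like the following sketch (key names follow the standard Bitnami securityContext blocks; verify them against your chart version's values.yaml):

```
podSecurityContext:
  enabled: true        # the chart only renders these values when enabled is true
  fsGroup: 0
containerSecurityContext:
  enabled: true
  runAsUser: 0         # UID 0 = root
  runAsNonRoot: false  # the kubelet refuses to start a root container if this is true
```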

myysophia commented 1 year ago

I set the containerSecurityContext and podSecurityContext, but it still does not seem to have root privileges. :( (screenshot attached)

javsalgar commented 1 year ago

Could you execute `kubectl get pods -o yaml` to see if the securityContext section is set?
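A narrower variant that prints only the relevant fields, assuming the pod and namespace names from the logs above:

```
kubectl get pod chot-rabbitmq-0 --namespace rabbitmq \
  -o jsonpath='{.spec.securityContext}{"\n"}{.spec.containers[*].securityContext}{"\n"}'
```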

github-actions[bot] commented 1 year ago

This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

github-actions[bot] commented 1 year ago

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

robinpecha commented 11 months ago

I have exactly the same problem and have spent the last two days on it. I need root access in the MongoDB chart to perform a benchmark test and monitor used resources directly in the pod/container. (screenshot attached)

Output of `kubectl get pods mongodb-0 --namespace mongodb -o yaml`:

```
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2023-10-17T08:14:52Z"
  generateName: mongodb-
  labels:
    app.kubernetes.io/component: mongodb
    app.kubernetes.io/instance: mongodb
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: mongodb
    app.kubernetes.io/version: 7.0.2
    controller-revision-hash: mongodb-xxxxx
    helm.sh/chart: mongodb-14.0.10
    statefulset.kubernetes.io/pod-name: mongodb-0
  name: mongodb-0
  namespace: mongodb
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: StatefulSet
    name: mongodb
  resourceVersion: "80400004"
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/component: mongodb
              app.kubernetes.io/instance: mongodb
              app.kubernetes.io/name: mongodb
          topologyKey: kubernetes.io/hostname
        weight: 1
  containers:
  - command:
    - /scripts/setup.sh
    env:
    - name: BITNAMI_DEBUG
      value: "false"
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: metadata.namespace
    - name: MY_POD_HOST_IP
      valueFrom:
        fieldRef:
          apiVersion: v1
          fieldPath: status.hostIP
    - name: K8S_SERVICE_NAME
      value: mongodb-headless
    - name: MONGODB_INITIAL_PRIMARY_HOST
      value: mongodb-0.$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
    - name: MONGODB_REPLICA_SET_NAME
      value: rs0
    - name: MONGODB_ADVERTISED_HOSTNAME
      value: $(MY_POD_NAME).$(K8S_SERVICE_NAME).$(MY_POD_NAMESPACE).svc.cluster.local
    - name: ALLOW_EMPTY_PASSWORD
      value: "yes"
    - name: MONGODB_SYSTEM_LOG_VERBOSITY
      value: "0"
    - name: MONGODB_DISABLE_SYSTEM_LOG
      value: "no"
    - name: MONGODB_DISABLE_JAVASCRIPT
      value: "no"
    - name: MONGODB_ENABLE_JOURNAL
      value: "yes"
    - name: MONGODB_PORT_NUMBER
      value: "27017"
    - name: MONGODB_ENABLE_IPV6
      value: "no"
    - name: MONGODB_ENABLE_DIRECTORY_PER_DB
      value: "no"
    image: docker.io/bitnami/mongodb:7.0.2-debian-11-r6
    imagePullPolicy: IfNotPresent
    livenessProbe:
      exec:
        command:
        - /bitnami/scripts/ping-mongodb.sh
      failureThreshold: 6
      initialDelaySeconds: 30
      periodSeconds: 20
      successThreshold: 1
      timeoutSeconds: 10
    name: mongodb
    ports:
    - containerPort: 27017
      name: mongodb
      protocol: TCP
    readinessProbe:
      exec:
        command:
        - /bitnami/scripts/readiness-probe.sh
      failureThreshold: 6
      initialDelaySeconds: 5
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 5
    resources:
      requests:
        cpu: "12"
        memory: 96Gi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsGroup: 0
      runAsNonRoot: false
      runAsUser: 0
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /bitnami/mongodb
      name: datadir
    - mountPath: /bitnami/scripts
      name: common-scripts
    - mountPath: /scripts/setup.sh
      name: scripts
      subPath: setup.sh
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-gw66z
      readOnly: true
  - args:
    - "/bin/mongodb_exporter --collector.diagnosticdata --collector.replicasetstatus --compatible-mode --mongodb.direct-connect --mongodb.global-conn-pool --web.listen-address \":9216\" --mongodb.uri \"mongodb://localhost:27017/admin?\" \n"
    command:
    - /bin/bash
    - -ec
    image: docker.io/bitnami/mongodb-exporter:0.39.0-debian-11-r123
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: metrics
        scheme: HTTP
      initialDelaySeconds: 15
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 10
    name: metrics
    ports:
    - containerPort: 9216
      name: metrics
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /
        port: metrics
        scheme: HTTP
      initialDelaySeconds: 5
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 10
    resources: {}
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      runAsGroup: 0
      runAsNonRoot: false
      runAsUser: 0
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-gw66z
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  hostname: mongodb-0
  nodeName: xxxx.eu-west-2.compute.internal
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1001
  serviceAccount: mongodb
  serviceAccountName: mongodb
  subdomain: mongodb-headless
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: datadir
    persistentVolumeClaim:
      claimName: datadir-mongodb-0
  - configMap:
      defaultMode: 360
      name: mongodb-common-scripts
    name: common-scripts
  - configMap:
      defaultMode: 493
      name: mongodb-scripts
    name: scripts
  - name: kube-api-access-gw66z
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2023-10-17T08:14:52Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2023-10-17T08:14:52Z"
    message: 'containers with unready status: [mongodb]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2023-10-17T08:14:52Z"
    message: 'containers with unready status: [mongodb]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2023-10-17T08:14:52Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://ffcf9051f157140e76333469b18d3b0256735b83de398481c5f4df08aaf1426c
    image: docker.io/bitnami/mongodb-exporter:0.39.0-debian-11-r123
    imageID: docker.io/bitnami/mongodb-exporter@sha256:ae16c99d7673302fe80ba7784d5d5e5c446135956a656862bb913a59550c79ae
    lastState: {}
    name: metrics
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2023-10-17T08:16:12Z"
  - containerID: containerd://b19b5ec5447ca828c132aed0e44ced842477996c26b8eb857dc096e892072fa8
    image: docker.io/bitnami/mongodb:7.0.2-debian-11-r6
    imageID: docker.io/bitnami/mongodb@sha256:15f9ca8df1d7c7dc7899807408a49ef8f5088b6bda8c9dd77e3b1d59c960627f
    lastState:
      terminated:
        containerID: containerd://b19b5ec5447ca828c132aed0e44ced842477996c26b8eb857dc096e892072fa8
        exitCode: 1
        finishedAt: "2023-10-17T08:16:59Z"
        reason: Error
        startedAt: "2023-10-17T08:16:59Z"
    name: mongodb
    ready: false
    restartCount: 3
    started: false
    state:
      waiting:
        message: back-off 40s restarting failed container=mongodb pod=mongodb-0_mongodb(6fd96e03-174e-4f72-8530-07a34e663028)
        reason: CrashLoopBackOff
  hostIP: 10.0.5.130
  phase: Running
  podIP: 10.0.5.6
  podIPs:
  - ip: 10.0.5.6
  qosClass: Burstable
  startTime: "2023-10-17T08:14:52Z"
```
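Two details stand out in that output: the container-level securityContext does carry `runAsUser: 0` and `runAsNonRoot: false`, while the pod-level securityContext still shows `fsGroup: 1001`, and the mongodb container itself is exiting with code 1 into CrashLoopBackOff. The logs of the previous (crashed) container instance should show the actual failure; a possible check, using the pod name from above:

```
kubectl logs mongodb-0 --namespace mongodb -c mongodb --previous
```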

robinpecha commented 11 months ago

@myysophia how did you solve it?

robinpecha commented 11 months ago

With podSecurityContext/containerSecurityContext disabled, the pod starts with this output (screenshot attached):

```
mongodb mongodb 08:45:42.71 INFO  ==> Advertised Hostname: mongodb-0.mongodb-headless.mongodb.svc.cluster.local
mongodb mongodb 08:45:42.71 INFO  ==> Advertised Port: 27017
mongodb realpath: /bitnami/mongodb/data/db: No such file or directory
mongodb mongodb 08:45:42.71 INFO  ==> Data dir empty, checking if the replica set already exists
mongodb MongoNetworkError: connect ECONNREFUSED 10.0.5.204:27017
mongodb mongodb 08:45:43.31 INFO  ==> Pod name matches initial primary pod name, configuring node as a primary
mongodb mongodb 08:45:43.33
mongodb mongodb 08:45:43.33 Welcome to the Bitnami mongodb container
mongodb mongodb 08:45:43.33 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb mongodb 08:45:43.34 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb mongodb 08:45:43.34
mongodb mongodb 08:45:43.34 INFO  ==> ** Starting MongoDB setup **
mongodb mongodb 08:45:43.36 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb mongodb 08:45:43.41 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
mongodb mongodb 08:45:43.42 INFO  ==> Initializing MongoDB...
mongodb mongodb 08:45:43.49 INFO  ==> Deploying MongoDB from scratch...
mongodb MongoNetworkError: connect ECONNREFUSED 10.0.5.204:27017
mongodb mongodb 08:45:45.17 INFO  ==> Creating users...
mongodb mongodb 08:45:45.18 INFO  ==> Users created
mongodb mongodb 08:45:45.20 INFO  ==> Configuring MongoDB replica set...
mongodb mongodb 08:45:45.20 INFO  ==> Stopping MongoDB...
mongodb mongodb 08:45:49.32 INFO  ==> Configuring MongoDB primary node
mongodb mongodb 08:45:51.03 INFO  ==> Stopping MongoDB...
mongodb mongodb 08:45:52.06 INFO  ==> ** MongoDB setup finished! **
mongodb
mongodb mongodb 08:45:52.08 INFO  ==> ** Starting MongoDB **
...
```

POD SUCCESSFULLY STARTS

(screenshot attached)

```
mongodb mongodb 08:53:10.77 INFO  ==> Advertised Hostname: mongodb-0.mongodb-headless.mongodb.svc.cluster.local
mongodb mongodb 08:53:10.77 INFO  ==> Advertised Port: 27017
mongodb realpath: /bitnami/mongodb/data/db: No such file or directory
mongodb mongodb 08:53:10.77 INFO  ==> Data dir empty, checking if the replica set already exists
mongodb MongoNetworkError: connect ECONNREFUSED 10.0.5.226:27017
mongodb mongodb 08:53:11.37 INFO  ==> Pod name matches initial primary pod name, configuring node as a primary
mongodb mongodb 08:53:11.39
mongodb mongodb 08:53:11.39 Welcome to the Bitnami mongodb container
mongodb mongodb 08:53:11.39 Subscribe to project updates by watching https://github.com/bitnami/containers
mongodb mongodb 08:53:11.39 Submit issues and feature requests at https://github.com/bitnami/containers/issues
mongodb mongodb 08:53:11.40
mongodb mongodb 08:53:11.40 INFO  ==> ** Starting MongoDB setup **
mongodb mongodb 08:53:11.42 INFO  ==> Validating settings in MONGODB_* env vars...
mongodb mongodb 08:53:11.46 WARN  ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
metrics level=info ts=2023-10-17T08:52:51.759Z caller=tls_config.go:195 msg="TLS is disabled." http2=false
...
```

POD START FAILS
robinpecha commented 11 months ago

I'm able to successfully start the pod with the following configuration (screenshot attached), but I'm still not able to escalate privileges (screenshot attached). So please, do you have any advice on what I'm doing wrong? Thank you.
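One way to verify what the container actually runs as is to check the effective UID inside it; note also that the securityContext shown in the earlier `kubectl get pods -o yaml` output sets `allowPrivilegeEscalation: false`, which blocks setuid tools such as `sudo` even when the user settings are otherwise correct. A quick check, assuming the pod and container names from earlier in the thread:

```
kubectl exec --namespace mongodb mongodb-0 -c mongodb -- id
# uid=0(root) gid=0(root) ... would confirm the container is running as root
```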

robinpecha commented 11 months ago

I created a new ticket from the previous info, so this one is a duplicate: #20280