bitnami / charts

Bitnami Helm Charts
https://bitnami.com

[bitnami/mysql] Secondary does not use mysql-master-root-password and does not configure master-slave replication #30178

Open xujiongzi opened 1 week ago

xujiongzi commented 1 week ago

Name and Version

bitnami/mysql 11.1.19

What architecture are you using?

arm64

What steps will reproduce the bug?

helm install mysql bitnami/mysql -f ./values.yaml -n mysql --create-namespace

Are you using any custom parameters or values?

image:
  registry: docker.io
  repository: bitnami/mysql
  tag: 8.4.3-debian-12-r0
  digest: ""
  pullPolicy: IfNotPresent
  pullSecrets: []
  debug: false

architecture: replication

auth:
  rootPassword: "123456"
  createDatabase: false
  database: "my_database"
  username: ""
  password: ""
  replicationUser: replicator
  replicationPassword: "replicator"

primary:
  name: master
  command: []
  args: []
  lifecycleHooks: {}
  automountServiceAccountToken: false
  hostAliases: []
  enableMySQLX: true
  configuration: |-
    [mysqld]
    authentication_policy='{{- .Values.auth.authenticationPolicy | default "* ,," }}'
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mysql
    plugin_dir=/opt/bitnami/mysql/lib/plugin
    port={{ .Values.primary.containerPorts.mysql }}
    mysqlx={{ ternary 1 0 .Values.primary.enableMySQLX }}
    mysqlx_port={{ .Values.primary.containerPorts.mysqlx }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    datadir=/bitnami/mysql/data
    tmpdir=/opt/bitnami/mysql/tmp
    max_allowed_packet=16M
    bind-address=*
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
    log-error=/opt/bitnami/mysql/logs/mysqld.log
    character-set-server=UTF8
    slow_query_log=0
    long_query_time=10.0

    [client]
    port={{ .Values.primary.containerPorts.mysql }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    default-character-set=UTF8
    plugin_dir=/opt/bitnami/mysql/lib/plugin

    [manager]
    port={{ .Values.primary.containerPorts.mysql }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid

  containerPorts:
    mysql: 3306
    mysqlx: 33060

  nodeSelector: {"app-type": "master"}

  livenessProbe:
    enabled: true
    initialDelaySeconds: 45
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  readinessProbe:
    enabled: true
    initialDelaySeconds: 45
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  startupProbe:
    enabled: true
    initialDelaySeconds: 55
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 10
    successThreshold: 1

  persistence:
    enabled: true
    storageClass: "nfs-storage"
    accessModes:
      - ReadWriteMany
    size: 8Gi

  service:
    type: NodePort
    ports:
      mysql: 3306
      mysqlx: 33060
    nodePorts:
      mysql: "30360"
      mysqlx: "30361"

secondary:
  name: slave
  replicaCount: 1
  enableMySQLX: true
  configuration: |-
    [mysqld]
    authentication_policy='{{- .Values.auth.authenticationPolicy | default "* ,," }}'
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mysql
    plugin_dir=/opt/bitnami/mysql/lib/plugin
    port={{ .Values.secondary.containerPorts.mysql }}
    mysqlx={{ ternary 1 0 .Values.secondary.enableMySQLX }}
    mysqlx_port={{ .Values.secondary.containerPorts.mysqlx }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    datadir=/bitnami/mysql/data
    tmpdir=/opt/bitnami/mysql/tmp
    max_allowed_packet=16M
    bind-address=0.0.0.0
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
    log-error=/opt/bitnami/mysql/logs/mysqld.log
    character-set-server=UTF8
    slow_query_log=0
    long_query_time=10.0

    [client]
    port={{ .Values.secondary.containerPorts.mysql }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    default-character-set=UTF8
    plugin_dir=/opt/bitnami/mysql/lib/plugin

    [manager]
    port={{ .Values.secondary.containerPorts.mysql }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid

  containerPorts:
    mysql: 3306
    mysqlx: 33060

  livenessProbe:
    enabled: true
    initialDelaySeconds: 45
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  readinessProbe:
    enabled: true
    initialDelaySeconds: 45
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  startupProbe:
    enabled: true
    initialDelaySeconds: 55
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 15
    successThreshold: 1

  persistence:
    enabled: true
    storageClass: "nfs-storage"
    accessModes:
      - ReadWriteMany
    size: 8Gi

  service:
    type: NodePort
    ports:
      mysql: 3306
      mysqlx: 33060
    nodePorts:
      mysql: "30370"
      mysqlx: "30371"

What is the expected behavior?

  1. The primary pod's livenessProbe, readinessProbe, and startupProbe succeed.
  2. The secondary pod's livenessProbe, readinessProbe, and startupProbe succeed.
  3. The root password for the primary is 123456.
  4. The root password for the secondary is 123456.
  5. Master-slave replication is configured.

What do you see instead?

  1. The primary pod's livenessProbe, readinessProbe, and startupProbe succeed.

  2. The secondary pod's livenessProbe fails:

    [root@localhost ~]# kubectl describe pod mysql-slave-0 -n mysql 
    Name:             mysql-slave-0
    Namespace:        mysql
    Priority:         0
    Service Account:  mysql
    Node:             worker-2/192.168.239.130
    Start Time:       Sun, 03 Nov 2024 17:10:58 +0800
    Labels:           app.kubernetes.io/component=secondary
                  app.kubernetes.io/instance=mysql
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=mysql
                  app.kubernetes.io/version=8.4.3
                  apps.kubernetes.io/pod-index=0
                  controller-revision-hash=mysql-slave-5d87bc65f
                  helm.sh/chart=mysql-11.1.19
                  statefulset.kubernetes.io/pod-name=mysql-slave-0
    Annotations:      checksum/configuration: ac4bc6fc465a72c6acc7817af2de86297665a56e6f7eb4aec6ced4a81be659c1
    Status:           Running
    IP:               10.42.2.89
    IPs:
    IP:           10.42.2.89
    Controlled By:  StatefulSet/mysql-slave
    Init Containers:
    preserve-logs-symlinks:
    Container ID:    docker://ded45a133f63959aa3f112a1b189ca4e494eae9f6ec0ab105add7fc3d3642d37
    Image:           docker.io/bitnami/mysql:8.4.3-debian-12-r0
    Image ID:        docker-pullable://bitnami/mysql@sha256:a78fa42d3af20c19bb295e473f5cfa231a7f9643072e602e6e3229285465e173
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    Command:
      /bin/bash
    Args:
      -ec
      #!/bin/bash
    
      . /opt/bitnami/scripts/libfs.sh
      # We copy the logs folder because it has symlinks to stdout and stderr
      if ! is_dir_empty /opt/bitnami/mysql/logs; then
        cp -r /opt/bitnami/mysql/logs /emptydir/app-logs-dir
      fi
    
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Sun, 03 Nov 2024 17:10:58 +0800
      Finished:     Sun, 03 Nov 2024 17:10:59 +0800
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:                750m
      ephemeral-storage:  2Gi
      memory:             768Mi
    Requests:
      cpu:                500m
      ephemeral-storage:  50Mi
      memory:             512Mi
    Environment:          <none>
    Mounts:
      /emptydir from empty-dir (rw)
    Containers:
    mysql:
    Container ID:    docker://868a3115e1b0244482adfd0c18990185213b7e0fce4080f195153db8063fdec5
    Image:           docker.io/bitnami/mysql:8.4.3-debian-12-r0
    Image ID:        docker-pullable://bitnami/mysql@sha256:a78fa42d3af20c19bb295e473f5cfa231a7f9643072e602e6e3229285465e173
    Ports:           3306/TCP, 33060/TCP
    Host Ports:      0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    State:           Running
      Started:       Sun, 03 Nov 2024 17:10:59 +0800
    Ready:           False
    Restart Count:   0
    Limits:
      cpu:                750m
      ephemeral-storage:  2Gi
      memory:             768Mi
    Requests:
      cpu:                500m
      ephemeral-storage:  50Mi
      memory:             512Mi
    Liveness:             exec [/bin/bash -ec password_aux="${MYSQL_MASTER_ROOT_PASSWORD:-}"
    if [[ -f "${MYSQL_MASTER_ROOT_PASSWORD_FILE:-}" ]]; then
    password_aux=$(cat "$MYSQL_MASTER_ROOT_PASSWORD_FILE")
    fi
    mysqladmin status -uroot -p"${password_aux}"
    ] delay=45s timeout=1s period=10s #success=1 #failure=3
    Readiness:  exec [/bin/bash -ec password_aux="${MYSQL_MASTER_ROOT_PASSWORD:-}"
    if [[ -f "${MYSQL_MASTER_ROOT_PASSWORD_FILE:-}" ]]; then
    password_aux=$(cat "$MYSQL_MASTER_ROOT_PASSWORD_FILE")
    fi
    mysqladmin ping -uroot -p"${password_aux}" | grep "mysqld is alive"
    ] delay=45s timeout=1s period=10s #success=1 #failure=3
    Startup:  exec [/bin/bash -ec password_aux="${MYSQL_MASTER_ROOT_PASSWORD:-}"
    if [[ -f "${MYSQL_MASTER_ROOT_PASSWORD_FILE:-}" ]]; then
    password_aux=$(cat "$MYSQL_MASTER_ROOT_PASSWORD_FILE")
    fi
    mysqladmin ping -uroot -p"${password_aux}" | grep "mysqld is alive"
    ] delay=55s timeout=1s period=10s #success=1 #failure=15
    Environment:
      BITNAMI_DEBUG:               false
      MYSQL_REPLICATION_MODE:      slave
      MYSQL_MASTER_HOST:           mysql-master
      MYSQL_MASTER_PORT_NUMBER:    3306
      MYSQL_MASTER_ROOT_USER:      root
      MYSQL_PORT:                  3306
      MYSQL_REPLICATION_USER:      replicator
      MYSQL_MASTER_ROOT_PASSWORD:  <set to the key 'mysql-root-password' in secret 'mysql'>         Optional: false
      MYSQL_REPLICATION_PASSWORD:  <set to the key 'mysql-replication-password' in secret 'mysql'>  Optional: false
    Mounts:
      /bitnami/mysql from data (rw)
      /opt/bitnami/mysql/conf from empty-dir (rw,path="app-conf-dir")
      /opt/bitnami/mysql/conf/my.cnf from config (rw,path="my.cnf")
      /opt/bitnami/mysql/logs from empty-dir (rw,path="app-logs-dir")
      /opt/bitnami/mysql/tmp from empty-dir (rw,path="app-tmp-dir")
      /tmp from empty-dir (rw,path="tmp-dir")
    Conditions:
    Type                        Status
    PodReadyToStartContainers   True 
    Initialized                 True 
    Ready                       False 
    ContainersReady             False 
    PodScheduled                True 
    Volumes:
    data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-mysql-slave-0
    ReadOnly:   false
    config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      mysql-slave
    Optional:  false
    empty-dir:
    Type:        EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:      
    SizeLimit:   <unset>
    QoS Class:       Burstable
    Node-Selectors:  <none>
    Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
    Events:
    Type     Reason            Age               From               Message
    ----     ------            ----              ----               -------
    Warning  FailedScheduling  2m15s             default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.
    Normal   Scheduled         2m14s             default-scheduler  Successfully assigned mysql/mysql-slave-0 to worker-2
    Normal   Pulled            2m14s             kubelet            Container image "docker.io/bitnami/mysql:8.4.3-debian-12-r0" already present on machine
    Normal   Created           2m14s             kubelet            Created container preserve-logs-symlinks
    Normal   Started           2m13s             kubelet            Started container preserve-logs-symlinks
    Normal   Pulled            2m13s             kubelet            Container image "docker.io/bitnami/mysql:8.4.3-debian-12-r0" already present on machine
    Normal   Created           2m13s             kubelet            Created container mysql
    Normal   Started           2m13s             kubelet            Started container mysql
    Warning  Unhealthy         4s (x8 over 74s)  kubelet            Startup probe failed: mysqladmin: [Warning] Using a password on the command line interface can be insecure.
    mysqladmin: connect to server at 'localhost' failed
    error: 'Access denied for user 'root'@'localhost' (using password: YES)'


  3. When I change the probes in templates/secondary/statefulset.yaml to set `password_aux=""`, the secondary pod's livenessProbe, readinessProbe, and startupProbe succeed and the secondary pod starts successfully. (I don't know why it did not use the primary's root password.)

    
          livenessProbe: {{- include "common.tplvalues.render" (dict "value" (omit .Values.secondary.livenessProbe "enabled") "context" $) | nindent 12 }}
            exec:
              command:
                - /bin/bash
                - -ec
                - |
                  password_aux=""
                  if [[ -f "${MYSQL_MASTER_ROOT_PASSWORD_FILE:-}" ]]; then
                      password_aux=$(cat "$MYSQL_MASTER_ROOT_PASSWORD_FILE")
                  fi
                  mysqladmin status -uroot -p"${password_aux}"
    
          readinessProbe: {{- include "common.tplvalues.render" (dict "value" (omit .Values.secondary.readinessProbe "enabled") "context" $) | nindent 12 }}
            exec:
              command:
                - /bin/bash
                - -ec
                - |
                  password_aux=""
                  if [[ -f "${MYSQL_MASTER_ROOT_PASSWORD_FILE:-}" ]]; then
                      password_aux=$(cat "$MYSQL_MASTER_ROOT_PASSWORD_FILE")
                  fi
                  mysqladmin ping -uroot -p"${password_aux}" | grep "mysqld is alive"
    
          startupProbe: {{- include "common.tplvalues.render" (dict "value" (omit .Values.secondary.startupProbe "enabled") "context" $) | nindent 12 }}
            exec:
              command:
                - /bin/bash
                - -ec
                - |
                  password_aux=""
                  if [[ -f "${MYSQL_MASTER_ROOT_PASSWORD_FILE:-}" ]]; then
                      password_aux=$(cat "$MYSQL_MASTER_ROOT_PASSWORD_FILE")
                  fi
                  mysqladmin ping -uroot -p"${password_aux}" | grep "mysqld is alive"
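The probe's password resolution can be reproduced in a plain shell to see what value ends up in `password_aux` (a standalone sketch; the env value below is illustrative, in the pod it comes from the release secret):

```shell
#!/bin/sh
# Mimic the chart probe's logic: take the env var if set (empty otherwise),
# and let a password file override it when one is mounted.
MYSQL_MASTER_ROOT_PASSWORD="123456"   # illustrative value

password_aux="${MYSQL_MASTER_ROOT_PASSWORD:-}"
if [ -f "${MYSQL_MASTER_ROOT_PASSWORD_FILE:-}" ]; then
  password_aux=$(cat "$MYSQL_MASTER_ROOT_PASSWORD_FILE")
fi

# The probe then runs: mysqladmin status -uroot -p"${password_aux}"
echo "password_aux=${password_aux}"
```

If the env var is unset and no file is mounted, `password_aux` stays empty and `mysqladmin` effectively authenticates with a blank password, which matches the behavior observed after the manual edit above.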

  4. Then I exec'd into the secondary pod and logged in to MySQL with `mysql -u root -p` (using an empty password). However, master-slave replication was not activated, and the root user is missing as well:

    [root@server mysql]# kubectl exec -it mysql-slave-0 -n mysql /bin/bash
    kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
    Defaulted container "mysql" out of: mysql, preserve-logs-symlinks (init)
    I have no name!@mysql-slave-0:/$ mysql -u root -p
    Enter password:
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 94
    Server version: 8.4.3 Source distribution

Copyright (c) 2000, 2024, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    mysql> use mysql;
    Reading table information for completion of table and column names
    You can turn off this feature to get a quicker startup with -A

    Database changed
    mysql> select host,user from user;
    +-----------+------------------+
    | host      | user             |
    +-----------+------------------+
    | localhost | mysql.infoschema |
    | localhost | mysql.session    |
    | localhost | mysql.sys        |
    +-----------+------------------+
    3 rows in set (0.00 sec)

    mysql> show replica status\G;
    Empty set (0.01 sec)

    ERROR: No query specified


However, strangely enough, even though there was no root user in the secondary pod, I could still log in to MySQL inside the pod using an empty root password.

Also, the secondary node cannot be reached through its NodePort, while the primary can. Even if I set the secondary's bind-address to 0.0.0.0, I cannot connect to it (regardless of whether I set bind-address in values.yaml, the secondary's bind-address is always 127.0.0.1 after the first startup).
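One way to narrow this down is to check what bind-address the rendered configuration actually contains, rather than what the server appears to bind to. A minimal local sketch of the check (the my.cnf content below is a stand-in for the real ConfigMap data):

```shell
#!/bin/sh
# Stand-in for the secondary's rendered my.cnf (the real one lives in the
# ConfigMap "mysql-slave" and is mounted at /opt/bitnami/mysql/conf/my.cnf).
cat > /tmp/my.cnf <<'EOF'
[mysqld]
bind-address=0.0.0.0
port=3306
EOF

# Extract the effective bind-address line:
grep '^bind-address=' /tmp/my.cnf
```

Against the live cluster the equivalent check would be `kubectl get configmap mysql-slave -n mysql -o yaml | grep bind-address`, and inside the pod `SHOW VARIABLES LIKE 'bind_address';` shows what the running server actually bound to. If the ConfigMap says 0.0.0.0 but the server reports 127.0.0.1, something else (for example another config file or stale data) is overriding it.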

Additional information

    [root@server nfs-provisioner]# cat /etc/centos-release
    CentOS Stream release 9

    [root@server mysql]# helm version
    version.BuildInfo{Version:"v3.16.2", GitCommit:"13654a52f7c70a143b1dd51416d633e1071faffb", GitTreeState:"clean", GoVersion:"go1.22.7"}

    [root@server mysql]# k3s --version
    k3s version v1.30.5+k3s1 (9b586704)
    go version go1.22.6

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nfs-client-provisioner
      labels:
        app: nfs-client-provisioner
      namespace: kube-system
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nfs-client-provisioner
      strategy:
        type: Recreate
      template:
        metadata:
          labels:
            app: nfs-client-provisioner
        spec:
          serviceAccountName: nfs-provisioner
          nodeSelector:
            node-type: server
          containers:


    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: nfs-storage
    provisioner: nfs-provisioner
    parameters:
      archiveOnDelete: "true"
    reclaimPolicy: Delete
    allowVolumeExpansion: true
    volumeBindingMode: Immediate
    mountOptions:

dgomezleon commented 1 week ago

Hi @xujiongzi ,

I was not able to reproduce the issue using the values below (these are the values you provided, just removing persistence and nodeSelector):

image:
  registry: docker.io
  repository: bitnami/mysql
  tag: 8.4.3-debian-12-r0
  digest: ""
  pullPolicy: IfNotPresent
  pullSecrets: []
  debug: false

architecture: replication

auth:
  rootPassword: "123456"
  createDatabase: false
  database: "my_database"
  username: ""
  password: ""
  replicationUser: replicator
  replicationPassword: "replicator"

primary:
  name: master
  command: []
  args: []
  lifecycleHooks: {}
  automountServiceAccountToken: false
  hostAliases: []
  enableMySQLX: true
  configuration: |-
    [mysqld]
    authentication_policy='{{- .Values.auth.authenticationPolicy | default "* ,," }}'
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mysql
    plugin_dir=/opt/bitnami/mysql/lib/plugin
    port={{ .Values.primary.containerPorts.mysql }}
    mysqlx={{ ternary 1 0 .Values.primary.enableMySQLX }}
    mysqlx_port={{ .Values.primary.containerPorts.mysqlx }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    datadir=/bitnami/mysql/data
    tmpdir=/opt/bitnami/mysql/tmp
    max_allowed_packet=16M
    bind-address=*
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
    log-error=/opt/bitnami/mysql/logs/mysqld.log
    character-set-server=UTF8
    slow_query_log=0
    long_query_time=10.0

    [client]
    port={{ .Values.primary.containerPorts.mysql }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    default-character-set=UTF8
    plugin_dir=/opt/bitnami/mysql/lib/plugin

    [manager]
    port={{ .Values.primary.containerPorts.mysql }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid

  containerPorts:
    mysql: 3306
    mysqlx: 33060

  livenessProbe:
    enabled: true
    initialDelaySeconds: 45
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  readinessProbe:
    enabled: true
    initialDelaySeconds: 45
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  startupProbe:
    enabled: true
    initialDelaySeconds: 55
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 10
    successThreshold: 1

  service:
    type: NodePort
    ports:
      mysql: 3306
      mysqlx: 33060
    nodePorts:
      mysql: "30360"
      mysqlx: "30361"

secondary:
  name: slave
  replicaCount: 1
  enableMySQLX: true
  configuration: |-
    [mysqld]
    authentication_policy='{{- .Values.auth.authenticationPolicy | default "* ,," }}'
    skip-name-resolve
    explicit_defaults_for_timestamp
    basedir=/opt/bitnami/mysql
    plugin_dir=/opt/bitnami/mysql/lib/plugin
    port={{ .Values.secondary.containerPorts.mysql }}
    mysqlx={{ ternary 1 0 .Values.secondary.enableMySQLX }}
    mysqlx_port={{ .Values.secondary.containerPorts.mysqlx }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    datadir=/bitnami/mysql/data
    tmpdir=/opt/bitnami/mysql/tmp
    max_allowed_packet=16M
    bind-address=0.0.0.0
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid
    log-error=/opt/bitnami/mysql/logs/mysqld.log
    character-set-server=UTF8
    slow_query_log=0
    long_query_time=10.0

    [client]
    port={{ .Values.secondary.containerPorts.mysql }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    default-character-set=UTF8
    plugin_dir=/opt/bitnami/mysql/lib/plugin

    [manager]
    port={{ .Values.secondary.containerPorts.mysql }}
    socket=/opt/bitnami/mysql/tmp/mysql.sock
    pid-file=/opt/bitnami/mysql/tmp/mysqld.pid

  containerPorts:
    mysql: 3306
    mysqlx: 33060

  livenessProbe:
    enabled: true
    initialDelaySeconds: 45
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  readinessProbe:
    enabled: true
    initialDelaySeconds: 45
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 3
    successThreshold: 1

  startupProbe:
    enabled: true
    initialDelaySeconds: 55
    periodSeconds: 10
    timeoutSeconds: 1
    failureThreshold: 15
    successThreshold: 1

  service:
    type: NodePort
    ports:
      mysql: 3306
      mysqlx: 33060
    nodePorts:
      mysql: "30370"
      mysqlx: "30371"

The secondary pod started successfully, and I could access it with mysql -u root -p123456.

Could you double-check that you do not have leftover PVCs from a previous installation?
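
Besides leftover PVCs, it is worth confirming that the release secret really holds the expected root password, since the secondary's probes read `MYSQL_MASTER_ROOT_PASSWORD` from the `mysql-root-password` key of the `mysql` secret (as shown in the pod description above). Secret data is base64-encoded; a quick local sketch of the comparison:

```shell
#!/bin/sh
# Kubernetes stores secret values base64-encoded; "123456" should appear as:
expected=$(printf '123456' | base64)
echo "$expected"

# Against the live cluster (not run here), compare with:
#   kubectl get secret mysql -n mysql \
#     -o jsonpath='{.data.mysql-root-password}' | base64 -d
```

If the decoded value differs from the rootPassword in values.yaml, the secret was likely carried over from a previous installation.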

xujiongzi commented 1 week ago

> Could you double-check that you do not have leftover PVCs from a previous installation?

No. Every time I try again, I run `kubectl delete ns mysql` to ensure the PVC and PV are deleted, and I also delete the persisted files on the NFS server. Perhaps you should try using the same persistence setup as mine? Before this issue, I had run into other problems caused by NFS, such as primary and secondary startup errors like "I/O cannot be written: xxx bytes need to be written but only 0 bytes have been written; please check whether the disk is full". That problem was only resolved after I replaced the NFS provisioner image.

dgomezleon commented 1 week ago

Hi @xujiongzi ,

It is true that I remember some permission issues with NFS in the past. However, I have tried the following without any issues:

  1. Launch a cluster (in this case GKE).
  2. Install NFS provider:
    $ helm install nfs-server-provisioner stable/nfs-server-provisioner --set persistence.enabled=true,persistence.size=10Gi
  3. Launch it with my previous values adding
...
  persistence:
    enabled: true
    storageClass: "nfs"
    accessModes:
      - ReadWriteMany
    size: 8Gi
...
xujiongzi commented 1 week ago

Hi @dgomezleon, I don't know why I'm the only one who can't succeed. I see that you installed stable/nfs-server-provisioner as an NFS server, while my NFS server was set up with `yum install nfs-utils rpcbind` on each node, and I then use chainguard/nfs-subdir-external-provisioner as my NFS client (that is how I understand my installation; I don't know if it's correct). Do you have any other troubleshooting suggestions? At the moment I can't think of how to troubleshoot further, even though I have tried different chart versions.

dgomezleon commented 1 week ago

Hi @xujiongzi

The issue may not be directly related to the Bitnami Helm chart, but rather to how the application is being configured in your specific environment, or tied to a particular scenario that is not easy to reproduce on our side.

With that said, we'll keep this ticket open until the stale bot automatically closes it, in case someone from the community contributes valuable insights.