Closed dommgifer closed 4 years ago
Hi, I am sorry but I am not able to reproduce your issue. This is what I have done in order to try to reproduce your scenario:
I copied your values.yaml into a file with the same name in my current directory and installed the chart with helm install mariadb-galera -f values.yaml bitnami/mariadb-galera.
I then checked the logs of mariadb-galera-0 and I didn't find anything weird. After that, I accessed this pod and checked the connection with the database:
▶ kubectl get pods
NAME READY STATUS RESTARTS AGE
mariadb-galera-0 1/1 Running 0 3m46s
mariadb-galera-1 1/1 Running 0 2m37s
mariadb-galera-2 1/1 Running 0 95s
▶ kubectl exec -it mariadb-galera-0 /bin/bash
root@mariadb-galera-0:/# mysql -u root -p$MARIADB_ROOT_PASSWORD
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 46
Server version: 10.4.11-MariaDB-log Source distribution
Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| frontend           |
| information_schema |
| mysql              |
| performance_schema |
| test               |
+--------------------+
5 rows in set (0.030 sec)
MariaDB [(none)]>
I did the installation from scratch. Please take into account that `helm delete` doesn't remove the PVCs, so if you did a previous installation in the past using another password (or the randomly generated one), the old PVC is mounted into the new deployment and there is an inconsistency between the new and old password. If that is the case, please try again after removing the PVCs. Example of this:
▶ helm delete mariadb-galera
release "mariadb-galera" uninstalled
▶ kubectl get pods
No resources found.
▶ kubectl get pvc
NAME                    STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-mariadb-galera-0   Bound    pvc-4343205f-2b9f-11ea-b733-42010a9600a6   8Gi        RWO            standard       10m
data-mariadb-galera-1   Bound    pvc-6c79fd69-2b9f-11ea-b733-42010a9600a6   8Gi        RWO            standard       9m22s
data-mariadb-galera-2   Bound    pvc-91b3502f-2b9f-11ea-b733-42010a9600a6   8Gi        RWO            standard       8m19s
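As a sketch of that cleanup step (the data-<release>-<ordinal> naming below matches the PVC names in the output above, but verify against your own kubectl get pvc output before deleting anything):

```shell
# Select the PVCs that belong to a given release from
# `kubectl get pvc -o name` output; `helm delete` leaves them behind.
pvcs_for_release() {
  grep "^persistentvolumeclaim/data-$1-"
}

# Against a live cluster (not executed here):
#   kubectl get pvc -o name | pvcs_for_release mariadb-galera | xargs -r kubectl delete
```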
@dommgifer I also ran into the same issue, so I ran helm install in a couple of different permutations. My values.yaml is taken directly from https://github.com/bitnami/charts/blob/master/bitnami/mariadb-galera/values.yaml with no changes.
command | result
helm install --name mariadb-galera-fv -f values.yaml bitnami/mariadb-galera --version 0.6.1 | worked
helm install --name mariadb-galera-f -f values.yaml bitnami/mariadb-galera | worked
helm install --name mariadb-galera-v bitnami/mariadb-galera --version 0.6.1 | worked
helm install --name mariadb-galera bitnami/mariadb-galera | worked
BUT when I install via this command:
export RELEASE_NAME="mariadb-galera-cluster"
helm install --name ${RELEASE_NAME} -f values.yaml bitnami/mariadb-galera --version 0.6.1
...the install works as expected.
However, if I set RELEASE_NAME to mariadb-galera, the install fails.
So I re-ran
helm install --name mariadb-galera-fv -f values.yaml bitnami/mariadb-galera --version 0.6.1
... and it failed as well. I got a wild idea: what about the PVCs? So I deleted all of them...
kubectl get pvc | awk '$1 {print $1}' | while read vol; do kubectl delete pvc/${vol}; done
AND it worked. So I deleted the helm deployment and re-launched it. It failed again.
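One caveat with that one-liner: `kubectl get pvc` prints a NAME header row, which the awk filter passes through to `kubectl delete`. A minimal variant that skips the header (a sketch, not the exact command from the thread):

```shell
# NR>1 drops kubectl's header line so "NAME" is never fed to
# `kubectl delete`; otherwise identical to the pipeline above.
list_pvc_names() {
  awk 'NR>1 && $1 {print $1}'
}

# Against a live cluster (not executed here):
#   kubectl get pvc | list_pvc_names | while read -r vol; do kubectl delete "pvc/${vol}"; done
```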
I always delete the PVCs before reinstalling mariadb-galera, but it still didn't work.
So I tried installing mariadb-galera with:
persistence:
enabled: false
And it worked.
@carrodher @davidjeddy What kind of back-end storage do you have in your environment?
My back-end storage was NFS.
It is weird. I mean, the source of the issue is that the old password is stored somewhere and the new deployment is trying to use the old password instead of the new one, but this issue should disappear once the PVCs are deleted.
In my case, I have the cluster in Google using the disk provided by them, so it is not NFS for sure.
The issue is that the new password is not mounted for postgres, because postgres has the PVC which already contains a config directory; therefore Bitnami doesn't reload the config on helm upgrade if persistence is enabled.
To solve this you either persist the secret or you clear the postgres config directory so that a new secret can be mounted. You could, at the very beginning, set your postgresqlPassword so it never changes across upgrades, as well as disable persistence.
Deleting the PVCs every time is impractical, as persistence is a feature of Helm. It also goes against helm upgrade requirements, since all you're doing is updating the charts.
Edit: it's actually a known thing and is in the comments of the values.yaml
it would be great to make the user aware of these issues in the README
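Concretely, for this mariadb-galera chart the analogous keys would be rootUser.password and rootUser.forcePassword (a values.yaml sketch; the key names follow the chart's values and the password shown is a placeholder, not a recommendation):

```yaml
# values.yaml fragment: pin the root password explicitly so that
# `helm upgrade` keeps using the same credential instead of a freshly
# generated random one that won't match the data stored in the PV.
rootUser:
  password: "change-me"   # placeholder: set your own secret value
  forcePassword: true     # fail fast if the password is not provided
```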
@dommgifer I am using minikube with k8s 1.13 on an Ubuntu 18.04 host. The storage class is the standard storage.
@ekhaydarov Excellent insight, thank you. That helps to explain the situation very nicely.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
/reopen
I have found a way to work around this. Basically you brute-force it by setting the password yourself. You will need to connect to the TTY of the main MariaDB pod and type the following commands:
$ mysql -u root <enter>
If done right you will be greeted with the MariaDB welcome message. You can hop around, use SHOW DATABASES and so on.
This indicates your password actually isn't being set by the initial launch script on Bitnami's side. It's clearly an oversight to be fixed; I'll see what to do later.
Then, as we have obtained root privileges, we can alter the privilege information as we would in a textbook. Execute the following SQL commands:
ALTER USER 'root'@'localhost' IDENTIFIED BY '${rootUser.password}';
ALTER USER 'root'@'127.0.0.1' IDENTIFIED BY '${rootUser.password}';
Replace ${rootUser.password} with the actual password you set/generated from the Helm chart. Everything should go back to normal from then on. It's baffling that Bitnami doesn't do a proper check whenever they push a change, given that this kind of low-level error can still slip through. I see that they commit frequently, and so do their Galera charts.
But come to think of it, I think MariaDB prefers us NOT to use the native TCP socket for local administration, and actually prefers using UDS instead. So I think this is why I can connect to the server successfully without any password set. See https://superuser.com/a/1103735 for more. SUGGESTION 1: Disable/do not use the UDS auth plugin from the upstream Bitnami MariaDB Docker image. This could be done here. SUGGESTION 2: Add the two lines I suggested in the script of the permalink also mentioned above.
In the end the issue is that, with the default options, the random password is stored in the PV; then, when you uninstall the deployment with helm delete, the PV/PVCs are not deleted. So the second installation generates a new random password, different from the first one that is still present in the PV, and there is a conflict between the new password and the previous one.
This is not a specific issue related to the MariaDB chart; we are thinking of a solution/clarification for this topic.
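A related workaround, sketched below: instead of deleting the PVCs, recover the random password the first release generated and reuse it on the next install. The secret name (mariadb-galera) and key (mariadb-root-password) follow the chart's usual conventions but are assumptions here; check kubectl describe secret for your release.

```shell
# Kubernetes stores secret values base64-encoded; this decodes the
# value extracted with a jsonpath query.
decode_secret_value() {
  base64 -d
}

# Against a live cluster (not executed here):
#   kubectl get secret mariadb-galera \
#     -o jsonpath='{.data.mariadb-root-password}' | decode_secret_value
# Then reuse it on reinstall, e.g.:
#   helm install ... --set rootUser.password=<that value>
```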
Hi,
I'm having the same issue. It looks like an installation of mariadb in a different namespace is somehow interfering. Current installation: namespace stg, name stg-mariadb-galera.
New installation: namespace dev, name dev-mariadb-galera.
For both I'm using storageClass: "rook-ceph-block".
The new installation failed when I used the name dev-mariadb-galera. If I use a different name like dev-xxx it works; if I turn off persistence it works; if I set the root force-password option to false it works. My guess would be that it's looking for something in the PV, or somewhere else, that is related to the name mariadb-galera.
Note that I didn't have any of these issues with my first installation in namespace stg.
I really like keeping my naming convention, so any help would be much appreciated.
Looks like adding this extraEnvVars entry from here fixed it:
Hi @mn0o7, so adding that variable solved the issue? Isn't related to the PVs, right?
Hi @andresbono, my issue is solved but I can't really say what it is related to.
I can't see the relation between the MARIADB_INIT_SLEEP_TIME
env variable and the issue, so maybe what made the difference was the restart or re-applying the values. In any case, we're glad that you were able to fix the issue.
mariadb 15:28:16.23
mariadb 15:28:16.23 Welcome to the Bitnami mariadb-galera container
mariadb 15:28:16.23 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-mariadb-galera
mariadb 15:28:16.24 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-mariadb-galera/issues
mariadb 15:28:16.24
mariadb 15:28:16.24 INFO > ** Starting MariaDB setup **
mariadb 15:28:16.26 INFO > Validating settings in MYSQL_*/MARIADB_* env vars
mariadb 15:28:16.34 INFO > Initializing mariadb database
mariadb 15:28:16.36 WARN > The mariadb configuration file '/opt/bitnami/mariadb/conf/my.cnf' is not writable or does not exist. Configurations based on environment variables will not be applied for this file.
mariadb 15:28:16.36 INFO > Persisted data detected. Restoring
mariadb 15:28:16.40 INFO > ** MariaDB setup finished! **
mariadb 15:28:16.46 INFO > ** Starting MariaDB **
mariadb 15:28:16.46 INFO > Setting previous boot
2020-08-04 15:28:16 0 [Note] /opt/bitnami/mariadb/sbin/mysqld (mysqld 10.4.13-MariaDB-log) starting as process 1 ...
2020-08-04 15:28:16 0 [Note] WSREP: Loading provider /opt/bitnami/mariadb/lib/libgalera_smm.so initial position: 00000000-0000-0000-0000-000000000000:-1
wsrep loader: [INFO] wsrep_load(): loading provider library '/opt/bitnami/mariadb/lib/libgalera_smm.so'
wsrep loader: [INFO] wsrep_load(): Galera 4.5(r0) by Codership Oy <info@codership.com> loaded successfully.
2020-08-04 15:28:16 0 [Note] WSREP: CRC-32C: using hardware acceleration.
2020-08-04 15:28:16 0 [Note] WSREP: Found saved state: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1731, safe_to_bootstrap: 0
2020-08-04 15:28:16 0 [Note] WSREP: GCache DEBUG: opened preamble:
Version: 2
UUID: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b
Seqno: 1402 - 1731
Offset: 1280
Synced: 1
2020-08-04 15:28:16 0 [Note] WSREP: Recovering GCache ring buffer: version: 2, UUID: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b, offset: 1280
2020-08-04 15:28:16 0 [Note] WSREP: GCache::RingBuffer initial scan... 0.0% ( 0/134217752 bytes) complete.
2020-08-04 15:28:16 0 [Note] WSREP: GCache::RingBuffer initial scan...100.0% (134217752/134217752 bytes) complete.
2020-08-04 15:28:16 0 [Note] WSREP: Recovering GCache ring buffer: found gapless sequence 1402-1731
2020-08-04 15:28:16 0 [Note] WSREP: GCache::RingBuffer unused buffers scan... 0.0% ( 0/134202624 bytes) complete.
2020-08-04 15:28:16 0 [Note] WSREP: GCache::RingBuffer unused buffers scan...100.0% (134202624/134202624 bytes) complete.
2020-08-04 15:28:16 0 [Note] WSREP: GCache DEBUG: RingBuffer::recover(): found 37/367 locked buffers
2020-08-04 15:28:16 0 [Note] WSREP: GCache DEBUG: RingBuffer::recover(): free space: 23912/134217728
2020-08-04 15:28:16 0 [Note] WSREP: Passing config to GCS: base_dir /bitnami/mariadb/data/; base_host 100.124.161.47; base_port 4567; cert.log_conflicts no; cert.optimistic_pa yes; debug no; evs.auto_evict 0; evs.delay_margin PT1S; evs.delayed_keep_period PT30S; evs.inactive_check_period PT0.5S; evs.inactive_timeout PT15S; evs.join_retrans_period PT1S; evs.max_install_timeouts 3; evs.send_window 4; evs.stats_report_period PT1M; evs.suspect_timeout PT5S; evs.user_send_window 2; evs.view_forget_timeout PT24H; gcache.dir /bitnami/mariadb/data/; gcache.keep_pages_size 0; gcache.mem_size 0; gcache.name galera.cache; gcache.page_size 128M; gcache.recover yes; gcache.size 128M; gcomm.thread_prio ; gcs.fc_debug 0; gcs.fc_factor 1.0; gcs.fc_limit 16; gcs.fc_master_slave no; gcs.max_packet_size 64500; gcs.max_throttle 0.25; gcs.recv_q_hard_limit 9223372036854775807; gcs.recv_q_soft_limit 0.25; gcs.sync_donor no; gmcast.segment 0; gmcast.version 0; pc.announce_timeout P...
2020-08-04 15:28:16 0 [Note] WSREP: Service thread queue flushed.
2020-08-04 15:28:16 0 [Note] WSREP: ####### Assign initial position for certification: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1731, protocol version: -1
2020-08-04 15:28:16 0 [Note] WSREP: Start replication
2020-08-04 15:28:16 0 [Note] WSREP: Connecting with bootstrap option: 0
2020-08-04 15:28:16 0 [Note] WSREP: Setting GCS initial position to 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1731
2020-08-04 15:28:16 0 [Note] WSREP: protonet asio version 0
2020-08-04 15:28:16 0 [Note] WSREP: Using CRC-32C for message checksums.
2020-08-04 15:28:16 0 [Note] WSREP: backend: asio
2020-08-04 15:28:16 0 [Note] WSREP: gcomm thread scheduling priority set to other:0
2020-08-04 15:28:16 0 [Warning] WSREP: access file(/bitnami/mariadb/data//gvwstate.dat) failed(No such file or directory)
2020-08-04 15:28:16 0 [Note] WSREP: restore pc from disk failed
2020-08-04 15:28:16 0 [Note] WSREP: GMCast version 0
2020-08-04 15:28:16 0 [Note] WSREP: (1e4b81b8-bbba, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
2020-08-04 15:28:16 0 [Note] WSREP: (1e4b81b8-bbba, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
2020-08-04 15:28:16 0 [Note] WSREP: EVS version 1
2020-08-04 15:28:16 0 [Note] WSREP: gcomm: connecting to group 'galera', peer 'stg-mariadb-galera-headless.stg.svc.cluster.local:'
2020-08-04 15:28:16 0 [Note] WSREP: (1e4b81b8-bbba, 'tcp://0.0.0.0:4567') connection established to ab31edb0-ab39 tcp://100.115.157.230:4567
2020-08-04 15:28:16 0 [Note] WSREP: (1e4b81b8-bbba, 'tcp://0.0.0.0:4567') turning message relay requesting on, nonlive peers:
2020-08-04 15:28:17 0 [Note] WSREP: EVS version upgrade 0 -> 1
2020-08-04 15:28:17 0 [Note] WSREP: declaring ab31edb0-ab39 at tcp://100.115.157.230:4567 stable
2020-08-04 15:28:17 0 [Note] WSREP: PC protocol upgrade 0 -> 1
2020-08-04 15:28:17 0 [Note] WSREP: Node ab31edb0-ab39 state prim
2020-08-04 15:28:17 0 [Note] WSREP: view(view_id(PRIM,1e4b81b8-bbba,4) memb {
1e4b81b8-bbba,0
ab31edb0-ab39,0
} joined {
} left {
} partitioned {
})
2020-08-04 15:28:17 0 [Note] WSREP: save pc into disk
2020-08-04 15:28:17 0 [Note] WSREP: gcomm: connected
2020-08-04 15:28:17 0 [Note] WSREP: Changing maximum packet size to 64500, resulting msg size: 32636
2020-08-04 15:28:17 0 [Note] WSREP: Shifting CLOSED -> OPEN (TO: 0)
2020-08-04 15:28:17 0 [Note] WSREP: Opened channel 'galera'
2020-08-04 15:28:17 0 [Note] WSREP: New COMPONENT: primary yes, bootstrap no, my_idx 0, memb_num 2
2020-08-04 15:28:17 0 [Note] WSREP: STATE_EXCHANGE: sent state UUID: 1ee7e32f-d667-11ea-9e2f-36ab219db152
2020-08-04 15:28:17 1 [Note] WSREP: Starting rollbacker thread 1
2020-08-04 15:28:17 2 [Note] WSREP: Starting applier thread 2
2020-08-04 15:28:17 0 [Note] WSREP: STATE EXCHANGE: sent state msg: 1ee7e32f-d667-11ea-9e2f-36ab219db152
2020-08-04 15:28:17 0 [Note] WSREP: STATE EXCHANGE: got state msg: 1ee7e32f-d667-11ea-9e2f-36ab219db152 from 0 (stg-mariadb-galera-1)
2020-08-04 15:28:17 0 [Note] WSREP: STATE EXCHANGE: got state msg: 1ee7e32f-d667-11ea-9e2f-36ab219db152 from 1 (stg-mariadb-galera-0)
2020-08-04 15:28:17 0 [Note] WSREP: Quorum results:
version 6,
component PRIMARY,
conf_id 3,
members 1/2 (joined/total),
act_id 1732,
last_appl. 1729,
protocols 2/10/4 (gcs/repl/appl),
vote policy 0,
group UUID 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b
2020-08-04 15:28:17 0 [Note] WSREP: Flow-control interval: [23, 23]
2020-08-04 15:28:17 0 [Note] WSREP: Shifting OPEN -> PRIMARY (TO: 1733)
2020-08-04 15:28:17 2 [Note] WSREP: ####### processing CC 1733, local, ordered
2020-08-04 15:28:17 2 [Note] WSREP: Process first view: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b my uuid: 1e4b81b8-d667-11ea-bbba-a6360c66a8a4
2020-08-04 15:28:17 2 [Note] WSREP: Server stg-mariadb-galera-1 connected to cluster at position 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1733 with ID 1e4b81b8-d667-11ea-bbba-a6360c66a8a4
2020-08-04 15:28:17 2 [Note] WSREP: Server status change disconnected -> connected
2020-08-04 15:28:17 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2020-08-04 15:28:17 2 [Note] WSREP: ####### My UUID: 1e4b81b8-d667-11ea-bbba-a6360c66a8a4
2020-08-04 15:28:17 2 [Note] WSREP: Cert index reset to 00000000-0000-0000-0000-000000000000:-1 (proto: 10), state transfer needed: yes
2020-08-04 15:28:17 0 [Note] WSREP: Service thread queue flushed.
2020-08-04 15:28:17 2 [Note] WSREP: ####### Assign initial position for certification: 00000000-0000-0000-0000-000000000000:-1, protocol version: -1
2020-08-04 15:28:17 2 [Note] WSREP: State transfer required:
Group state: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1733
Local state: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1731
2020-08-04 15:28:17 2 [Note] WSREP: Server status change connected -> joiner
2020-08-04 15:28:17 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2020-08-04 15:28:17 0 [Note] WSREP: Joiner monitor thread started to monitor
2020-08-04 15:28:17 0 [Note] WSREP: Running: 'wsrep_sst_mariabackup --role 'joiner' --address '100.124.161.47' --datadir '/bitnami/mariadb/data/' --defaults-file '/opt/bitnami/mariadb/conf/my.cnf' --parent '1' --binlog 'mysql-bin' --mysqld-args --defaults-file/opt/bitnami/mariadb/conf/my.cnf --basedir/opt/bitnami/mariadb --datadir/bitnami/mariadb/data --socket/opt/bitnami/mariadb/tmp/mysql.sock --pid-file/opt/bitnami/mariadb/tmp/mysqld.pid --wsrep_node_namestg-mariadb-galera-1 --wsrep_node_address100.124.161.47 --wsrep_cluster_namegalera --wsrep_cluster_addressgcomm://stg-mariadb-galera-headless.stg.svc.cluster.local --wsrep_sst_methodmariabackup --wsrep_sst_authmariabackup:TRHCIIKBdlyLg2vMrSqDWE@@'
WSREP_SST: [INFO] Streaming with xbstream (20200804 15:28:18.119)
WSREP_SST: [INFO] Using socat as streamer (20200804 15:28:18.124)
WSREP_SST: [INFO] Evaluating timeout -k 110 100 socat -u TCP-LISTEN:4444,reuseaddr stdio | mbstream -x; RC( ${PIPESTATUS[@]} ) (20200804 15:28:18.219)
2020-08-04 15:28:18 2 [Note] WSREP: Prepared SST request: mariabackup|100.124.161.47:4444/xtrabackup_sst//1
2020-08-04 15:28:18 2 [Note] WSREP: ####### IST uuid:6cf9ed1b-d54b-11ea-8cf7-5fde2825032b f: 1732, l: 1733, STRv: 3
2020-08-04 15:28:18 2 [Note] WSREP: IST receiver addr using tcp://100.124.161.47:4568
2020-08-04 15:28:18 2 [Note] WSREP: Prepared IST receiver for 1732-1733, listening at: tcp://100.124.161.47:4568
2020-08-04 15:28:18 0 [Note] WSREP: Member 0.0 (stg-mariadb-galera-1) requested state transfer from '*any*'. Selected 1.0 (stg-mariadb-galera-0)(SYNCED) as donor.
2020-08-04 15:28:18 0 [Note] WSREP: Shifting PRIMARY -> JOINER (TO: 1733)
2020-08-04 15:28:18 2 [Note] WSREP: Requesting state transfer: success, donor: 1
2020-08-04 15:28:19 0 [Note] WSREP: 1.0 (stg-mariadb-galera-0): State transfer to 0.0 (stg-mariadb-galera-1) complete.
2020-08-04 15:28:19 0 [Note] WSREP: Member 1.0 (stg-mariadb-galera-0) synced with group.
WSREP_SST: [INFO] xtrabackup_ist received from donor: Running IST (20200804 15:28:19.374)
WSREP_SST: [INFO] Galera co-ords from recovery: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1731 0 (20200804 15:28:19.413)
WSREP_SST: [INFO] Total time on joiner: 0 seconds (20200804 15:28:19.432)
WSREP_SST: [INFO] Removing the sst_in_progress file (20200804 15:28:19.443)
2020-08-04 15:28:19 3 [Note] WSREP: SST received
2020-08-04 15:28:19 3 [Note] WSREP: Server status change joiner -> initializing
2020-08-04 15:28:19 3 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2020-08-04 15:28:19 0 [Warning] The parameter innodb_file_format is deprecated and has no effect. It may be removed in future releases. See https://mariadb.com/kb/en/library/xtradbinnodb-file-format/
2020-08-04 15:28:19 0 [Note] InnoDB: Using Linux native AIO
2020-08-04 15:28:19 0 [Note] InnoDB: Mutexes and rw_locks use GCC atomic builtins
2020-08-04 15:28:19 0 [Note] InnoDB: Uses event mutexes
2020-08-04 15:28:19 0 [Note] InnoDB: Compressed tables use zlib 1.2.11
2020-08-04 15:28:19 0 [Note] InnoDB: Number of pools: 1
2020-08-04 15:28:19 0 [Note] InnoDB: Using SSE2 crc32 instructions
2020-08-04 15:28:19 0 [Note] mysqld: O_TMPFILE is not supported on /opt/bitnami/mariadb/tmp (disabling future attempts)
2020-08-04 15:28:19 0 [Note] InnoDB: Initializing buffer pool, total size 2G, instances 8, chunk size 128M
2020-08-04 15:28:19 0 [Note] InnoDB: Completed initialization of buffer pool
2020-08-04 15:28:19 0 [Note] InnoDB: If the mysqld execution user is authorized, page cleaner thread priority can be changed. See the man page of setpriority().
2020-08-04 15:28:19 0 [Note] InnoDB: Setting log file ./ib_logfile101 size to 134217728 bytes
2020-08-04 15:28:19 0 [Note] InnoDB: Setting log file ./ib_logfile1 size to 134217728 bytes
2020-08-04 15:28:19 0 [Note] InnoDB: Renaming log file ./ib_logfile101 to ./ib_logfile0
2020-08-04 15:28:19 0 [Note] InnoDB: New log files created, LSN181125986
2020-08-04 15:28:20 0 [Note] WSREP: (1e4b81b8-bbba, 'tcp://0.0.0.0:4567') turning message relay requesting off
2020-08-04 15:28:20 0 [Note] InnoDB: 128 out of 128 rollback segments are active.
2020-08-04 15:28:20 0 [Note] InnoDB: Creating shared tablespace for temporary tables
2020-08-04 15:28:20 0 [Note] InnoDB: Setting file './ibtmp1' size to 12 MB. Physically writing the file full; Please wait ...
2020-08-04 15:28:20 0 [Note] InnoDB: File './ibtmp1' size is now 12 MB.
2020-08-04 15:28:20 0 [Note] InnoDB: Waiting for purge to start
2020-08-04 15:28:20 0 [Note] InnoDB: 10.4.13 started; log sequence number 181126156; transaction id 2784
2020-08-04 15:28:20 0 [Note] InnoDB: Loading buffer pool(s) from /bitnami/mariadb/data/ib_buffer_pool
2020-08-04 15:28:20 0 [Note] Plugin 'FEEDBACK' is disabled.
/opt/bitnami/mariadb/sbin/mysqld, Version: 10.4.13-MariaDB-log (Source distribution). started with:
Tcp port: 3306 Unix socket: /opt/bitnami/mariadb/tmp/mysql.sock
Time Id Command Argument
2020-08-04 15:28:20 0 [Note] Server socket created on IP: '0.0.0.0'.
2020-08-04 15:28:20 0 [Note] WSREP: wsrep_init_schema_and_SR (nil)
2020-08-04 15:28:20 0 [Note] WSREP: Server initialized
2020-08-04 15:28:20 0 [Note] WSREP: Server status change initializing -> initialized
2020-08-04 15:28:20 0 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2020-08-04 15:28:20 3 [Note] WSREP: Server status change initialized -> joined
2020-08-04 15:28:20 3 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2020-08-04 15:28:20 3 [Note] WSREP: Recovered position from storage: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1731
2020-08-04 15:28:20 3 [Note] WSREP: Recovered view from SST:
id: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1731
status: primary
protocol_version: 4
capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO
final: no
own_index: -1
members(2):
0: ab31edb0-d666-11ea-ab39-de43af719e08, stg-mariadb-galera-0
1: c788c5e8-d666-11ea-bca6-7ecd4cb5e9aa, stg-mariadb-galera-1
2020-08-04 15:28:20 3 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2020-08-04 15:28:20 11 [Note] WSREP: Starting applier thread 11
2020-08-04 15:28:20 12 [Note] WSREP: Recovered cluster id 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b
2020-08-04 15:28:20 13 [Note] WSREP: Starting applier thread 13
2020-08-04 15:28:20 0 [Note] Reading of all Master_info entries succeeded
2020-08-04 15:28:20 0 [Note] Added new Master_info '' to hash table
2020-08-04 15:28:20 0 [Note] /opt/bitnami/mariadb/sbin/mysqld: ready for connections.
Version: '10.4.13-MariaDB-log' socket: '/opt/bitnami/mariadb/tmp/mysql.sock' port: 3306 Source distribution
2020-08-04 15:28:20 15 [Note] WSREP: Starting applier thread 15
2020-08-04 15:28:20 3 [Note] WSREP: SST received: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1731
2020-08-04 15:28:20 2 [Note] WSREP: Installed new state from SST: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1731
2020-08-04 15:28:20 0 [Note] WSREP: Joiner monitor thread ended with total time 3 sec
2020-08-04 15:28:20 2 [Note] WSREP: Receiving IST: 2 writesets, seqnos 1732-1733
2020-08-04 15:28:20 0 [Note] WSREP: ####### IST applying starts with 1732
2020-08-04 15:28:20 0 [Note] WSREP: ####### IST current seqno initialized to 1732
2020-08-04 15:28:20 0 [Note] WSREP: Receiving IST... 0.0% (0/2 events) complete.
2020-08-04 15:28:20 0 [Note] WSREP: REPL Protocols: 10 (5)
2020-08-04 15:28:20 0 [Note] WSREP: Service thread queue flushed.
2020-08-04 15:28:20 0 [Note] WSREP: ####### Assign initial position for certification: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1731, protocol version: 5
2020-08-04 15:28:20 0 [Note] WSREP: REPL Protocols: 10 (5)
2020-08-04 15:28:20 0 [Note] WSREP: ####### Adjusting cert position: 1731 -> 1732
2020-08-04 15:28:20 0 [Note] WSREP: Service thread queue flushed.
2020-08-04 15:28:20 0 [Note] WSREP: Lowest cert index boundary for CC from ist: 1732
2020-08-04 15:28:20 0 [Note] WSREP: Min available from gcache for CC from ist: 1402
2020-08-04 15:28:20 0 [Note] WSREP: IST preload starting at 1733
2020-08-04 15:28:20 11 [Note] WSREP:
View:
id: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1732
status: primary
protocol_version: 4
capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO
final: no
own_index: -1
members(1):
0: ab31edb0-d666-11ea-ab39-de43af719e08, stg-mariadb-galera-0
2020-08-04 15:28:20 11 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2020-08-04 15:28:20 0 [Note] WSREP: REPL Protocols: 10 (5)
2020-08-04 15:28:20 0 [Note] WSREP: ####### Adjusting cert position: 1732 -> 1733
2020-08-04 15:28:20 0 [Note] WSREP: Service thread queue flushed.
2020-08-04 15:28:20 0 [Note] WSREP: Lowest cert index boundary for CC from ist: 1733
2020-08-04 15:28:20 0 [Note] WSREP: Min available from gcache for CC from ist: 1402
2020-08-04 15:28:20 0 [Note] WSREP: Receiving IST...100.0% (2/2 events) complete.
2020-08-04 15:28:20 13 [Note] WSREP:
View:
id: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1733
status: primary
protocol_version: 4
capabilities: MULTI-MASTER, CERTIFICATION, PARALLEL_APPLYING, REPLAY, ISOLATION, PAUSE, CAUSAL_READ, INCREMENTAL_WS, UNORDERED, PREORDERED, STREAMING, NBO
final: no
own_index: 0
members(2):
0: 1e4b81b8-d667-11ea-bbba-a6360c66a8a4, stg-mariadb-galera-1
1: ab31edb0-d666-11ea-ab39-de43af719e08, stg-mariadb-galera-0
2020-08-04 15:28:20 13 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2020-08-04 15:28:20 2 [Note] WSREP: Draining apply monitors after IST up to 1733
2020-08-04 15:28:20 2 [Note] WSREP: IST received: 6cf9ed1b-d54b-11ea-8cf7-5fde2825032b:1733
2020-08-04 15:28:20 2 [Note] WSREP: Lowest cert index boundary for CC from sst: 1733
2020-08-04 15:28:20 2 [Note] WSREP: Min available from gcache for CC from sst: 1402
2020-08-04 15:28:20 0 [Note] WSREP: 0.0 (stg-mariadb-galera-1): State transfer from 1.0 (stg-mariadb-galera-0) complete.
2020-08-04 15:28:20 0 [Note] WSREP: Shifting JOINER -> JOINED (TO: 1733)
2020-08-04 15:28:20 0 [Note] WSREP: Member 0.0 (stg-mariadb-galera-1) synced with group.
2020-08-04 15:28:20 0 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 1733)
2020-08-04 15:28:20 2 [Note] WSREP: Server stg-mariadb-galera-1 synced with group
2020-08-04 15:28:20 2 [Note] WSREP: Server status change joined -> synced
2020-08-04 15:28:20 2 [Note] WSREP: Synchronized with group, ready for connections
2020-08-04 15:28:20 2 [Note] WSREP: wsrep_notify_cmd is not defined, skipping notification.
2020-08-04 15:28:20 17 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:28:21 18 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:28:30 19 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:28:31 20 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:28:31 0 [Note] InnoDB: Buffer pool(s) load completed at 200804 15:28:31
2020-08-04 15:28:40 21 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:28:41 22 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:28:46 23 [Warning] Access denied for user 'root'@'localhost' (using password: YES)
2020-08-04 15:28:50 24 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:28:51 25 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:28:56 26 [Warning] Access denied for user 'root'@'localhost' (using password: YES)
2020-08-04 15:29:00 27 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:29:01 28 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:29:06 29 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:29:06 30 [Warning] Access denied for user 'root'@'localhost' (using password: YES)
2020-08-04 15:29:10 31 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:29:11 32 [Warning] Access denied for user 'root'@'127.0.0.1' (using password: YES)
2020-08-04 15:29:16 33 [Warning] Access denied for user 'root'@'localhost' (using password: YES)
Deleting the PVC of that pod and then deleting the pod solved the problem. I would be very happy to understand why this is happening.
Hi, can you provide more information about your use case?
Are you setting the password using the --set flag? When the chart is installed the first time without setting a password, a random password is generated, stored using secrets, and used to initialize the data in the PVC. If you are upgrading an existing deployment, you need to specify the password values; otherwise a new random password is generated and it won't match the one persisted in the PVC.
In the same way, if it is a new installation but there was a previous one with the same name, it is going to reuse the same PVC, so it's possible that the same issue happens because helm delete
doesn't remove the PVCs; you need to manually remove them with kubectl delete pvc ...
@carrodher Chart version mariadb-galera-3.1.3. All defaults except for passwords and these:
galera.forceSafeToBootstrap: true
persistence.storageClass: "rook-ceph-block"
extraEnvVars:
I was not able to reproduce the issue using this configuration, I think the issue is caused because of the random password that needs to be removed or reused in the next deployments:
When the chart is installed the first time without setting a password, a random password is generated, stored using secrets, and used to initialize the data in the PVC. If you are upgrading an existing deployment, you need to specify the password values; otherwise a new random password is generated and it won't match the one persisted in the PVC.
In the same way, if it is a new installation but there was a previous one with the same name, it is going to reuse the same PVC, so it's possible that the same issue happens because helm delete
doesn't remove the PVCs; you need to manually remove them with kubectl delete pvc ...
Description: Followed this doc mariadb-galera to install mariadb-galera on Kubernetes.
But the mariadb-galera-0 pod always failed.
Checking the pod log, the message was:
Then I entered the pod and executed the command
mysql -u root -p$MARIADB_ROOT_PASSWORD
and got the message 'Access denied for user 'root'@'localhost' (using password: YES)' there too. I then tried the plain mysql command (no password) and logged in to mariadb successfully.
Here is my values.yaml
Steps to reproduce the issue:
helm install --name mariadb-galera -f values.yaml bitnami/mariadb-galera
Describe the results you received: here is the pod log
Describe the results you expected: I expected the three mariadb-galera pods to be running.
Version
docker version:
docker info:
Additional environment details (AWS, VirtualBox, Docker for Mac, physical, etc.): Kubernetes version: v1.16.3