Closed Ryonez closed 6 years ago
Your conclusion may be particular to you, as 2 team members updated containers with live data, 1 before the version was made public and the other after.
Both times there were no issues. The pre-release test was with multiple databases; the post-release was on the droplet that houses our old forum (admittedly the forum is now dormant, but it was nonetheless live data).
More information is required here; what particular database you had problems with would be a good start.
Not sure how to answer this really. I gutted the database files for the services I used, I think. So the keycloak, nextcloud and teamspeak db folders.
It took clearing the appdata folder for the docker container and triggering the image to build a new db for it to start successfully. Which means I can't get the data I need across.
How do I get at the logs for this? I'm convinced the issue is something the image is doing; I just need to know where it's failing. The logs in the log folder aren't really human-readable.
you set the log options in the /config/custom.cnf file.
It seems that enabling general_log does nothing. All I have are the bin files, and they aren't exactly easy to read.
uncomment the following lines in the file:

```
general_log_file = /config/log/mysql/mysql.log
general_log = 1
```

and i have log output to that location:

```
sparklyballs@Docker-Slave0:~/mariadb-test/log/mysql$ cat mysql.log
/usr/sbin/mysqld, Version: 10.3.9-MariaDB-1:10.3.9+maria~bionic-log (mariadb.org binary distribution). started with:
Tcp port: 3306  Unix socket: /var/run/mysqld/mysqld.sock
Time    Id Command    Argument
```
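Pulling those pieces together, the relevant fragment of /config/custom.cnf would look something like this (a sketch: placing the lines under `[mysqld]` is my assumption, and the container needs a restart after editing):

```ini
[mysqld]
# enable the general query log (off by default)
general_log      = 1
general_log_file = /config/log/mysql/mysql.log
```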
Done:
And missing :(
did you restart the container after editing the file ?
I shut it down while editing, I'll restart it just in case.
No dice, and no log file still.
Question, have you tried adding innodb_force_recovery = 6
to the config to see if that fixes this issue?
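A note for anyone trying this: innodb_force_recovery = 6 is the most aggressive recovery level MariaDB offers; InnoDB comes up effectively read-only, and levels above 4 can make damage permanent, so it's a last resort for dumping data out, not a fix. The fragment would go in /config/custom.cnf:

```ini
[mysqld]
# last-resort setting: InnoDB starts read-only at level 6;
# use it only to mysqldump what you can, then remove the line and rebuild
innodb_force_recovery = 6
```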
You need to have the correct permissions on the custom.cnf file also, or else it doesn't load it. You edited it as root, so you need to change it back to nobody:users.
What does it need to be? Currently:
0640 or 0644, can't remember exactly which of those 2,
but it would normally say something like "unable to load world readable file" if it isn't correct.
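A sketch of restoring the expected permissions, using a throwaway file to illustrate; point CNF at your real custom.cnf instead (nobody:users matches the default PUID/PGID of the linuxserver images):

```shell
CNF=${CNF:-$(mktemp)}         # stand-in file; point this at your real custom.cnf
chmod 0644 "$CNF"             # readable by the container, not world-writable
# chown nobody:users "$CNF"   # needs root; nobody:users matches the image defaults
stat -c '%a' "$CNF"           # prints: 644
```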
I think it's already on 644, isn't it? I wonder, I'll clear the databases and see if it loads the config.
It threw errors with just the .cnf left.
I cleaned out the appdata folder completely, and edited the new .cnf it made. It successfully created mysql.log after that.
Any chance there's a log for the daemon somewhere that might show the failure?
Help! I updated my docker image today to 137 from 130 or something, and have this issue as well. I cannot start my nextcloud anymore :(
What does clearing appdata mean? Have I lost all my data? The startup is stuck on:

```
180916 12:47:35 mysqld_safe Logging to syslog.
180916 12:47:35 mysqld_safe Starting mysqld daemon with databases from /config/databases
```

I haven't changed the docker container. Everything worked before :(
@godfuture Do you have backups?
If so, can you restore them and see if you can update to version 135 without issue?
Well, I do, but it's from April. So I guess it might be useful to test the successful upgrade to 135, but not to recover my data. Anyhow, I will create another container and test the upgrade to 135.
Why do you think the data got corrupted?
Well, read the initial post I made >.< If you don't restore the data, it'll appear to fail, I feel.
I would like to see what happens if you restore the data and upgrade from 130 to 135. If that works, but it then dies when you try to go to 136, then we know that 136 is the failure point, which is what this issue is about: "linuxserver/mariadb:136 will cause complete data loss if you attempt to update to it".
Looking at this after I finally crawl out of bed and read it properly, I realised you said your backup is from April.
I'm sorry to say, I've been unable to recover the databases myself. If there's a way, I do not know what it is.
If we want to get further, we need to know what happens. As @Ryonez already mentioned in https://github.com/linuxserver/docker-mariadb/issues/22#issuecomment-421168474, what do @sparklyballs and others recommend doing to increase the log accuracy?
the log is enabled by uncommenting the lines i have mentioned in an earlier comment here.
i'm not in control of what is in or not in the log however.
@godfuture
We can still find out what version fails for you by you updating to each version one by one.
I have yet to try enabling the log from the last working version and backup to see if the log will work once it goes tits up.
If you're worried about what remains of the data that is currently unable to load, back that up so you have it available. Or spin up another copy of the container that is rolled back and had the backup db restored.
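For the "back that up" step, something like the following keeps a copy of the broken state before experimenting. The container name and appdata path are assumptions (a stand-in directory is used here so the commands are runnable as shown):

```shell
# stop the container first, e.g.:  docker stop mariadb   (name is an assumption)
APPDATA=${APPDATA:-$(mktemp -d)}            # stand-in dir; use your real appdata path
mkdir -p "$APPDATA/mariadb"                 # real setups already have this folder
tar -czf mariadb-appdata-backup.tar.gz -C "$APPDATA" mariadb
tar -tzf mariadb-appdata-backup.tar.gz      # sanity-check the archive contents
```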
@Ryonez I got your point. You want to know when this issue appeared and compare the changes made in the docker file. But even if you know the changes, I would still want to know how I can fix my current release. This will also help others that have already updated their image.
@sparklyballs
> the log is enabled by uncommenting the lines i have mentioned in an earlier comment here.
> i'm not in control of what is in or not in the log however.
I understand. But as @Ryonez wrote in https://github.com/linuxserver/docker-mariadb/issues/22#issuecomment-421156681, this still isn't a game changer. I guess he meant that there have been no log files since the upgrade. Or did I misunderstand his answer? Assuming I am right, what could help bring back the log files? Did we check the wrong location? I include myself here, because I unsuccessfully searched for the logs, too.
So if we have a way to retrieve the logs, we are able to determine the damage that has been done to the container and our data.
Hi all,
I ran into the same issue last week, upgrading to 136. I'm only getting a chance to investigate now and have come across the issue raised here.
I had two instances of MariaDB running on the same Unraid server (6.5.3). One was installed running the unaltered docker image. This upgraded fine. The second was specifically setup for Nextcloud and had binlogging configured as required for Nextcloud.
Upgraded in the following order:
1) Shutdown Mariadb
2) Clicked Upgrade option on Mariadb-Nextcloud while still running (the process should cleanly shutdown, upgrade and restart)
3) Clicked Upgrade on Mariadb (from (1) above)
4) Started Mariadb
The regular Mariadb started up fine, but Mariadb-nextcloud instance failed with the repeating messages, as mentioned above:
```
180917 17:13:54 mysqld_safe Logging to syslog.
180917 17:13:54 mysqld_safe Starting mysqld daemon with databases from /config/databases
```
I've tried enabling the general_log (disabled by default), however the file does not get created, so I presume it's failing prior to creating the log.
The last backup I have is too old to use, and as this instance is only used for Nextcloud, if I can't recover it I could get Nextcloud to rescan everything and populate a new database.
the location of the log is /config/log/mysql/mysql.log, assuming you haven't changed the location in the two lines to uncomment, and the file is set 0644 with chmod.
it doesn't show much though, was my point about having control over what is in or not in it.
so nextcloud is a common theme here.
perhaps checking the logs for nextcloud may give more information...
The nextcloud logs don't show much; it was shut down prior to upgrading Mariadb, and when started up afterwards it failed to connect to Mariadb, which was when I realised there was a problem.
It repeats the following:
```
PHP Fatal error: Uncaught Doctrine\DBAL\DBALException: Failed to connect to the database: An exception occured in driver: SQLSTATE[HY000] [2006] MySQL server has gone away in /config/www/nextcloud/lib/private/DB/Connection.php:64
Stack trace:
```
no one in the team is able to replicate this; updates to our servers and personal instances within the team have all passed without any issues, including at least 1 with nextcloud.
Same story here. I just know that I had to "tweak" my database for nextcloud. For example, I had to migrate to utf8mb4. Maybe there are incompatibilities between the NC requirements and the latest image release?
Could we somehow start mysql in commandline with standard output?
> Could we somehow start mysql in commandline with standard output?
not that i'm aware of.
I had also updated to utf8mb4 a while ago.
> I had also updated to utf8mb4 a while ago.
via custom.cnf ?
That's the same guide I followed to change the custom.cnf and migrate any tables as required.
I'm using the Nextcloud News app which was displaying errors about requiring utf8mb4.
The migration is experimental, not utf8mb4 support (https://mariadb.com/kb/en/library/unicode/).
As it worked before, what might have changed in this corner?
to me, "experimental" and "even more experimental" mean things are likely to break.
> I'm using the Nextcloud News app which was displaying errors about requiring utf8mb4.
Exactly the same applies to me. We could either keep a failing setup or do this migration... and here we are.
The news app is one of the most popular apps in the Nextcloud app store. I guess this ticket will gain broader publicity sooner or later...
But we still don't know the real cause here. Could someone test the update to 136 with mb4 being active or the tables migrated according to the NC guide? Then we'd know for sure...
in that guide, just following the first step breaks the mariadb latest pull.

```
[mysqld]
innodb_large_prefix=true
innodb_file_format=barracuda
innodb_file_per_table=1
```

adding that to the custom.cnf breaks the container.
and taking them out again, the container works...
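That would match the 10.3.9 version string in the log output earlier in the thread: MariaDB 10.3 removed the innodb_large_prefix and innodb_file_format system variables (their behaviour became the default), and mysqld refuses to start when the config still sets an unknown variable. This is my reading, not an official statement from the team. A 10.3-compatible fragment would keep only the surviving option:

```ini
[mysqld]
# innodb_large_prefix / innodb_file_format were removed in MariaDB 10.3;
# large index prefixes and the Barracuda format are the defaults there
innodb_file_per_table = 1
```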
Just tried removing those and the problem remains. The logs repeat the same messages mentioned above.
i'm sorry, but using a guide that is prefixed "even more experimental", as i've already said, means shit will break.....
@sparklyballs
in that guide, just following the first step breaks mariadb latest pull.
[mysqld] innodb_large_prefix=true innodb_file_format=barracuda innodb_file_per_table=1
adding that to the custom.cnf breaks the container.
Does this mean

```
[mysqld]
innodb_large_prefix=true
innodb_file_format=barracuda
innodb_file_per_table=1
```

is not supported anymore?
Highlighting that experimental changes might cause damage is wise. But I guess we simply strive for constructive help now.
Considering I don't know what changed from 135 to 136, we should inform Nextcloud that this guide is incompatible with the latest linuxserver.io mariadb image!
i'm not sure how much traction that would get, they most likely will reiterate what i have said about experimental features etc....
i would try taking the offending lines out , reverting to 135 by adding :135 to the linuxserver/mariadb section of the run command and adding
innodb_force_recovery = 6
to the mysqld section of the custom.cnf file
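If the container is managed via docker-compose rather than a raw run command, the equivalent of "adding :135" is pinning the image tag. A sketch; service name, IDs and volume path are placeholders:

```yaml
# docker-compose.yml fragment (paths and IDs are placeholders)
services:
  mariadb:
    image: linuxserver/mariadb:135   # pin to the last known-good build
    environment:
      - PUID=1000
      - PGID=1000
    volumes:
      - /path/to/appdata/mariadb:/config
```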
That guide is no longer present for versions 12, 13 & 14.
@sparklyballs
> i'm not sure how much traction that would get, they most likely will reiterate what i have said about experimental features etc....
> i would try taking the offending lines out, reverting to 135 by adding :135 to the linuxserver/mariadb section of the run command and adding
> innodb_force_recovery = 6
> to the mysqld section of the custom.cnf file
I did the following in .mariadb/config/custom.cnf:

```
innodb_large_prefix = true
innodb_file_format = barracuda
innodb_file_per_table = 1
```
...and my NC instance started again with 136!
Though I am not sure what this change means for utf8mb4 support and the NC news app...
@CHBMB I think the path just changed from 12 onwards. https://docs.nextcloud.com/server/14/admin_manual/configuration_database/mysql_4byte_support.html
Host OS: unRaid 6.5.3
Alright, after testing... The linuxserver/mariadb version 136 corrupts a database if you're updating, in my case from version 135. The converted version 136 database cannot be taken back to linuxserver/mariadb:135; it'll fail to load. Backups of linuxserver/mariadb:135 databases will be corrupted on load. Running a clean linuxserver/mariadb:136 is fine; it seems to create and run a new database fine.
My current conclusion: linuxserver/mariadb:136 will cause complete data loss if you attempt to update to it. It should be avoided if you currently use linuxserver/mariadb:135.