Hi Erik.
I've already considered such a system for the next actual release, just checking if I need an extra table for that or if there's a roundcube table I can use.
My idea is to check for the exact same thing you described, but check it automatically whenever a user logs in, just like the presets.
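Roughly, such a tracking table could look like this (a hypothetical PostgreSQL sketch only, not the plugin's actual schema; the column names follow the carddav_migrations table that shows up later in this thread):

    -- one row per applied migration/patch file
    CREATE TABLE IF NOT EXISTS carddav_migrations (
        id           SERIAL PRIMARY KEY,
        filename     VARCHAR(64) NOT NULL UNIQUE,  -- name of the applied migration file
        processed_at TIMESTAMP NOT NULL DEFAULT now()
    );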
Kind regards, Benjamin
On 2014-12-03 23:34, Erik wrote:
Would it be possible to start adding SQL patch scripts to dbinit folder, and record the applied patches in a database table? MyRoundcube has a nice solution for that:
- original database SQL file is named: pgsql.initial.sql
- patch files are named: pgsql.<date>.sql
- apart from the patch itself, the files contain something like: UPDATE "system" SET value='initial|20130903' WHERE name='myrc_carddav';
- patch files must be applied in date order
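A patch file in that style might look roughly like this (invented example for illustration; only the bookkeeping UPDATE is quoted from MyRoundcube, the schema change itself is made up):

    -- hypothetical pgsql.20130903.sql
    -- the actual schema change:
    ALTER TABLE addressbooks ADD COLUMN last_sync INTEGER;
    -- the bookkeeping line that records the new patch level:
    UPDATE "system" SET value='initial|20130903' WHERE name='myrc_carddav';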
Obviously, we don't want any "myrc" references, and the choice for the table name "system" is a little strange, but actually their solution is quite handy.
This way we could even apply the new SQL patches automatically when installing a plugin update (look which patch files are there now, which ones are already applied, and apply the new ones). I am already working on a shell script for that.
Best regards, Erik
That would be great, looking forward to this new release ;-)
BR, Erik
If you want, I can send you the shell script plus some demo files that I made for MyRoundcube on a Synology NAS, could be helpful.
/Erik
Sure, that would be great.
Sent you an e-mail with the zip file.
I have created a branch with the functionality, conveniently called auto_db_migrate. For testing, it also contains a sample migration for mysql, postgres and sqlite.
Thanks for that, it looks promising. However, since only branch 51 works on my Synology NAS, I cannot test it. I guess I have to wait until you have integrated all this into the mainline.
Branch 51 was integrated into master before this branch was created, so it should work just as well.
Ok, probably I made a mistake somewhere. Now the master, 51 and auto_db_migrate branches are all working. I could not check whether the database was actually updated by auto_db_migrate, but I don't see any warnings/errors in the logs dir either.
One remark though: the auto_db_migrate version still needs the initial database to be set up manually, is this what you intended? It would be handy if that were done automagically as well.
Yes, I actually intended it that way, because otherwise the query to check for already processed migrations would return an error which would then be passed back to the client.
As this should (should!) be the last time the initial database query must be done manually after an update, I don't think this is much of a problem.
Ehm, well, but this way every new user still has to set up the database manually. Newbies especially do not know how to do that on their specific system. It would make their lives easier if the plugin did it for them.
BTW: I am working on an automatic installer package for your plugin on Synology NAS. The goal is to install the plugin at the right location, enable the plugin, and generate the database tables (if not done by the plugin!). The installer package will also support updates. If you decide to have the plugin generate the initial database after all, please let me know, so I can leave that task out of the installer package.
I just discovered the logs/carddav file; it contains the following line: [17-Dec-2014 20:20:02 +0100]: Unknown database backend: postgres
And I checked the table "carddav_migrations": it is empty (no rows). Shouldn't it contain the 0000 migration?
I do use PostgreSQL indeed, and the addressbook does show up correctly in Roundcube, so I guess it's just the migration that fails.
I just pushed another commit that should fix this problem.
If you have an idea or can point me to any documentation on how to do an initial database setup, I'm all for doing it automatically. I just really want to avoid running "CREATE TABLE IF NOT EXISTS" and similar on every login...
Ok, great, new version fixed the problem. One minor issue left: timestamp in carddav_migrations seems to use GMT instead of my timezone (GMT+1).
cat logs/carddav: [18-Dec-2014 20:05:47 +0100]: Processed migration: 0000-sample
roundcubemail=> SELECT * from carddav_migrations;
 id |  filename   |        processed_at
----+-------------+----------------------------
  1 | 0000-sample | 2014-12-18 19:05:47.026471
(1 row)
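In PostgreSQL there are two straightforward ways to handle that (just a sketch of the options, not necessarily what the plugin should do): declare the column as TIMESTAMP WITH TIME ZONE so values carry an offset, or keep UTC in the table and convert when reading:

    -- convert a UTC timestamp-without-time-zone to local time at query time
    SELECT filename,
           processed_at AT TIME ZONE 'UTC' AT TIME ZONE 'Europe/Amsterdam' AS processed_local
    FROM carddav_migrations;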
About the initial database setup: at startup, you could try "SELECT * from carddav_migrations;"; if it fails, then run the create SQL commands. Use "IF NOT EXISTS" to avoid problems with existing users (without a carddav_migrations table yet) that upgrade to this new version. Maybe also add a row with filename=dbinit after successful table creation?
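Spelled out as SQL, the steps would look roughly like this (a rough sketch only; the plugin would run these from PHP and react to the error itself):

    -- 1. probe: if this fails with "relation does not exist", the schema was never initialised
    SELECT filename FROM carddav_migrations;
    -- 2. only on that error: run the create SQL (dbinit), which uses IF NOT EXISTS and is
    --    therefore harmless for existing installations that merely lack this table
    -- 3. record the initial setup so it is never treated as pending again
    --    (assumes processed_at has a default, as in the sketch earlier in the thread)
    INSERT INTO carddav_migrations (filename) VALUES ('0000-dbinit');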
I think I have something usable now. The dbinit has been copied to dbmigrations/0000-dbinit and uses IF NOT EXISTS (as before) and should also correctly handle configured table prefixes now. It should also not destroy existing installations :-)
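Presumably the prefix handling means the migration files contain a literal TABLE_PREFIX token that the plugin replaces with the configured prefix (often empty) before executing. That is an assumption based on Erik's remark below, illustrated with an invented before/after:

    -- as shipped in the migration file (placeholder still in place, not valid SQL until substituted):
    --   INSERT INTO TABLE_PREFIXcarddav_migrations (filename) VALUES ('0000-dbinit');
    -- what actually runs on an installation configured with table prefix 'rc_':
    INSERT INTO rc_carddav_migrations (filename) VALUES ('0000-dbinit');
    -- and with the default (empty) prefix:
    INSERT INTO carddav_migrations (filename) VALUES ('0000-dbinit');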
Hi Benjamin, sorry for the late reply, I have been away and quite busy. This new version is not working for me. I don't understand how TABLE_PREFIX should work; I removed it manually from dbmigrations/0000-dbinit/postgres.sql, but it still cannot create the database automatically.
I see in logs/errors:

[23-Jan-2015 21:06:21 +0100]: DB Error: [7] ERROR: relation "carddav_addressbooks" does not exist
LINE 1: SELECT id,password FROM carddav_addressbooks WHERE user_id='...
                                ^
(SQL Query: SELECT id,password FROM carddav_addressbooks WHERE user_id='2')
in /volume1/@appstore/MailStation/roundcubemail/program/lib/Roundcube/rcube_db.php on line 467 (POST /mail/?_task=login?_task=login&_action=login)

[23-Jan-2015 21:06:21 +0100]: DB Error: [7] ERROR: relation "carddav_addressbooks" does not exist
LINE 1: SELECT * FROM carddav_addressbooks WHERE user_id='2' AND pre...
                      ^
(SQL Query: SELECT * FROM carddav_addressbooks WHERE user_id='2' AND presetname is not null)
in /volume1/@appstore/MailStation/roundcubemail/program/lib/Roundcube/rcube_db.php on line 467 (POST /mail/?_task=login?_task=login&_action=login)

[23-Jan-2015 21:06:23 +0100]: DB Error: [7] ERROR: relation "carddav_addressbooks" does not exist
LINE 1: UPDATE carddav_addressbooks SET authentication_scheme='diges...
               ^
(SQL Query: UPDATE carddav_addressbooks SET authentication_scheme='digest' WHERE id=NULL)
in /volume1/@appstore/MailStation/roundcubemail/program/lib/Roundcube/rcube_db.php on line 467 (POST /mail/?_task=login?_task=login&_action=login)

[23-Jan-2015 21:06:23 Europe/Amsterdam] PHP Fatal error: Using $this when not in object context in /volume1/@appstore/MailStation/roundcubemail/plugins/carddav/carddav_backend.php on line 2057

And in carddav.warn:

[23-Jan-2015 21:06:23 +0100]: BACKEND: (update_addressbook) [7] ERROR: relation "carddav_addressbooks" does not exist
LINE 1: UPDATE carddav_addressbooks SET authentication_scheme='diges...
               ^
Only when I create the tables manually first does it work fine: the plugin creates the 2 rows in carddav_migrations and I get the contacts in Roundcube.
Best regards, Erik
I have the same issue as Erik in the previous comment; running dbinit manually (after first removing the table prefix placeholders) fixes the problem. There are unfortunately no helpful messages in the log.
Versions: RC 1.1.4 and rcmcarddav 1.0
I should really unpublish the 1.0 tag...
Can you please try the procedure with the dev-master branch using composer?
Thanks for following up @blind-coder, but by now I have a working installation (also see #153). If I get another opportunity, I will test. Once installed (by going through some hoops), 1.0 seems to work OK, btw.
Closing this, the dbmigrations mechanism is included in the master branch.