alexdrl opened 6 months ago
Could we maybe filter similar to this?:
The filter won't do anything because parent is zero on all rows.
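To illustrate (a hypothetical sketch, not the actual migration SQL — the fileid join column is an assumption): on a fresh migration every row still has parent = 0, so the proposed predicate matches the whole table and excludes nothing.

```sql
-- Hypothetical sketch of the migration with the proposed filter.
-- Column names other than parent are assumptions. Before the
-- migration runs, parent is 0 on every row, so the WHERE clause
-- matches all 200k rows and filters out nothing.
UPDATE oc_memories m
SET parent = (SELECT f.parent
              FROM oc_filecache f
              WHERE f.fileid = m.fileid)
WHERE m.parent = 0;
```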
200k is actually tiny so I'm surprised it's slow. Were you upgrading from the command line? This is a one time thing anyway, so I don't see much advantage in optimizing here.
Yes, I was updating with the occ upgrade command, since updating the app from the Web UI got stuck too, but I didn't see any additional logs on the CLI. It might be my Postgres performance, as it runs on an NFS backend (though the backend is a powerful NAS server with ZFS), so unless I try to re-run the SQL query, I don't know... To get out of the stuck state, I checked that the parent column was already filled and then created the migration row in the oc_migrations table...
Actually, I can run the raw SQL query again to rule out a random performance concern. Previously it was stuck for 15 minutes already...
Yeah, 15 minutes is too much; something could be wrong with your db (possibly also causing other slowdown issues). It took me <5s on a similarly sized DB.
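For what it's worth, a quick way to spot a statement that has been running for that long is the standard pg_stat_activity view (plain PostgreSQL, nothing app-specific):

```sql
-- List non-idle backends, longest-running first.
SELECT pid,
       now() - query_start AS runtime,
       state,
       left(query, 80) AS query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC NULLS LAST;
```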
It keeps on running
I remember that the update using the UI timed out, but maybe the migration kept running in the background and eventually finished? Adding this kind of filter could help with a migration retry in those cases.
I will try to get a pgbench run done on the server (not a Postgres expert, btw), just to see if there is something very odd there.
adding this kind of filter could help with a migration retry in those cases
Ah, I didn't realise you retried the migration. That's a good point. It'll still scan everything though, because there's no index on parent (nor is one needed).
I'm still not sure if this is worth the optimization though; it's a one-time thing. Let's see if someone else comes up with the same issue.
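If anyone wants to verify the scan behaviour: since there is no index on parent, even the filtered variant has to read the whole table. A minimal sketch with plain EXPLAIN (standard PostgreSQL; the count query is just for illustration):

```sql
-- Expect a Seq Scan node in the plan: with no index on parent,
-- the planner cannot target only the parent = 0 rows.
EXPLAIN
SELECT count(*) FROM oc_memories WHERE parent = 0;
```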
Definitely do check your DB. I just set all parent to 0 and re-ran the update query; it takes ~1s for 40k rows for me (I'm running MariaDB though).
Subsequently
100% agree that it could be my Postgres performance; just raising my hand in case it's an issue that could happen to somebody else...
Will try to get a pgbench run done today and post the results for comparison (not sure if that would be a meaningful metric, though).
Describe the bug Update to Memories 7.1.0 got stuck on PostgreSQL, caused by a migration SQL statement. My database had 200k rows in oc_memories and 2,000k in oc_filecache.
To Reproduce Updating to version 7.1.0 was really slow on my machine (not sure if Postgres is the one to blame); connecting with pgAdmin, I found this SQL stuck executing.
The content of the SQL migration is similar to the one here
Could a join here be helpful for performance? Could we maybe filter similar to this?:
Or maybe that is a dumb idea because the whole UPDATE is transactional?
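For illustration only (column names are assumptions, not the app's actual schema): in PostgreSQL, an UPDATE ... FROM join lets the planner pick a hash or merge join instead of re-running a correlated subquery per row, which can matter against oc_filecache's 2,000k rows. The statement is still a single transaction either way.

```sql
-- Hypothetical join-based rewrite; f.fileid = m.fileid is an
-- assumed join key. UPDATE ... FROM is PostgreSQL-specific syntax.
UPDATE oc_memories m
SET parent = f.parent
FROM oc_filecache f
WHERE f.fileid = m.fileid;
```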
Platform:
Additional context
PS: Thanks for the work on the Memories app for Nextcloud, I really, really love it!