clarkwinkelmann closed this pull request 2 years ago
I would prefer to:
Other than that it seems fine, but I'm worried about the consequences for support. Will people understand it? If things actually start to break, we'll probably need to amend this code for simplicity again.
@luceos do you want to comment on the changes before I merge? I'll try to merge and publish this tomorrow evening.
I notice the Composer requirements were bumped to Flarum 1.3.1 but I don't see any technical reason for it. I think I'll move it back to 1.2.0 since it works fine on that Flarum version as well.
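For reference, that constraint lives in the extension's composer.json; lowering it would amount to something like this (a sketch of the relevant section, not the actual file):

```json
{
    "require": {
        "flarum/core": "^1.2.0"
    }
}
```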
@clarkwinkelmann my apologies for the delay. This is looking fine to me. Yes, please lower the restriction; it's become a habit of mine to push it up no matter what has been modified.
I think we should then tag this as 2.0.0-beta.2 and give it a spin on our platform for testing.
@clarkwinkelmann @luceos the Flarum/core requirement was bumped in this instance due to the default settings extender not registering defaults correctly in previous versions. See https://github.com/flarum/framework/pull/3439
For that reason, I'd suggest leaving the core requirement at 1.3.1
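For context, that fix concerns default registration through the `Settings` extender. A minimal sketch of the kind of usage that depends on it (the setting key here is hypothetical):

```php
<?php

// extend.php of a hypothetical extension. The Settings extender's
// default() registration is what flarum/framework#3439 fixed, which
// is why the requirement is kept at flarum/core 1.3.1.

use Flarum\Extend;

return [
    (new Extend\Settings())
        ->default('my-extension.risky_performance_improvements', false),
];
```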
Changes proposed in this pull request: This PR adds some output to the command in order to report progress and timing information from the running job.
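The PR description doesn't show the code itself; purely as an illustration, progress and timing output in a Symfony-style console command (which Flarum commands build on) could look like the sketch below. The command name and messages are hypothetical:

```php
<?php

use Symfony\Component\Console\Command\Command;
use Symfony\Component\Console\Input\InputInterface;
use Symfony\Component\Console\Output\OutputInterface;

class BuildCommand extends Command
{
    protected static $defaultName = 'app:build';

    protected function execute(InputInterface $input, OutputInterface $output): int
    {
        $start = microtime(true);
        $output->writeln('Starting build...');

        // ... the chunked processing happens here, with periodic
        // progress lines such as "Processed 10000 discussions" ...

        $elapsed = round(microtime(true) - $start, 2);
        $output->writeln("Finished in {$elapsed}s");

        return Command::SUCCESS;
    }
}
```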
It also makes the following performance changes:

- `SELECT` only the required columns in the SQL query, which saves a few seconds on my dataset.
- Tune the chunk sizes used when iterating over the data.

The `SELECT` improvement and chunk sizes are controlled by a new "risky performance improvements" checkbox in the admin panel, because they might conflict with some extensions (see the sketch below). I didn't experience any conflict during my tests, but as explained in the source code comments, this will break with custom slug drivers that depend on other columns, and it could also break with some visibility scopes if they do fancy raw SQL in there.

I have done tests on a dataset with close to a million discussions and 150k users.
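As a sketch of what the gated optimisation could look like (the setting key, column list, and chunk sizes here are illustrative assumptions, not the PR's actual values):

```php
<?php

use Flarum\Discussion\Discussion;
use Flarum\Settings\SettingsRepositoryInterface;

function eachDiscussion(SettingsRepositoryInterface $settings, callable $callback): void
{
    $risky = (bool) $settings->get('my-extension.risky_performance_improvements');

    $query = Discussion::query();

    if ($risky) {
        // Fetch only the columns the job needs. Custom slug drivers or
        // visibility scopes that read other columns (or run raw SQL)
        // can break when models are hydrated with a partial column set.
        $query->select(['id', 'slug', 'last_posted_at']);
    }

    // Larger chunks mean fewer round trips but more memory per batch.
    $query->chunkById($risky ? 5000 : 500, function ($discussions) use ($callback) {
        foreach ($discussions as $discussion) {
            $callback($discussion);
        }
    });
}
```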
Reviewers should focus on: This has been well tested, but maybe we want to try it out on Blomstra with the larger dataset before including it in the official release?
Confirmed