@lpil @samphilipd Can I get a review of this?
Thanks for the PR dude. Check the feedback and I'll make sure it gets merged in once everything is shipshape.
@samphilipd As an aside: I'm wondering if there are any other breaking changes that might go into 2.0. Is there a plan for what to do when the sequence runs out of id values?
@tpitale Yup, it resets from 1 again. Unless the user has over 2 billion jobs queued, there won't be any problem.
@samphilipd Just let me know if you want me to change (or limit) the priority scale. The choice was fairly arbitrary.
Yeah LGTM. Would like to get a sanity check from @lpil if possible
I'll take a look at this when I get a chance over the next few days :)
@samphilipd @lpil How would you feel if I made this into two PRs: one for the migration that adds the column and index (which requires no downtime), and one for the code that uses the column, which would become Rihanna v2?
Another route we could take is a generator for the migration; people on the Slack channel seem to prefer that pattern. Here's an example: https://github.com/ueberauth/guardian_db/blob/master/lib/mix/tasks/guardian_db.gen.migration.ex
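For reference, a generator following that pattern might look roughly like the sketch below. This is only an illustration modelled on the guardian_db task; the task name, file path, and template contents are assumptions, not anything in this PR.

```elixir
# Hypothetical sketch of a migration generator, modelled on guardian_db's
# mix task. Module name, file name, and template contents are assumptions.
defmodule Mix.Tasks.Rihanna.Gen.Migration do
  use Mix.Task
  import Mix.Generator

  @shortdoc "Generates a migration to upgrade the Rihanna jobs table"

  def run(_args) do
    timestamp = Calendar.strftime(DateTime.utc_now(), "%Y%m%d%H%M%S")
    path = Path.join("priv/repo/migrations", "#{timestamp}_upgrade_rihanna_jobs.exs")

    create_file(path, """
    defmodule Repo.Migrations.UpgradeRihannaJobs do
      use Ecto.Migration

      def change do
        # Add the new column and index here.
      end
    end
    """)
  end
end
```

Users would run `mix rihanna.gen.migration`, review the generated file, and then run their migrations as usual.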
While having two releases does make deployment easier, it doesn't mean that users will experience no problems without explicit guidance. There is no guarantee that they will deploy the first version; they may skip directly to the second. An upgrade guide will still be required.
@lpil

> An upgrade guide will still be required.

I have agreed to write an upgrade guide to the best of my ability. This is now a discussion about how to give users an easier upgrade path, with fewer issues, in addition to that upgrade guide.
We cannot guarantee that nobody will have downtime, for example if they do not read the guide. At some point, their database management is on them, and any downtime is as well.
We can guarantee no problems during upgrades: we would simply query the schema to infer the version and then use the appropriate queries from then on.
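A minimal sketch of that detection, assuming a Postgrex connection, Rihanna's default `rihanna_jobs` table name, and a hypothetical `priority` column:

```elixir
# Rough sketch: inspect information_schema once to decide which set of
# queries to use. The table and column names here are assumptions.
defmodule Rihanna.SchemaCheck do
  def priority_column_present?(pg) do
    {:ok, %Postgrex.Result{num_rows: num_rows}} =
      Postgrex.query(
        pg,
        """
        SELECT 1
        FROM information_schema.columns
        WHERE table_name = 'rihanna_jobs'
          AND column_name = 'priority'
        """,
        []
      )

    num_rows > 0
  end
end
```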
@lpil It will require downtime. If they read the docs, we can help them to avoid errors.
We could remove the column default and create the index concurrently, which will not require downtime on this table.
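For illustration, that migration could look roughly like this Ecto sketch (the `rihanna_jobs` table and `priority` column names are assumptions; newer Ecto versions may also need `@disable_migration_lock true`):

```elixir
defmodule MyApp.Repo.Migrations.AddPriorityToRihannaJobs do
  use Ecto.Migration

  # CREATE INDEX CONCURRENTLY cannot run inside a transaction.
  @disable_ddl_transaction true

  def change do
    # Add the column without a default so Postgres does not rewrite the table.
    alter table(:rihanna_jobs) do
      add :priority, :integer
    end

    # Build the index without blocking writes to the table.
    create index(:rihanna_jobs, [:priority], concurrently: true)
  end
end
```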
Splitting the releases is just an opportunity for users to have an easier time upgrading. Also, just a suggestion.
For me the ideal solution would be to gracefully handle older versions of the schema (logging a warning or error to alert the user to the need to migrate).
One way we could do this is by having users enable a feature that requires a column in their config, leaving it off by default. The `Job` will be a bit messier, having to add/remove a column from queries everywhere, but it is doable.

I looked into doing this automatically, based on the columns available, but I think it would require setting `Config` (globally) from the `JobDispatcher`, which is not guaranteed to be a singleton. We could end up overwriting the config on initialize.
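For the explicit-config route, the flag might look something like this (the `priority_enabled` key is purely hypothetical):

```elixir
# config/config.exs — hypothetical flag name, off by default so existing
# installs keep working until they have run the new migration.
config :rihanna, priority_enabled: true
```

Query construction would then branch on `Application.get_env(:rihanna, :priority_enabled, false)` instead of inspecting the schema at runtime.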
I can take either path and update this PR. Let me know which is preferred @samphilipd @lpil. Personally, I prefer the explicitness of the first option.
@tpitale I think the simplest solution is just to take the breaking change on the chin for 2.x.
Users know that 1.x -> 2.x might break something. As long as we make it clear at the top of the README that a migration is required, and give clear instructions on how to do it, I'm OK with backwards incompatible changes.
Okay then @samphilipd. I'll just add the upgrading_v2.md and link it from the changelog.
@samphilipd Do you think I should simplify the doc I added?
@tpitale Thanks for your patience dude, I've been snowed under recently. Can I ask you to please rebase this on master? Then we'll get it merged and released.
@samphilipd Rebase done, ready for final review/merge. Thanks!
@tpitale nice one, making a release.
@samphilipd If you'd like to leave this stuff in master for a tiny bit, I can test it in our app as a complete package for a while before a final 2.0 release. Up to you!
Sounds good @tpitale
This is an attempt to address concerns from #58. It builds off the work of @KushalP.
Resolves #56.