photm5 closed this 4 years ago
Thanks for the contribution; I will review this in detail once I get some spare time on my hands.
Just a heads up: priority #1 is 100% backwards compatibility.
On 2019-09-24 05:09 -0700, Andreas Meingast wrote:

> Just a heads up: priority #1 is 100% backwards compatibility.
I wonder if it is at all possible to change types while staying backwards compatible? Someone who imported `SchemaMigration(..)` will expect a `schemaMigrationExecutedAt` function which returns a `LocalTime`, and they even expect it to come with `SchemaMigration`. So we need to keep the entire old record type around forever.
Even more: say someone uses the result of `getMigrations` in such a way that `SchemaMigration` is never mentioned; they just use the fact that it implements `Eq`. Because they don't explicitly specify a type, the type signature of `getMigrations` must keep doing so. This means we cannot use type classes to decide whether to return the old or the new record type. So we need to keep the `getMigrations` function around forever.
(Side note: this doesn't only happen when you change the return type of a function; it also happens when you change the type of an argument. Some users might not specify the type, so it must stay concrete; some users might specify the type, so it must stay the same concrete type. Hence, you can never change any data type if you want to stay 100% backwards compatible.)
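A minimal sketch of the argument (field names and implementations are hypothetical stand-ins, not the library's actual code): the second function never mentions `SchemaMigration`, yet whether it compiles depends on `getMigrations` keeping exactly this concrete return type.

```haskell
import Data.Time (LocalTime)

-- Hypothetical stand-in for the exported record.
data SchemaMigration = SchemaMigration
  { schemaMigrationName       :: String
  , schemaMigrationExecutedAt :: LocalTime
  } deriving (Eq, Show)

-- Sketch of getMigrations with its concrete return type.
getMigrations :: IO [SchemaMigration]
getMigrations = pure []

-- This caller never names SchemaMigration; it only relies on Eq.
-- Type inference still pins the element type to getMigrations'
-- signature, so changing that signature breaks such callers.
migrationsUnchanged :: IO Bool
migrationsUnchanged = do
  before <- getMigrations
  after  <- getMigrations
  pure (before == after)
```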
I have two different ideas to improve backwards compatibility:
1. Use `ZonedTime` (`LocalTime` + `TimeZone`) instead of `UTCTime`. This would enable people to convert to `LocalTime` if they want the old thing. However, (a) this would cause breakage for anyone who uses the timestamp, and (b) `ZonedTime` has no `Eq` instance, so people who compare timestamps for equality would have an extra annoyance to deal with.
2. Add the time zone as an additional attribute to `SchemaMigration`. This would most likely cause no breakage even for those who do use the timestamp. However, it's not 100% backwards compatible, because the `SchemaMigration` constructor is exported and would then take 4 arguments instead of 3. (This will cause breakage for people who serialize/deserialize `SchemaMigration`s by hand.)
Both would be better than using `UTCTime`, because you cannot even get the `LocalTime` back from the `UTCTime`. I think the second one is definitely superior because it will cause breakage for fewer people.
Edit: Actually, the second one breaks pattern matching too, so it can even affect people who don't want the timestamp at all.
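To make that breakage concrete, here is a sketch of option 2 with assumed field names (the real record's fields may differ): the constructor's arity changes, so positional pattern matches stop compiling even for callers who ignore the timestamp.

```haskell
import Data.Time (LocalTime, TimeZone)

-- Option 2, sketched with hypothetical field names: the record
-- grows a fourth field carrying the time zone.
data SchemaMigration = SchemaMigration
  { schemaMigrationName       :: String
  , schemaMigrationChecksum   :: String
  , schemaMigrationExecutedAt :: LocalTime
  , schemaMigrationTimeZone   :: TimeZone  -- new field
  }

-- Record syntax keeps working, but positional uses of the exported
-- constructor break: a pre-existing three-argument pattern match
-- like the following now fails to compile.
--
-- migrationName (SchemaMigration name _ _) = name
```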
I have another idea for increased backwards compatibility (as hinted at before, but I didn't like it until now): Make the return type polymorphic. In order to avoid duplicating the record type, simply parameterize it by the timestamp type.
I don't have time for this right now, but I'll try to update the PR in about 2 weeks from now (i.e. 2019-10-24).
The proposed polymorphic return type kind of works already, as can be seen on the `polymorphic-return` branch (currently at 8879b374c2131c08639203ccefe056b3bdd73b85) in my repo.
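The idea, roughly (names and the commented signature are my own hedged sketch, not the branch's exact code): parameterize the record by its timestamp type, and keep the old name as a synonym so existing imports of the type and its accessors continue to compile.

```haskell
import Data.Time (LocalTime)

-- The record parameterized by its timestamp type
-- (field names are assumptions).
data SchemaMigration' t = SchemaMigration
  { schemaMigrationName       :: String
  , schemaMigrationExecutedAt :: t
  } deriving (Eq, Show)

-- The old name survives as a type synonym, so code that imported
-- the concrete SchemaMigration type keeps compiling.
type SchemaMigration = SchemaMigration' LocalTime

-- getMigrations would then return whichever timestamp type the
-- caller's context infers, e.g. (sketched signature only):
--
-- getMigrations :: FromField t => Connection -> IO [SchemaMigration' t]
```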
However, if `getMigrations` is called in such a way that the timestamp type cannot be inferred, it is left arbitrary. This is a problem because we then need a `FromField` instance for an arbitrary type. My current approach is to have the `FieldParser` return `undefined`, which should be fine because the caller of `getMigrations` doesn't use the timestamp in this case, but somehow it is evaluated, leading to a crash (there's a test that covers this case).
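One plausible mechanism for that evaluation, shown as a self-contained illustration rather than the branch's actual code: a derived `Eq` instance compares every field, so even a caller who "never uses the timestamp" forces the placeholder the moment two records are compared.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Toy record standing in for the migration type: derived Eq
-- compares mName and then mStamp, field by field.
data M t = M { mName :: String, mStamp :: t } deriving Eq

main :: IO ()
main = do
  let a = M "m1" (undefined :: ())
      b = M "m1" (undefined :: ())
  -- (==) reaches the mStamp fields and forces the undefineds.
  r <- try (evaluate (a == b)) :: IO (Either SomeException Bool)
  putStrLn (either (const "crashed: the placeholder was forced") show r)
```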
I hope to continue in about 3 weeks from now (i.e. circa 2019-11-14).
On 2019-10-25 14:46 +0200, I wrote:

> I hope to continue in about 3 weeks from now (i.e. circa 2019-11-14).
The display of my laptop broke (I can still use it with an external monitor, but this means I can only use it at home). This also has low priority and I don't have much hope for a beautiful outcome, so we'll see...
Closing this because:
Thanks for bearing with me though :)
Fixes #9.