sidnt / proden

program design notes

why schema changes happen #17

Open sidnt opened 4 years ago

sidnt commented 4 years ago

a schema is just an addressing scheme for our data: given any piece of our data, we can tell which references are available on it, and invoke only those operations whose arguments are well defined and present in the data. (avoiding NPEs, etc...)

at different checkpoints in an app's evolution (ie, ~releases), we have

maybe, when we started out, we had to deliver experiences which were both

the actual domain will always be

so typically, apps have to start out simple, for those reasons.

sidnt commented 4 years ago

now, tomorrow, we have newer capacities: newer understanding, skill, sophistication, resources and bandwidth. and we want to author newer experiences. (while utilising the great power of creativity responsibly, and in sustainable ways.)

we are going to capture new information in our domain model, so we have to update the schema. we want our application to be aware of this modification to the app's state structure, so that we can define correspondingly modified domain logic, and author correspondingly newer experiences.

eg, in v1, Person didn't have age. now, in v2, we have added an age field on Person. slippery situations could arise here:
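to make the slipperiness concrete, a minimal sketch in TypeScript (the field names, the legacy value and the `hasAge` guard are illustrative, not from any real codebase):

```typescript
// v1 of the schema: Person has no age.
interface PersonV1 {
  name: string;
}

// v2 adds the age field.
interface PersonV2 {
  name: string;
  age: number;
}

// data written by a v1 app, now read by v2 code:
const legacy = JSON.parse('{"name":"ada"}');

// unchecked: the type says number, but at runtime unsafe.age is undefined.
const unsafe = legacy as PersonV2;

// checked: narrow before invoking v2-only operations.
function hasAge(p: PersonV1 | PersonV2): p is PersonV2 {
  return typeof (p as PersonV2).age === "number";
}

// a v2-only operation: well defined only when age is present.
function birthYear(p: PersonV2, now: number): number {
  return now - p.age;
}

const p: PersonV1 | PersonV2 = legacy;
const year = hasAge(p) ? birthYear(p, 2024) : null; // null, not a throw
```

the guard is exactly the "invoke only those operations whose arguments are present in the data" discipline from above, applied at a version boundary.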

given this partial information, if inconsistent operations are wired in, open to be invoked at runtime, then at best our application would throw, and worse still, it could run with unknown semantics, corrupting data, leaking handles, etc.

if semantics are well defined, ie,

we could have multiple versions of our app functional in the wild simultaneously. each would command only data corresponding to its own schema, and its UI would correspond to that as well. given this isolation, different versions of the app could still redesign their ux within the same schema. i could even downgrade, or just not make the switch, from a user-experience point of view. and migrations could be much more planned.
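one way to make this coexistence concrete: tag every record with its schema version, so each app version commands only its own data, and a planned migration becomes an explicit function. a sketch (the `schema` tag name and the v1-to-v2 default are assumptions):

```typescript
// each record carries its schema version as a tag, so several app
// versions can coexist: each commands only data matching its own schema.
type PersonRecord =
  | { schema: "v1"; name: string }
  | { schema: "v2"; name: string; age: number };

// a planned migration is then just an explicit, total function.
function migrate(r: PersonRecord): { schema: "v2"; name: string; age: number } {
  if (r.schema === "v2") return r;
  // hypothetical default for the new field; a real migration would decide this
  return { schema: "v2", name: r.name, age: 0 };
}
```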

sidnt commented 4 years ago

we want new schema versions to be able to run alongside the old schema, with well-defined semantics, and with data well addressable (ideally, automatically) across shards.

some structure might be shared between the two versions, even though the data objects live in different shards, pivoted on the corresponding schema version. eg,

by automatic addressing, we mean that the schema is part of the addressing scheme: the application will automatically derive which shard to reach into, depending on the schema version in use.
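a sketch of what "schema as part of the address" could look like (the shard names and the `address` shape are hypothetical):

```typescript
// schema-aware addressing: the shard is derived from the version tag,
// so callers never name a shard directly.
type SchemaVersion = "v1" | "v2";

const shardFor: Record<SchemaVersion, string> = {
  v1: "shard-v1",
  v2: "shard-v2",
};

interface Addressed {
  schema: SchemaVersion;
  key: string;
}

// the full address is (shard, key): the schema version picks the shard.
function address(r: Addressed): string {
  return `${shardFor[r.schema]}/${r.key}`;
}
```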

eg, on similar lines, in lmdb, could we dedicate one dbi to one schema version, and parameterise lmdb txns with this information, such that they automatically index into the correct dbi?

if we can articulate this structure shared between shards, then we can define typing relationships between shards, eg a Shard[V] type.¹

ref: liskov substitution principle

V >: U and U <: V are the same statement, written from either side.


¹ heck, we can even define a DistributedShard[V, L], where V represents the app's schema and L represents the latency, given the shard is distributed across memory, disk, and network, so that we can author semantics contextual to data distance.
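the footnote's types could be sketched like this (carried into TypeScript from the Scala-ish notation; here `V` stands for the record type of one schema version, and the in-memory instance is just enough to see the shape):

```typescript
// L is the latency class of the medium the shard lives on.
type Latency = "memory" | "disk" | "network";

interface Shard<V> {
  get(key: string): V | undefined;
  put(key: string, value: V): void;
}

interface DistributedShard<V, L extends Latency> extends Shard<V> {
  readonly latency: L;
}

// a minimal in-memory instance of the type.
function memShard<V>(): DistributedShard<V, "memory"> {
  const m = new Map<string, V>();
  return {
    latency: "memory",
    get: (k) => m.get(k),
    put: (k, v) => {
      m.set(k, v);
    },
  };
}
```

with `latency` visible in the type, semantics contextual to data distance (timeouts, batching, caching) can be selected statically rather than discovered at runtime.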

sidnt commented 4 years ago

so now, can we arrive at a variance for the type V? which one is it?
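one way to explore the question: the variance of V depends on how the shard is used. reads make it covariant, writes make it contravariant, and a read-write shard ends up invariant. a sketch (assuming TypeScript's `in`/`out` variance annotations; the age default is made up):

```typescript
interface PersonV1 { name: string }
interface PersonV2 { name: string; age: number }

// read-only: covariant in V. a shard of v2 records can serve v1 readers,
// because every v2 record has all the v1 fields.
interface ReadShard<out V> {
  get(key: string): V | undefined;
}

// write-only: contravariant in V. a sink that accepts any v1 record
// also accepts any v2 record.
interface WriteShard<in V> {
  put(key: string, value: V): void;
}

const v2Store = new Map<string, PersonV2>();

const v2Reads: ReadShard<PersonV2> = { get: (k) => v2Store.get(k) };
const v1Reads: ReadShard<PersonV1> = v2Reads; // compiles: covariance

const v1Writes: WriteShard<PersonV1> = {
  put(key, value) {
    // hypothetical default for age when only v1 fields are guaranteed
    v2Store.set(key, { age: 0, ...value });
  },
};
const v2Writes: WriteShard<PersonV2> = v1Writes; // compiles: contravariance
```

so "which one is it?" seems to resolve to: it depends on the capability the shard exposes, not on the shard as a whole.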


aside: the real paradigm shift here is when we come to understand that we can create abstractions and apply meaning to them. iow, we can design novel semantics at will.

sidnt commented 4 years ago

Now this model needs to scope well. To a single host, first. Putting distributed computation into the mix will need more elucidation.