Microfiche is an experimental rewrite of Microcosm's internal snapshot utilities. It's trying to solve a few problems:
1. Fine-grained data subscriptions. Presenter models recalculate too much because they are too vague. I only want to recalculate them when their underlying state actually changes.
2. It would be cool to track how values change over time. What action was responsible for changing an author's name? How did that value change over the last 30 seconds?
3. Being able to tell exactly what changed. This is essential for item 1 (subscriptions).
4. It could eventually replace the existing method of storing internal state as snapshots: basically a bunch of iterations of a Microcosm's state as actions are pushed into it.
## How?
Microfiche stores state as key paths in objects. I'm calling these key paths facts:
```javascript
var changeset = {
  "author/1/name": "Billy Booster",
  "author/1/email": "billy@booster.com"
}
```
A changeset is a group of facts. They use prototypal inheritance to share data with ancestors, so each changeset does not need to clone all prior state (which is a problem with our current snapshot system).
If we were to update Billy's name, that would look like:
```javascript
var changeset = {
  "author/1/name": "Billy Booster",
  "author/1/email": "billy@booster.com"
}

var next = Object.create(changeset)
next['author/1/name'] = 'Billy Idol'
```
We can "undo" a specific change by simply using `delete`.
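Continuing the example above, deleting the overriding key removes only the own property, so lookups fall through to the ancestor changeset again:

```javascript
var changeset = {
  "author/1/name": "Billy Booster",
  "author/1/email": "billy@booster.com"
}

var next = Object.create(changeset)
next['author/1/name'] = 'Billy Idol'

// Removing the own property "undoes" the change; the ancestor's
// value shows through the prototype chain again
delete next['author/1/name']

console.log(next['author/1/name']) // → 'Billy Booster'
```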
And to "collapse" a changeset chain for efficiency purposes, we just shallow clone the most current changeset:
```javascript
var compressed = {}

for (var key in changeset) {
  compressed[key] = changeset[key]
}
```
☝️ Now `compressed` has all facts from the entire prototype chain without any of the prototypes. Technically, long prototype chains get slow, so this helps with that. Still, it gets slow at around 15,000 changesets, which is approximately 14,500 more points of history than we've ever used on a project.
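To see the tradeoff for yourself, you can build a deep chain and compare lookups against its collapsed clone. This is just a sketch; the exact numbers will vary by engine:

```javascript
// Build a chain of 15,000 changesets, each layering in one new fact
var changeset = { 'author/1/name': 'Billy Booster' }

for (var i = 0; i < 15000; i++) {
  var next = Object.create(changeset)
  next['fact/' + i] = i
  changeset = next
}

// Collapse the whole chain into a single flat object
var compressed = {}
for (var key in changeset) {
  compressed[key] = changeset[key]
}

// Reading a fact defined at the bottom of the chain has to walk
// all 15,000 prototypes; the compressed copy is a single lookup
console.time('chained')
for (var j = 0; j < 10000; j++) changeset['author/1/name']
console.timeEnd('chained')

console.time('compressed')
for (var k = 0; k < 10000; k++) compressed['author/1/name']
console.timeEnd('compressed')
```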
Aside: This is pretty geeky, but I love leaning on the language like this. Very rarely do we get to say that about JS 😸.
Changesets are managed by a database. Databases are responsible for writing new changesets and pulling information out. I'm still not happy with enumerating over records, but my current thoughts are to provide a key path and spit out a value every time it matches a key. This is fuzzy, so you could also just enumerate over every author. There is also a utility for efficiently getting information for a specific record.
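None of this API is settled — the names and signatures below are hypothetical — but a database wrapping a changeset might look something like:

```javascript
// Hypothetical database API: enumerate facts by key-path prefix,
// and pull a set of attributes for a specific record
var DB = {
  facts: {
    'author/1/name': 'Billy Booster',
    'author/1/email': 'billy@booster.com',
    'author/2/name': 'Billy Idol'
  },

  // Spit out every fact whose key path starts with the given prefix
  each: function (prefix, callback) {
    for (var key in this.facts) {
      if (key.indexOf(prefix) === 0) {
        callback(key, this.facts[key])
      }
    }
  },

  // Efficiently get information for a specific record
  get: function (type, id, attributes) {
    var record = {}
    for (var i = 0; i < attributes.length; i++) {
      var attr = attributes[i]
      record[attr] = this.facts[type + '/' + id + '/' + attr]
    }
    return record
  }
}

DB.each('author/1', function (path, value) {
  console.log(path, value) // both of author 1's facts
})

DB.get('author', '1', ['name', 'email'])
// → { name: 'Billy Booster', email: 'billy@booster.com' }
```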
This is particularly useful with GraphQL, and sort of the whole point of all of this. Assuming we have a query like this:
```graphql
{
  author(id: 1) {
    name
    email
  }
}
```
We can do a few things:
1) Convert the query into `DB.get('author', '1', ['name', 'email'])`
2) Create a subscription to the following facts:
   - `author/1`
   - `author/1/name`
   - `author/1/email`
That provides us extremely fine-grained data subscriptions (which need to be built out), allowing us to use a *push* model of sending updates to Presenters, instead of having them completely recalculate their models every time any possible change is made. This is tremendously more efficient.
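As a sketch of what that push model could look like (everything here is hypothetical — `factPaths`, `subscribe`, and `publish` are made-up names): derive the fact paths from the query's type, id, and fields, then notify only the subscribers whose paths appear as own keys on a newly written changeset.

```javascript
// Derive subscription paths from a parsed GraphQL selection
function factPaths (type, id, fields) {
  var paths = [type + '/' + id]
  for (var i = 0; i < fields.length; i++) {
    paths.push(type + '/' + id + '/' + fields[i])
  }
  return paths
}

var subscriptions = {} // path → array of callbacks

function subscribe (paths, callback) {
  paths.forEach(function (path) {
    (subscriptions[path] = subscriptions[path] || []).push(callback)
  })
}

// A new changeset's *own* keys are exactly what changed, so we can
// push updates instead of recalculating every Presenter model
function publish (changeset) {
  Object.keys(changeset).forEach(function (path) {
    (subscriptions[path] || []).forEach(function (callback) {
      callback(path, changeset[path])
    })
  })
}

// Usage: the query { author(id: 1) { name email } } becomes
subscribe(factPaths('author', '1', ['name', 'email']), function (path, value) {
  console.log(path, 'changed to', value)
})

var next = Object.create({ 'author/1/name': 'Billy Booster' })
next['author/1/name'] = 'Billy Idol'
publish(next) // → author/1/name changed to Billy Idol
```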
## Inspiration and follow up work

This is heavily influenced by Datomic's EAVT index (Entity, Attribute, Value, Transaction). We don't have the T (transaction). I am curious if we could use actions as the identifier for this. In that case, I don't know that we need to use prototypal inheritance. Something to figure out later.
Datomic also stores facts with multiple indexes. If we did this, we'd be able to enumerate over different types of data, like:
```javascript
// AEVT (attribute, entity, value, transaction)
DB.each('name/author', function (path, value) {
  // every single author name
})
```