atomist-attic / rugs

DEPRECATED Surface area of Rug, including utility classes
Apache License 2.0

Improve the developer experience of using cortex #4

Open Lawouach opened 7 years ago

Lawouach commented 7 years ago

With the recent mapping in TS of the model entities and relationships, we can now write tests nicely, and that's a great win.

However, much like path expressions are a direct translation of the model behind them, the TS interfaces are very much coupled to cortex too. For instance, to send a DM when a Kube pod fails, this is what we need to write in the handler:

const pod: Pod = event.root()
const chatId = pod.uses().isTagged().onCommit().author().of().hasChatIdentity().id()

which unfurls to:

const pod: Pod = event.root()
const container = pod.uses()
const tag = container.isTagged()
const commit = tag.onCommit()
const author = commit.author()
const person = author.of()
const chatIdentity = person.hasChatIdentity()
const chatId = chatIdentity.id()

Although this is more readable, it's not really intuitive or transparent.

I think we would benefit greatly if we created a TS interface that makes sense for the user and transparently maps those calls to what the model requires. For instance:

pod.project().channel().id()

This makes much more sense and abstracts away the model. It can be TS-only, building on top of the cortex interfaces.
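A minimal sketch of such a facade, assuming the generated cortex types from the example above; the PodView, Project, and Channel names are hypothetical, not part of any published API:

// Hypothetical facade over the generated cortex types. The traversal it
// hides is the uses/isTagged/onCommit/... chain from the example above;
// the facade names themselves are illustrative assumptions.
class Channel {
    constructor(private chatIdentity: ChatIdentity) {}
    id(): string { return this.chatIdentity.id() }
}

class Project {
    constructor(private commit: Commit) {}
    // Map the user-facing notion of "the project's channel" onto the
    // model-specific walk
    channel(): Channel {
        return new Channel(this.commit.author().of().hasChatIdentity())
    }
}

class PodView {
    constructor(private pod: Pod) {}
    project(): Project {
        return new Project(this.pod.uses().isTagged().onCommit())
    }
}

// In a handler, the model stays hidden behind the facade:
// const chatId = new PodView(event.root()).project().channel().id()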

jessitron commented 7 years ago

Hmmm. Backwards compatibility with model changes: this is a good point. We need to know what we're committing ourselves to.

These rugs depend on a particular version of the npm module, transpiled from a particular version of our model (which is not carefully specified as a range in the manifest the way the rug version is).

The path expression queries run in production. They get turned into Cypher.
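To make the coupling concrete, a purely illustrative sketch; neither the path expression syntax nor the Cypher is actual Atomist output, and the labels and relationship names are assumptions:

// Purely illustrative: a path expression over the published model and
// roughly the kind of Cypher it could compile to. Every label and
// relationship name below is also a name in the database schema.
const pathExpression =
    "/Pod/uses::Container/isTagged::Tag/onCommit::Commit"

const roughCypher = `
    MATCH (pod:Pod)-[:USES]->(c:Container)
          -[:IS_TAGGED]->(t:Tag)-[:ON_COMMIT]->(commit:Commit)
    RETURN commit
`

Any rename anywhere in that schema breaks every rug whose query mentions the old name.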

We're publishing our database schema. We are coupling the structure of our data in our database to code we don't even control. Microservices are important because they encapsulate data behind a bounded context; in microservices we don't reach into other teams' data, we don't create that coupling, we make an API instead.

We are at the opposite extreme. Not only is our database not encapsulated behind one service, its schema is exposed externally, and we're creating dependencies on its precise structure in our customers' services.

What is it going to look like to make these rugs continue to execute when anything changes in our database schema? Anything anywhere in our database schema, since we transpile and publish the entire thing.

johnsonr commented 7 years ago

It would be interesting to re-evaluate this issue with the present state of Cortex and property access.

Regarding the issue of exposing our DB schema: path expressions and query by example successfully abstract us from Cypher and Neo. A graph model has a low impedance mismatch with an OO language, compared to an RDBMS etc. However, the question does arise of what happens as the model changes.

Given that the entities are pure data, we could transform them easily enough in TypeScript. If the transformation were two-way, query by example could still work.
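A sketch of what a two-way, TS-level transformation might look like, assuming a hypothetical rename between two model versions (all shapes and field names here are invented for illustration):

// Hypothetical shapes: the entity as existing rugs see it (V1) and as
// a newer model would publish it (V2).
interface PersonV1 { chatId: string }
interface PersonV2 { chatIdentity: { id: string } }

// Forward: present new-model data to rugs in the old shape
function fromModel(p: PersonV2): PersonV1 {
    return { chatId: p.chatIdentity.id }
}

// Backward: rewrite a query-by-example template written against the
// old shape so it can still be matched against the new model
function toModel(example: Partial<PersonV1>): Partial<PersonV2> {
    return example.chatId !== undefined
        ? { chatIdentity: { id: example.chatId } }
        : {}
}

With both directions available, query by example keeps working: templates go through toModel on the way in, results through fromModel on the way out.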

Alternatively, the transformation could occur in the Cypher generation layer, which could adapt the inputs and use projection returns.
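Sketched at the Cypher generation layer instead (again illustrative only; the rename table and the query shape are assumptions, not the real generator):

// Hypothetical: the generator maps old property names to current ones
// and aliases them back in a projection return, so callers never see
// the rename.
function generateCypher(requestedProperty: string): string {
    const renames: Record<string, string> = { chatId: "chatIdentityId" }
    const actual = renames[requestedProperty] ?? requestedProperty
    return `MATCH (p:Person) RETURN p.${actual} AS ${requestedProperty}`
}

// generateCypher("chatId")
// => MATCH (p:Person) RETURN p.chatIdentityId AS chatId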

Lawouach commented 7 years ago

There is a rather long-term question here indeed. I think the work poured into bridging both worlds over the last couple of weeks is fantastic. The Rug development side is getting there in terms of developer experience; it's much nicer now. That's a major win, I believe.

I think we have the right basic bricks for Atomist or the community to come up with higher-level TS interfaces if the need arises.

Now, should the neo model be specified, versioned, and documented so that users can trust it? But then, as a developer of Rugs, I would have to worry about two interfaces without knowing which one matters to me.

However, as our TS bricks are a mapping of the neo model, we currently have a deep coupling between the two. Should we promote the TS interface we have now in cortex as THE API so that, no matter how the model evolves, Rug developers only care about the TS interface? With Atomist making the promise that the cortex module will hide away changes so as not to break their Rugs?

But what if the model changes in a way that means events no longer actually match? Even if the cortex module abstracts away that change and users' Rug tests still pass, can we give them confidence those handlers will still work?

In other words, for event handlers at least, how do we let the model and Rugs evolve nicely together? (We can of course automate some of the changes and submit appropriate PRs, but that goes beyond this mechanic.)

This also relates to the recent discussion around "my Rug tests are passing but event handlers don't seem to trigger". Improving logging and the discovery of what needs to change to accommodate the model would already go a long way.