I worked on a POC for a JSON:API endpoint. I am using the Waterline ORM for defining models and waterline-jsonapi to convert model data to a json:api response. This works incredibly well: decorating the event model with its build relationships and includes (and vice versa) is just a few lines of code.
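For illustration, here's a minimal sketch of what the event/build models might look like in Waterline. The identities, attributes, and connection name are assumptions for the example, not the final shapes:

```js
const Waterline = require('waterline');

// Hypothetical event model; attribute names are illustrative
const Event = Waterline.Collection.extend({
    identity: 'event',
    connection: 'default',
    attributes: {
        causeMessage: 'string',
        createTime: 'datetime',
        // one-to-many association; waterline-jsonapi walks this to
        // build the json:api relationships/includes for us
        builds: {
            collection: 'build',
            via: 'event'
        }
    }
});

// The "belongs to" side of the association
const Build = Waterline.Collection.extend({
    identity: 'build',
    connection: 'default',
    attributes: {
        status: 'string',
        event: {
            model: 'event'
        }
    }
});
```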
The Waterline ORM has a lot of nice facilities out of the box: a full-featured query language, pagination support, support for multiple simultaneous datastores, built-in data validation, and model associations. It can also be configured to create and alter database tables, FWIW, so our schema updates could potentially be applied automatically. Further testing is necessary to ensure we don't encounter data loss from those operations.
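A sketch of the query language and pagination support, assuming the models above (`pipelineId` is an assumed attribute here; setting `migrate: 'alter'` on a model is what would auto-apply table changes, and is where the data-loss concern comes in):

```js
// Paginated, sorted query with the builds association populated
Event.find({
    where: { pipelineId: 123 },
    limit: 10,
    skip: 20,
    sort: 'createTime DESC'
})
    .populate('builds')
    .exec((err, events) => {
        if (err) {
            throw err;
        }
        // each event now carries its builds, ready for serialization
    });
```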
The waterline-jsonapi component works hand-in-hand with the Waterline model definitions and query results to automatically generate json:api 1.0 spec output.
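For reference, a json:api 1.0 document for an event with its builds included looks roughly like this; the field names are illustrative, while the `data`/`relationships`/`included` structure is defined by the spec:

```json
{
  "data": {
    "type": "events",
    "id": "123",
    "attributes": { "causeMessage": "merged PR" },
    "relationships": {
      "builds": { "data": [{ "type": "builds", "id": "456" }] }
    }
  },
  "included": [
    { "type": "builds", "id": "456", "attributes": { "status": "SUCCESS" } }
  ]
}
```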
I also tried to build out the same POC with jsonapi-server and jsonapi-store-postgres. I found that its API models were a little too prescriptive to be compatible with our existing data set. Additionally, it spins up its own express-based server, meaning we would have to start from square one where all the rest of our plugins are concerned, e.g. authentication/authorization. I didn't dive deeply enough into the documentation to see how that would be configured or how much community middleware is supported.
Waterline does appear to provide a similar set of abstractions to what we have today: Waterline connections are equivalent to our datastores, and there is a notion of models and schema. Unfortunately, Waterline's schema/model configuration is drastically different from ours, though it is perhaps simpler than Joi. If we decide to move forward with this, we would need to make drastic changes, or write completely new modules, to support the Waterline way of implementing our datasets.
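To make the difference concrete, here's a rough side-by-side. Both snippets are simplified sketches, not our actual schema definitions:

```js
const Joi = require('joi');

// Roughly how we define a field today with Joi
const eventSchema = Joi.object().keys({
    id: Joi.number().integer().positive(),
    causeMessage: Joi.string().max(512)
});

// The equivalent shape in Waterline's attribute syntax
const eventAttributes = {
    id: {
        type: 'integer',
        primaryKey: true,
        autoIncrement: true
    },
    causeMessage: {
        type: 'string',
        maxLength: 512
    }
};
```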
The jsonapi-serializer module is something of an ancestor of the json:api implementations in Node.js. It is a low-level module whose only responsibility is transforming to/from json:api. The benefit of this approach is that we could wrap the serializer to handle our existing sequelize models. The drawback, compared with Waterline, is that model relationships cannot be handled automatically, so populating relationships is more work, as is defining those relationships for the serializer to operate on.
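A sketch of what wrapping the serializer might look like; note that the relationship has to be spelled out by hand, and the attribute names here are assumptions:

```js
const JSONAPISerializer = require('jsonapi-serializer').Serializer;

// Every relationship must be declared explicitly, unlike Waterline,
// where associations come from the model definition itself
const eventSerializer = new JSONAPISerializer('events', {
    attributes: ['causeMessage', 'createTime', 'builds'],
    builds: {
        ref: 'id',
        attributes: ['status', 'startTime']
    }
});

// `events` would be our already-populated model results; each event
// must have its builds attached for the serializer to include them
const payload = eventSerializer.serialize(events);
```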
I've pushed up the POC I've been playing with here: https://github.com/screwdriver-cd/screwdriver/compare/jsonapi
The Screwdriver UI uses ember-data's REST interface to talk to most of our API endpoints. Due to early decisions made when building the API, the UI has had to make many concessions to cope with our API not conforming to ember-data's expectations. Some of these concessions make dealing with the API quite cumbersome.
One of the largest examples is that ember-data expects a REST response payload to have a key that defines the resource type. For example:

```
{ events: [{...}, {...}] }
```

This tells ember-data to use the event model's adapter/serializer for handling elements of the payload. We currently do not provide any keys that relate the payload to the resource or models it contains; our payload is just:

```
[{...}, {...}]
```

Since the models themselves do not describe what they are either, we end up handling all transactions with the global application adapter/serializer and guessing from the request URL what the payload will contain.

Secondly, this prevents us from bundling multiple resources in one payload. Using keys in the payload would allow consumers of the API to process payloads like:

```
{ events: [{...}, {...}], builds: [{...}, {...}] }
```

so they could fetch both the parent and child models of a relationship in one call and process them appropriately.

These issues compound as the UX requirements become richer, and the event model is a significant example. For every event the UI retrieves, it must make one or more subsequent calls to fetch all the build data associated with that event. It uses this data to calculate the overall status of the event, whether or not the event is done running, and how to display build status in a graph. The asynchronous nature of making additional calls for this data makes that calculation, and the use of the calculated data, difficult. There are very few cases in which the UI would retrieve an event and not need at least some of the data calculated from the status of its builds. We can take this further when looking at pipelines as a whole, where we might calculate the health of the pipeline based on the status of its most recent event.
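Worth noting: json:api addresses this directly. A compound document carries related resources in `included`, and the spec's `include` query parameter lets the UI ask for an event and its builds in a single request, e.g.:

```
GET /events/123?include=builds
```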
We need support for pagination. A request to get events for a pipeline currently retrieves all events, which means we must then potentially retrieve all builds for that pipeline. The current payload does not provide any affordance to supply pagination metadata outside of response headers. The UI has no need to display or fetch information about every event in a pipeline at one time. It does, however, need to know about the last, last completed, and last successful event in the pipeline.
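json:api reserves the `page` family of query parameters and lets us carry pagination metadata in the payload itself via `links` and `meta`. One common strategy (the spec is agnostic about the offset/limit naming; this is just a sketch) would look like:

```
GET /pipelines/123/events?page[offset]=0&page[limit]=10
```

```json
{
  "data": [{ "type": "events", "id": "210", "attributes": {} }],
  "meta": { "total": 57 },
  "links": {
    "next": "/pipelines/123/events?page[offset]=10&page[limit]=10"
  }
}
```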
Absolutely Must Do:
Possible Resolutions:
Considerations: