lassik opened this issue 4 years ago
Well, something could be built to do it automatically. Maybe not for every commit, but something.
You can also see the webhook that Docker Hub uses behind the scenes here: https://github.com/scheme-containers/loko/settings/hooks
The Scheme API backend (set to appear at https://github.com/schemedoc/borg "real soon now") will have a generic cron-like scheduler that watches for changes to arbitrary URLs and propagates the latest version of each such URL into a processor graph. Each processor is an arbitrary side-effect-free Scheme procedure that gets input from another graph node and passes its output on to any node(s) that are listening to it.
It would be ideal if the API backend could listen to all implementations' repos, and anyone who wants to react to implementation commits can simply listen to one endpoint in the API for broadcasts of all implementations (or only a chosen subset).
I'm not sure how good GraphQL is at subscriptions, but not all of the API has to be GraphQL.
The current idea is that the Scheme API would be backed by a simple key-value store. Each graph node can listen to particular keys and be triggered whenever one or more of those keys gets a new value. In turn, it specifies one or more output keys for which it produces new values. As far as I can tell, this simple system can do everything needed, but I'm not 100% sure.
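The trigger mechanism described above can be sketched in a few lines. This is only an illustration in Python (the real backend would be Scheme), and every name here (`Graph`, `add_processor`, `put`, the example keys) is hypothetical:

```python
# Hypothetical sketch: a key-value store where processor nodes listen to
# input keys and produce values for output keys whenever an input changes.

class Graph:
    def __init__(self):
        self.store = {}       # key -> latest value
        self.listeners = {}   # input key -> list of processor nodes

    def add_processor(self, inputs, outputs, proc):
        """proc is a side-effect-free function: dict of input values ->
        dict keyed by the declared output keys."""
        node = (tuple(inputs), tuple(outputs), proc)
        for key in inputs:
            self.listeners.setdefault(key, []).append(node)

    def put(self, key, value):
        """Store a new value and fire every processor listening to the key.
        NB: a cyclic graph would recurse forever; a real system would
        need cycle detection or a work queue."""
        self.store[key] = value
        for inputs, outputs, proc in self.listeners.get(key, []):
            if all(k in self.store for k in inputs):
                results = proc({k: self.store[k] for k in inputs})
                for out_key in outputs:
                    self.put(out_key, results[out_key])

# Example: a processor that derives one key from another.
g = Graph()
g.add_processor(["repo/loko"], ["latest/loko"],
                lambda ins: {"latest/loko": ins["repo/loko"].upper()})
g.put("repo/loko", "commit abc123")
print(g.store["latest/loko"])  # -> "COMMIT ABC123"
```

The point of the sketch is that "listen to keys, produce keys" is all the wiring a node needs; the graph topology falls out of the key names.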
A URL listener would be a special graph node that gets its input from a URL (on daily, weekly, etc. basis) instead of getting it from another key in the store.
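A URL listener node might look like the following sketch, again in Python with hypothetical names. The fetch function is injected so the cron scheduler owns the polling cadence; the listener only pushes into the store when the content actually changed:

```python
import hashlib

def make_url_listener(fetch, key, put):
    """Hypothetical URL-listener node. fetch() returns the URL body as
    bytes; put(key, value) feeds the graph. The scheduler calls the
    returned poll() on whatever cadence (daily, weekly, ...) it likes;
    a content digest suppresses no-op updates."""
    last_digest = None
    def poll():
        nonlocal last_digest
        body = fetch()
        digest = hashlib.sha256(body).hexdigest()
        if digest != last_digest:
            last_digest = digest
            put(key, body)
    return poll
```

Hashing the body rather than trusting HTTP caching headers keeps the node independent of how well each upstream server implements `ETag`/`Last-Modified`.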
The store could just be a directory of files (where the key is the filename), or a SQLite table etc. Shouldn't matter all that much since we're unlikely to have more than 100 MiB of data to start with.
One could layer a Functional Reactive Programming framework on top of the graph, with map and filter and such, but that's probably overengineering.
(Subtask of #1; discussion continued from https://github.com/scheme-containers/loko/issues/2)