bbcarchdev / twine

An RDF workflow engine
https://bbcarchdev.github.io/twine/
Apache License 2.0

Replace pre-processor → PUT → post-processor with configurable workflow #23

Closed nevali closed 8 years ago

nevali commented 9 years ago

Currently a graph update triggers a fixed workflow of:

1. pre-processors
2. PUT (the new graph is written to the store)
3. post-processors

Both the pre- and post-processors use the same function signature: an int-returning callback accepting a twine_graph and a module-defined void * data pointer.
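For reference, the shared callback shape looks roughly like the sketch below. The registration function names (twine_preproc_register, twine_postproc_register) and the member layout of twine_graph are assumptions for illustration and should be checked against libtwine.h.

```c
#include <libtwine.h>

/* Both pre- and post-processors share this shape: an int-returning
 * callback receiving the graph being updated and a module-defined
 * opaque data pointer.
 */
static int example_process(twine_graph *graph, void *data)
{
	(void) data;
	/* inspect or modify the graph's models here */
	return 0; /* zero indicates success */
}

/* Hypothetical legacy registration at plug-in load time; the exact
 * function names and signatures here are assumptions, not the
 * confirmed API. */
int twine_plugin_init(void)
{
	twine_preproc_register("example", example_process, NULL);
	twine_postproc_register("example", example_process, NULL);
	return 0;
}
```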

To allow for more flexible workflows, this could be replaced by a configurable sequence of processors to invoke in a defined order. For compatibility:

All registered processors can be tracked in a flat list in a common namespace, with an automatic pre: or post: prefix applied to modules registered via the legacy pre- and post-processor registration APIs.

Modules would be expected to migrate away from registering pre- and post-processors specifically once deployed applications no longer make use of the preprocess and postprocess special-case workflow entries, with the point at which a processor is invoked dictated exclusively by the configured workflow. In the interim, a module would be free to register itself using both the legacy registration APIs and the new unified processor registration API: for example, a pre-processor named example would be registered as both pre:example and example, with the former invoked via the preprocess special entry and the latter available for direct invocation by name.
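As a sketch, a configured workflow might be expressed as something like `workflow=preprocess, example, sparql-put, postprocess` in twine.conf, with preprocess/postprocess running the legacy pre:/post: processors; the key name, the special entry names, and the twine_graph_register() function below are all illustrative assumptions rather than a settled API.

```c
/* Interim dual registration, reusing the example_process callback
 * sketched above: the same callback is exposed as "pre:example" via
 * the legacy API (run by the "preprocess" workflow entry) and as
 * plain "example" via the new unified API, so it can also be invoked
 * directly by name from a configured workflow. */
int twine_plugin_init(void)
{
	twine_preproc_register("example", example_process, NULL); /* becomes pre:example */
	twine_graph_register("example", example_process, NULL);   /* plain "example"    */
	return 0;
}
```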

nevali commented 8 years ago

There will need to be a sparql-get module defined for populating the old model within the graph object.
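A rough sketch of such a module, assuming a hypothetical twine_fetch_stored_model() helper standing in for whatever SPARQL CONSTRUCT/fetch call is eventually provided, and assuming twine_graph exposes uri and old members:

```c
/* Hypothetical "sparql-get" processor: fetch the currently-stored
 * triples for the graph being updated into the graph object's "old"
 * model, so processors that compare previous and new state keep
 * working once the fixed workflow is gone. The helper name and the
 * twine_graph member names are assumptions for illustration only. */
static int sparql_get_process(twine_graph *graph, void *data)
{
	(void) data;
	if(graph->old)
	{
		return 0; /* already populated by an earlier step */
	}
	graph->old = twine_fetch_stored_model(graph->uri);
	return graph->old ? 0 : -1;
}
```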