This PR finishes onboarding the ML inference processor as an available processor to add and configure, and produces a usable final workflow template. Specifically, it:

- adds a clickable "Processors" context menu for selecting available processor types when adding a processor in the UI. Currently only the ML inference processor type is supported.
- finishes template support for creating a single ingest pipeline step from a dynamic list of processors of all types (previously stubbed to be a single ML processor).
- cleans up the `configs/` interfaces and implements `toObj()` to easily convert processors into valid fields within `WorkflowConfig` without extra code.

Demo showing the new button and the dynamic addition of multiple processors:

screen-capture (41).webm
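The `toObj()` pattern described above can be sketched as follows. This is a minimal illustration, not the plugin's actual code: the class names, the second processor type, and the field shapes are hypothetical; the idea is that each processor config serializes itself, so the ingest pipeline step can be assembled from a dynamic list of processors without per-type special casing.

```typescript
// Hypothetical sketch: each processor config knows how to serialize itself
// into a valid entry for an ingest pipeline's `processors` array.
interface ProcessorConfig {
  toObj(): Record<string, any>;
}

// Illustrative ML inference processor; field names are assumptions.
class MLInferenceProcessor implements ProcessorConfig {
  constructor(private modelId: string) {}
  toObj(): Record<string, any> {
    return { ml_inference: { model_id: this.modelId } };
  }
}

// A second illustrative processor type, to show the list is heterogeneous.
class TextChunkingProcessor implements ProcessorConfig {
  constructor(private field: string) {}
  toObj(): Record<string, any> {
    return { text_chunking: { field_map: { [this.field]: `${this.field}_chunks` } } };
  }
}

// Build a single ingest pipeline step from a dynamic list of processors,
// with no extra per-type conversion code.
function buildIngestPipeline(processors: ProcessorConfig[]) {
  return {
    description: 'Generated ingest pipeline',
    processors: processors.map((p) => p.toObj()),
  };
}

const pipeline = buildIngestPipeline([
  new MLInferenceProcessor('my-model-id'),
  new TextChunkingProcessor('body'),
]);
// pipeline.processors now holds one entry per configured processor.
```

Because every processor type implements the same one-method interface, adding a new processor type to the context menu only requires a new config class; the pipeline-building code stays unchanged.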
- [x] Commits are signed per the DCO using `--signoff`
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
For more information on following Developer Certificate of Origin and signing off your commits, please check here.
Issues Resolved
Makes progress on #23