We want interruptions.
- `CmdBase` should handle receiving interruptions, and sending to `interrupt_tx`.
- `interrupt_rx` needs to be passed to `*Cmd`s that iterate over items, so that the iterator can be interrupted.
- `CmdBase` should probably instantiate the `(interrupt_tx, interrupt_rx)` channel (see the sketch below).
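A minimal sketch of that wiring, assuming a `tokio::sync::mpsc` channel; `InterruptSignal` and `CmdBase::exec_items` are hypothetical names for illustration, not the framework's API:

```rust
use tokio::sync::mpsc::{self, Receiver, Sender};

/// Hypothetical signal type; the real type may carry more context.
struct InterruptSignal;

struct CmdBase {
    /// Handed out to whatever triggers interruptions (endpoint, signal handler).
    interrupt_tx: Sender<InterruptSignal>,
    /// Passed to the item-iterating `*Cmd` so its loop can stop early.
    interrupt_rx: Receiver<InterruptSignal>,
}

impl CmdBase {
    /// `CmdBase` instantiates the `(interrupt_tx, interrupt_rx)` channel itself.
    fn new() -> Self {
        let (interrupt_tx, interrupt_rx) = mpsc::channel(16);
        Self { interrupt_tx, interrupt_rx }
    }

    /// Iterates over items, checking for an interruption between items.
    async fn exec_items(&mut self, items: Vec<String>) {
        for item in items {
            if self.interrupt_rx.try_recv().is_ok() {
                break; // interrupted: stop before starting the next item
            }
            // ... execute `item` here ...
            let _ = item;
        }
    }
}
```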
Should `OutputWrite` be the trait to interact with the outside world?

- If receiving interruptions is in the same trait, then `CmdBase` receives interruptions through `OutputWrite`, meaning the `OutputWrite` implementation needs to handle that.
- If receiving interruptions is in a different trait, then it can be kept separate from the `OutputWrite` implementation.

How do the output endpoints we know also receive input?
| Endpoint | Input | Output |
|---|---|---|
| CI | process signals only | stdout (append), logs |
| CLI | stdin | stdout (interactive / append) |
| Web API | web request | HTTP response |
| WASM | function call | function call |
How would it look in code:
framework `CmdBase`:

- `ProgressRender` holds `output` to submit progress.
- `fn_exec` must race with `interrupt_rx.recv()` (see the sketch below). This means the developer needs to pass in something that produces `(interrupt_tx, interrupt_rx)`, and `CmdBase` calls that generator function and passes `interrupt_rx` to the `*Cmd`.
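A sketch of that race using `tokio::select!`; `fn_exec`, `InterruptSignal`, and `CmdOutcome` are placeholders, not the framework's actual types:

```rust
use std::future::Future;

use tokio::sync::mpsc::Receiver;

struct InterruptSignal;

enum CmdOutcome<T> {
    Completed(T),
    Interrupted,
}

/// Races the command's work (`fn_exec`) against the interruption channel.
async fn exec_or_interrupt<T>(
    fn_exec: impl Future<Output = T>,
    interrupt_rx: &mut Receiver<InterruptSignal>,
) -> CmdOutcome<T> {
    tokio::select! {
        outcome = fn_exec => CmdOutcome::Completed(outcome),
        _ = interrupt_rx.recv() => CmdOutcome::Interrupted,
    }
}
```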
framework / implementor `Endpoint`:
```rust
impl ProgressEndpoint for CliEndpoint {
    async fn progress_begin(&mut self) {}
    async fn progress_update(&mut self) {}
    async fn progress_end(&mut self) {}
}

impl OutputEndpoint for CliEndpoint {
    async fn present(&mut self) {}
}

impl InputEndpoint for CliEndpoint {
    async fn interrupt_channel(&mut self) -> Receiver<InterruptSignal> {}
}

impl IntoEndpoint for CliEndpoint {
    async fn into_endpoint(self) -> Endpoint {}
}

/// Cloneable, so developers can choose whether or not this endpoint
/// is used for input/progress/outcome output.
///
/// Held by the Peace framework.
struct Endpoint {
    output_tx: Sender<Box<dyn Presentable>>,
    progress_tx: Sender<ProgressUpdate>,
    interrupt_rx: Receiver<InterruptSignal>,
}
```
developer:

```rust
// framework:
// CmdBase - needs `output` only for progress

// developer, one of:
let cmd_ctx_builder = CmdCtx::builder_x()
    .with_input(input)
    .with_output(output)
    .await?;

let cmd_ctx_builder = CmdCtx::builder_x()
    .with_endpoint(endpoint) // only one endpoint?
    .await?;
```
Should we support multiple input / output endpoints?
We may want to have executions / progress / telemetry from a single request, so multiple output endpoints are a plausible use case.
An execution may be interrupted by a user, by a failsafe from an automation infrastructure maintainer, or by inbuilt rate-limiting safeguards, so multiple input endpoints are plausible.
Would all interruptions be through the same input endpoint?
- `interrupt_tx` must be acquirable / accessible from another thread, as interruption would be another request.
- `interrupt_tx` needs to be stored in memory.
- One `interrupt_tx` per command context is convenient -- otherwise we would have to poll multiple `interrupt_rx`s, one for each endpoint.
- A `SIGINT` may be sent on an automation web server (and interrupt all processes).

To support interruptions from a separate task, we need to capture process signals (e.g. `SIGINT`), but we take it to mean we will interrupt all executions.
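A sketch of what that separate interrupt-all task could look like, assuming `tokio::signal::ctrl_c` (tokio's `signal` feature) and a hypothetical in-memory registry of `interrupt_tx`s:

```rust
use std::sync::{Arc, Mutex};

use tokio::sync::mpsc::Sender;

struct InterruptSignal;

/// Hypothetical in-memory registry: one `interrupt_tx` per command context.
type InterruptRegistry = Arc<Mutex<Vec<Sender<InterruptSignal>>>>;

/// Separate task: capture `SIGINT` / ctrl-c and interrupt all executions.
async fn signal_listener(registry: InterruptRegistry) {
    if tokio::signal::ctrl_c().await.is_ok() {
        // Clone the senders out so the mutex guard is not held across `.await`.
        let senders = registry.lock().unwrap().clone();
        for interrupt_tx in senders {
            // Ignore send errors: that execution may already have completed.
            let _ = interrupt_tx.send(InterruptSignal).await;
        }
    }
}
```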
Would all command progress / outcome be written through the same output endpoint?
Should we refactor the codegen crate now?
We should do it when we've settled on a design.
1 Web server in this context means a web API, e.g. a REST API or a server function in leptos.
2 WASM here means pure WASM automation, independent of rendering -- which could be client side rendering (CSR) or server side rendering (SSR). The WASM automation could still send information to the server for telemetry purposes.
- `CmdCtx` is used to execute one or more `*Cmd`s.
- A `Cmd` is to be executed, with X parameters.
- Add `CmdBlock`, which is one "iterator operation" for all items -- one of: discovering items, or cleaning up items, or ensuring items. This is a genericized `*Cmd::exec` call.
Each `CmdBlock` could have errors per item, and it could also have an error for the full block. For example, state discovery may fail to discover state for one or more items, or it could fail to serialize states to storage.
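A hypothetical outcome type that captures both failure levels:

```rust
use std::collections::BTreeMap;

/// Placeholder identifier / error types for illustration.
type ItemId = String;
type ItemError = String;
type BlockError = String;

/// Outcome of one `CmdBlock`: individual items may fail (e.g. state discovery
/// for one item), and the block as a whole may also fail (e.g. serializing
/// the discovered states to storage).
struct CmdBlockOutcome<T> {
    value: T,
    item_errors: BTreeMap<ItemId, ItemError>,
    block_error: Option<BlockError>,
}
```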
Add `CmdExecution`, which is the "full command" that discovers, cleans, and ensures. This is a `Vec<CmdBlock>`.
This is a queue of `CmdBlock`s to execute, and so it creates the `interrupt_rx` to pass to the `CmdBlock`, and itself holds the `interrupt_tx` to send the interruption signal.
Developers will call `CmdExecution::interrupt` to interrupt the current execution, which propagates down to the `CmdBlock`.
`CmdExecution` is a long-lived item, which can be `await`ed until it is complete.
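A sketch of what `CmdExecution` might look like under these assumptions; `CmdBlock` is a placeholder and the block execution itself is elided:

```rust
use tokio::sync::mpsc::{self, Receiver, Sender};

struct InterruptSignal;
struct CmdBlock; // placeholder for one "iterator operation" over all items

/// Hypothetical shape of the full command: a queue of blocks plus the
/// interruption channel.
struct CmdExecution {
    blocks: Vec<CmdBlock>,
    interrupt_tx: Sender<InterruptSignal>,
    interrupt_rx: Receiver<InterruptSignal>,
}

impl CmdExecution {
    fn new(blocks: Vec<CmdBlock>) -> Self {
        let (interrupt_tx, interrupt_rx) = mpsc::channel(16);
        Self { blocks, interrupt_tx, interrupt_rx }
    }

    /// Called by developers (or another task) to interrupt the current
    /// execution; the signal propagates down to the running `CmdBlock`.
    async fn interrupt(&self) {
        let _ = self.interrupt_tx.send(InterruptSignal).await;
    }

    /// Long-lived: `await` this until all blocks have run or an interruption
    /// is received.
    async fn run(&mut self) {
        let blocks = std::mem::take(&mut self.blocks);
        for _block in blocks {
            if self.interrupt_rx.try_recv().is_ok() {
                break; // interrupted: skip the remaining blocks
            }
            // ... execute the block, racing it against `interrupt_rx.recv()` ...
        }
    }
}
```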
Pass a `Sender<CmdBlock>` to `CmdCtx`, so that `*Cmd`s can send in a `CmdBlock`.
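A sketch of that hand-off, with hypothetical names:

```rust
use tokio::sync::mpsc::Sender;

struct CmdBlock; // placeholder for one iterator operation

/// Hypothetical: a `*Cmd` no longer executes items directly; it queues a
/// `CmdBlock` into the `CmdCtx`, which owns the execution.
async fn states_discover_cmd(cmd_block_tx: &Sender<CmdBlock>) {
    let discover_block = CmdBlock;
    // If the `CmdCtx` has dropped the receiver there is nothing left to run
    // the block, so the send error can be ignored here.
    let _ = cmd_block_tx.send(discover_block).await;
}
```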
Developers need to either:

- call `*Cmd::exec`s, which wait for the outcome, then return; or
- call `*Cmd::exec_bg`s, which return an execution ID.

CLI invocation will use `*Cmd::exec`, which will race (see the sketch below):

- `await`ing the `CmdExecution`'s completion, then returning the `Outcome`.
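The other arm of the race is not spelled out above; assuming it is the process signal (consistent with the `SIGINT` discussion earlier), a CLI-style `exec` could look roughly like:

```rust
use std::future::Future;

struct Outcome; // placeholder

enum ExecResult {
    Completed(Outcome),
    Interrupted,
}

/// CLI-style `*Cmd::exec`: await the `CmdExecution`'s completion, racing it
/// against ctrl-c so the user can interrupt from the terminal.
async fn cli_exec(cmd_execution: impl Future<Output = Outcome>) -> ExecResult {
    tokio::select! {
        outcome = cmd_execution => ExecResult::Completed(outcome),
        _ = tokio::signal::ctrl_c() => ExecResult::Interrupted,
    }
}
```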
Add a `CmdExecutionMap`, which holds all executions for the process. This is an `IndexMap<ExecutionId, CmdExecution>`.
There should be a channel for `CmdExecRequest`: `(cmd_exec_request_tx, cmd_exec_request_rx)`.
Executions will run on the thread that holds the map, and maybe they need to be spawned on that thread as well(?).
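A sketch of that executor loop, assuming the `indexmap` crate and a hypothetical `CmdExecRequest` shape:

```rust
use indexmap::IndexMap;
use tokio::sync::mpsc::Receiver;

type ExecutionId = u64;
struct CmdExecution; // placeholder, see the earlier sketch

/// Hypothetical request sent over `(cmd_exec_request_tx, cmd_exec_request_rx)`.
struct CmdExecRequest {
    execution_id: ExecutionId,
    cmd_execution: CmdExecution,
}

/// Runs on the thread that owns the map: receives requests, tracks the
/// executions by ID, and drives (or spawns) them.
async fn cmd_executor(mut cmd_exec_request_rx: Receiver<CmdExecRequest>) {
    let mut cmd_execution_map: IndexMap<ExecutionId, CmdExecution> = IndexMap::new();

    while let Some(request) = cmd_exec_request_rx.recv().await {
        let CmdExecRequest { execution_id, cmd_execution } = request;
        // Keep the execution in the map so later requests (progress polling,
        // interruption) can look it up by ID.
        cmd_execution_map.insert(execution_id, cmd_execution);
        // ... spawn or drive the execution here ...
    }
}
```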
The web server will be polled for progress, which returns "progress or done", then later will be queried for the `Outcome`.
A web server request for an interruption will call the `CmdExecution` to interrupt the execution. `CmdExecution` will return "interruption request received" or "nothing to interrupt".
Subsequent polling of progress will automatically discover the interruption.
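Independent of the web framework, the interruption handler body could look roughly like this (types stubbed from the sketches above):

```rust
use indexmap::IndexMap;

type ExecutionId = u64;
type CmdExecutionMap = IndexMap<ExecutionId, CmdExecution>;

struct CmdExecution; // stub; see the earlier sketch for the interrupt channel

impl CmdExecution {
    async fn interrupt(&self) { /* sends `InterruptSignal` over its channel */ }
}

enum InterruptResponse {
    InterruptionRequestReceived,
    NothingToInterrupt,
}

/// Web-framework-agnostic sketch of the interruption request handler body.
async fn handle_interrupt_request(
    cmd_execution_map: &CmdExecutionMap,
    execution_id: ExecutionId,
) -> InterruptResponse {
    match cmd_execution_map.get(&execution_id) {
        Some(cmd_execution) => {
            // No lock is needed to hold `&CmdExecution`; `interrupt` goes
            // through a channel internally.
            cmd_execution.interrupt().await;
            InterruptResponse::InterruptionRequestReceived
        }
        None => InterruptResponse::NothingToInterrupt,
    }
}
```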
`CmdExecution` will use channels behind the scenes, so that `CmdBlock`s and the web server can hold onto `&CmdExecution` without needing to request a lock.
The WASM app will have progress / outcome pushed to it, which it can then use to update the UI -- e.g. for leptos CSR (`create_effect`), or SSR (`create_resource`, with a server function).
- Enables users to interrupt an execution through resilience.
- Store states for items that have been applied.
- Cancellation points within an item's `apply_exec()` may be deferred.
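A hypothetical illustration of deferred cancellation points within an item's `apply_exec()`:

```rust
use tokio::sync::mpsc::Receiver;

struct InterruptSignal;

/// Hypothetical item apply: interruption is only honoured at explicit
/// cancellation points, so an in-flight step is never left half-done --
/// between points, the interruption is effectively deferred.
async fn apply_exec(interrupt_rx: &mut Receiver<InterruptSignal>) -> Result<(), InterruptSignal> {
    if let Ok(interrupt) = interrupt_rx.try_recv() {
        return Err(interrupt); // cancellation point: nothing started yet
    }

    // ... step 1: e.g. create the resource (not interruptible mid-step) ...

    if let Ok(interrupt) = interrupt_rx.try_recv() {
        // Cancellation point: state for the completed step has been stored,
        // so stopping here is resilient.
        return Err(interrupt);
    }

    // ... step 2: e.g. tag the resource ...

    Ok(())
}
```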
Looked at:

- `Finomnis/tokio-graceful-shutdown`: not suitable for `ApplyCmd` -- this is more for shutting down a long running service.
- `jonhoo/stream-cancel`: looks like it would work; we need to capture the interruption signals ourselves. See how this is done in `tokio-graceful-shutdown/_/signal_handling.rs`.
- `plabayo/tokio-graceful`
- `tokio/topics/shutdown`