Modifications

Modification 1: Don't pause "Failed" pipelines

- I can update the Pipeline spec without waiting for a "Failed" pipeline to become "Paused".
- I can update the NumaflowController and ISBService specs while Pipelines are either "Paused" or "Failed".
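The update-gating rules above can be sketched as a pair of predicates. This is a minimal, self-contained sketch: the function names and the string-based phase representation are illustrative assumptions, not Numaplane's actual identifiers.

```go
package main

import "fmt"

// pipelineUpdateAllowed reports whether a Pipeline spec update may proceed.
// Per this change, a "Failed" pipeline may be updated directly; we no longer
// wait for it to transition to "Paused" first.
func pipelineUpdateAllowed(phase string) bool {
	return phase == "Paused" || phase == "Failed"
}

// controllerUpdateAllowed reports whether a NumaflowController or ISBService
// spec update may proceed: every dependent pipeline must be "Paused" or "Failed".
func controllerUpdateAllowed(pipelinePhases []string) bool {
	for _, phase := range pipelinePhases {
		if !pipelineUpdateAllowed(phase) {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(pipelineUpdateAllowed("Failed"))                        // true
	fmt.Println(controllerUpdateAllowed([]string{"Paused", "Running"})) // false
}
```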
Verification

- A unit test checks ISBService (the same code is used for the Numaflow Controller, so presumably that is verified at the same time).
- Manually tested this sequence: create a bad pipeline which fails validation; attempt to update the failed pipeline with a change which doesn't fix the problem; then update the failed pipeline by fixing the validation error.
- Manually tested updating the ISBServiceRollout and NumaflowControllerRollout while the pipeline was "Failed".
Modification 2: Fix Pipeline Condition checking for PPND

We were accidentally checking `PipelineRollout.Conditions` rather than `Pipeline.Conditions` when waiting for the Pipeline to be fully reconciled.

Verification

I observed in the log that, at the time we resumed the Pipeline, all of the child Pipeline's Conditions were in a healthy state. I also observed that the daemon was up and running.
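The corrected check can be sketched as follows, inspecting the child Pipeline's Conditions rather than the parent rollout's. The `Condition` type mirrors the relevant fields of `metav1.Condition`, and the function name and Condition type strings are illustrative assumptions, not Numaplane's actual code.

```go
package main

import "fmt"

// Condition mirrors the fields of metav1.Condition that matter here; it is
// defined locally so this sketch is self-contained.
type Condition struct {
	Type   string
	Status string // "True", "False", or "Unknown"
}

// pipelineFullyReconciled inspects the child Pipeline's Conditions (not the
// parent PipelineRollout's, which was the bug) and reports whether every
// Condition is True.
func pipelineFullyReconciled(conds []Condition) bool {
	for _, c := range conds {
		if c.Status != "True" {
			return false
		}
	}
	return true
}

func main() {
	conds := []Condition{
		{Type: "DaemonServiceHealthy", Status: "True"},
		{Type: "VerticesHealthy", Status: "False"},
	}
	fmt.Println(pipelineFullyReconciled(conds)) // false: one Condition is not yet True
}
```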
Modification 3: e2e test

- Incorporated the latest Numaflow release into the e2e test.
- Increased the suite timeout and added a comment in case we need to increase it further in the future.
- Fixed the Makefile, which was setting an environment variable for the test related to bringing up a separate Kubernetes environment, which we aren't using.
Fixes #230