Which problem is this PR solving?

Refinery 2.8 attempts to drain data to remaining peers on shutdown, and it has a configurable ShutdownDelay field that gives it time to process remaining work while shutting down. In Kubernetes, this option needs to be in line with terminationGracePeriodSeconds to ensure Kubernetes doesn't kill pods early.

It's the same change as #360.

Short description of the changes

- Added a terminationGracePeriodSeconds configuration option in values.yaml.
- Set ShutdownDelay to terminationGracePeriodSeconds minus 5 seconds by default; users can override this value in their values.yaml (see the sketch after this list).
- Added the -hnyinternal tag for the change.
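As a rough sketch of how the two settings relate (the layout under config is an assumption about the chart's schema, assuming ShutdownDelay sits in Refinery's Collection config section; the exact keys in the chart may differ):

```yaml
# values.yaml (illustrative sketch, not the chart's exact schema)

# Copied into the Deployment's pod spec; Kubernetes waits this long after
# SIGTERM before force-killing the pod.
terminationGracePeriodSeconds: 30

config:
  Collection:
    # The chart defaults this to terminationGracePeriodSeconds minus
    # 5 seconds (25s for the 30s grace period above); setting it here
    # explicitly overrides that default.
    ShutdownDelay: 25s
```

In the templates, that default could be derived with something along the lines of `sub .Values.terminationGracePeriodSeconds 5`, though the exact expression used in the chart may differ.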
How to verify that this has the expected result
Tested locally using kind.
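For a quick local check (the values and resource layout below are illustrative, not taken from the PR), one can install into a kind cluster or simply render the chart with helm template after setting terminationGracePeriodSeconds to, say, 60, and confirm the Deployment and the generated Refinery config stay in step:

```yaml
# Illustrative excerpts of what the rendered manifests would be expected to
# contain with terminationGracePeriodSeconds set to 60.

# Deployment: the pod spec picks up the configured grace period.
spec:
  template:
    spec:
      terminationGracePeriodSeconds: 60

# Refinery config: ShutdownDelay defaults to the grace period minus
# 5 seconds unless overridden in values.yaml.
Collection:
  ShutdownDelay: 55s
```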