hsushmitha opened 4 days ago
Is this about the Trino Helm chart? If yes, can you include the values to reproduce this?
Yes, it is about the Trino Helm chart. Attaching the deployment config and values file to reproduce the issue.
Which chart version are you using? How do you apply the changes you included in the deployment-*.txt files?
In the latest chart version, you have to set `coordinator.terminationGracePeriodSeconds` and `worker.terminationGracePeriodSeconds`. See https://trinodb.github.io/charts/charts/trino/
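For reference, a minimal values.yaml sketch with those two keys might look like this (300 is an example value; verify the exact schema against the chart docs linked above):

```yaml
# Sketch of values.yaml for a recent trino chart version (assumed layout;
# check https://trinodb.github.io/charts/charts/trino/ for your version)
coordinator:
  terminationGracePeriodSeconds: 300
worker:
  terminationGracePeriodSeconds: 300
```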
We are using Helm chart version trino-0.8.0.
We deploy the changes with `helm upgrade trino . -f values.yaml -n trino`.
The attached files are YAML files; since we couldn't attach .yaml files here, we attached .txt versions of them.
That's very old. I don't know how the chart was structured back then, and I can't help anymore. Can you try using the latest version?
We have set `terminationGracePeriodSeconds` to 300 on the Trino coordinator and worker pods. During autoscaling, when the number of worker pods scales up and down, pods terminate instantly without waiting for the queries running on them to finish. We have also set `shutdown.grace-period=300s` on the Trino coordinator and workers. The expectation is that Trino worker pods wait 300 seconds for the tasks on the worker to complete instead of terminating instantly. We have set
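One thing worth checking: `terminationGracePeriodSeconds` only controls how long Kubernetes waits before sending SIGKILL, while Trino's `shutdown.grace-period` only takes effect once the worker has actually been put into the SHUTTING_DOWN state via its REST API. A common pattern is a preStop hook on the worker pod that triggers that state change. A rough sketch (assuming curl is available in the image, the default HTTP port 8080, and "admin" as a placeholder user name):

```yaml
# Sketch: preStop hook that asks the Trino worker to shut down gracefully
# before Kubernetes starts the termination countdown. Port and user are
# assumptions from a default, unauthenticated setup.
lifecycle:
  preStop:
    exec:
      command:
        - curl
        - -X
        - PUT
        - -d
        - '"SHUTTING_DOWN"'
        - -H
        - "Content-Type: application/json"
        - -H
        - "X-Trino-User: admin"
        - http://localhost:8080/v1/info/state
```

With this in place, the worker stops accepting new tasks, drains running ones for up to `shutdown.grace-period`, and `terminationGracePeriodSeconds` just needs to be at least as long as that drain window.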
`starburstWorkerShutdownGracePeriodSeconds: 300`, which corresponds to `shutdown.grace-period=300s`, and `deploymentTerminationGracePeriodSeconds: 300`, which corresponds to `terminationGracePeriodSeconds`, in Starburst, and there the worker pods terminate after waiting 300 seconds for query tasks to run to completion, as expected.