Guillermogsjc opened 2 years ago
Specifically, the NAME shown in the UI is `kargentina-1-279wc.s2t-core-lower.wait-main-get-logs(0)`, and the POD NAME shown in the UI is `kargentina-1-279wc-931539070`, but the pod name inside k8s is `kargentina-1-279wc-wait-main-get-logs-931539070`. So the Argo server is disconnected from what is happening in K8s (the workflow controller is fine, because the stage finishes, so I guess it is tracking the correct pod name).
Logs of the workflow controller: https://gist.github.com/Guillermogsjc/710bd617eae5e4eee0396dc3fc581d68, obtained through `kubectl logs -n argo deploy/workflow-controller | grep kargentina-1-279wc`.
The weirdest thing is that I have a parallel workflow, created from the same templates, whose pod names are correctly aligned.
@JPZ13 Can you look at this issue?
@isubasinghe @rohankmr414 Can I get one of you two to look into this? I'll be available to review it Wednesday or Friday if y'all can get a fix out before then
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. If this is a mentoring request, please provide an update here. Thank you for your contributions.
Pre-requisites

Tested with the `:latest` image tag.
What happened / what you expected to happen?

Hi, since `v3.4.3` the field POD NAME in the UI is not correlated with the pod created in k8s in some cases. As an example, given the node names:

- `foo-lg77s`
- `foo-lg77s.bar`
- `foo-lg77s.bar.my-stage`
- `foo-lg77s.bar.my-stage(0)`

In this example, the UI shows the pod name `foo-lg77s-3278223386`, while the pod created in k8s is `foo-lg77s-my-stage-3278223386`.

Somewhere in there a mismatch arises between the pod names generated for tasks of tasks in DAGs (nested DAGs), so the UI will not show the logs.
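For context, the v2 pod-naming scheme (the default since v3.4) appears to build the suffix from an FNV-1a 32-bit hash of the node name. A minimal sketch in Python, assuming that scheme; the real Go implementation also truncates the prefix to fit k8s name-length limits, which is omitted here, and the concrete names are only illustrative:

```python
def fnv1a_32(s: str) -> int:
    """FNV-1a 32-bit hash (the algorithm Go's hash/fnv package implements)."""
    h = 2166136261  # FNV-1a 32-bit offset basis
    for b in s.encode("utf-8"):
        h ^= b
        h = (h * 16777619) % (1 << 32)  # multiply by FNV prime, wrap to 32 bits
    return h

def v2_pod_name(workflow_name: str, template_name: str, node_name: str) -> str:
    """Sketch of the assumed v2 format: <workflow>-<template>-<hash(nodeName)>."""
    return f"{workflow_name}-{template_name}-{fnv1a_32(node_name)}"

# Hypothetical inputs, mirroring the shapes of the names in this report:
print(v2_pod_name("foo-lg77s", "my-stage", "foo-lg77s.bar.my-stage(0)"))
```

If the UI computes this hash over a different node-name string than the controller does (e.g. with or without the `(0)` retry suffix, or a different nesting path), the two sides would produce different pod names, which would match the symptom above.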
This was not happening with versions before `v3.4.3` (if I remember well).

Version

`v3.4.3`
Paste a small workflow that reproduces the issue. We must be able to run the workflow; don't enter a workflow that uses private images.