Closed: wesbragagt closed this issue 3 months ago
Can you provide the spec of the sub-workflow as well as the logs from the pod?
@fullykubed Spec submitted:
metadata:
name: bpd-implentio-api
namespace: cicd
uid: e2b85774-11c9-4c80-a682-d5dfb5a2a315
resourceVersion: '59926267'
generation: 2
creationTimestamp: '2024-08-14T21:35:13Z'
labels:
id: bpd-implentio-api-8b4ba5b2878d6916
panfactum.com/environment: development
panfactum.com/local: 'false'
panfactum.com/module: wf_spec
panfactum.com/prevent-lifetime-eviction: 'true'
panfactum.com/region: us-west-2
panfactum.com/root-module: implentio_cicd
panfactum.com/scheduler: 'true'
panfactum.com/stack-commit: d60fefd99d4be7a50c582ee25dfcb00976762f4c
panfactum.com/stack-version: edge.24-07-08
panfactum.com/workload: bpd-implentio-api
managedFields:
- manager: kubectl
operation: Apply
apiVersion: argoproj.io/v1alpha1
time: '2024-08-14T21:36:07Z'
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
f:id: {}
f:panfactum.com/environment: {}
f:panfactum.com/local: {}
f:panfactum.com/module: {}
f:panfactum.com/prevent-lifetime-eviction: {}
f:panfactum.com/region: {}
f:panfactum.com/root-module: {}
f:panfactum.com/scheduler: {}
f:panfactum.com/stack-commit: {}
f:panfactum.com/stack-version: {}
f:panfactum.com/workload: {}
f:spec: {}
spec:
templates:
- name: build-template
inputs: {}
outputs: {}
metadata: {}
resource:
action: create
manifest: |
"apiVersion": "argoproj.io/v1/alpha1"
"kind": "Workflow"
"metadata":
"generateName": "build-"
"namespace": "cicd"
"spec":
"arguments":
"parameters":
- "name": "git_ref"
"value": "main"
"workflowTemplateRef":
"name": "build-implentio-api"
successCondition: status.phase == Succeeded
failureCondition: status.phase == (Failed, Error)
- name: deploy-template
inputs: {}
outputs: {}
metadata: {}
resource:
action: create
manifest: |
"apiVersion": "argoproj.io/v1/alpha1"
"kind": "Workflow"
"metadata":
"generateName": "deploy-"
"namespace": "cicd"
"spec":
"arguments":
"parameters":
- "name": "git_ref"
"value": "main"
"workflowTemplateRef":
"name": "deploy-implentio-api"
successCondition: status.phase == Succeeded
failureCondition: status.phase == (Failed, Error)
- name: entry
inputs: {}
outputs: {}
metadata: {}
dag:
tasks:
- name: build
template: build-template
arguments: {}
- name: deploy
template: deploy-template
arguments: {}
dependencies:
- build
entrypoint: entry
arguments: {}
serviceAccountName: bpd-implentio-api-6625c2ee8a8a0054
suspend: false
affinity:
nodeAffinity: {}
dnsPolicy: ClusterFirst
ttlStrategy:
secondsAfterCompletion: 3600
secondsAfterSuccess: 3600
secondsAfterFailure: 3600
activeDeadlineSeconds: 86400
schedulerName: panfactum
podGC:
strategy: OnWorkflowCompletion
labelSelector: {}
deleteDelayDuration: 3m0s
securityContext:
runAsUser: 1000
runAsGroup: 1000
runAsNonRoot: false
fsGroup: 1000
fsGroupChangePolicy: OnRootMismatch
synchronization:
semaphore:
configMapKeyRef:
name: bpd-implentio-api-6625c2ee8a8a0054-sync
key: workflow
volumeClaimGC:
strategy: OnWorkflowCompletion
retryStrategy:
limit: 5
retryPolicy: Always
backoff:
duration: 30s
factor: 2
maxDuration: 3600s
podMetadata:
annotations:
karpenter.sh/do-not-disrupt: 'true'
labels:
id: bpd-implentio-api-8b4ba5b2878d6916
panfactum.com/environment: development
panfactum.com/local: 'false'
panfactum.com/module: wf_spec
panfactum.com/prevent-lifetime-eviction: 'true'
panfactum.com/region: us-west-2
panfactum.com/root-module: implentio_cicd
panfactum.com/scheduler: 'true'
panfactum.com/stack-commit: d60fefd99d4be7a50c582ee25dfcb00976762f4c
panfactum.com/stack-version: edge.24-07-08
panfactum.com/workload: bpd-implentio-api
archiveLogs: true
workflowMetadata:
labels:
id: bpd-implentio-api-8b4ba5b2878d6916
panfactum.com/environment: development
panfactum.com/local: 'false'
panfactum.com/module: wf_spec
panfactum.com/prevent-lifetime-eviction: 'true'
panfactum.com/region: us-west-2
panfactum.com/root-module: implentio_cicd
panfactum.com/scheduler: 'true'
panfactum.com/stack-commit: d60fefd99d4be7a50c582ee25dfcb00976762f4c
panfactum.com/stack-version: edge.24-07-08
panfactum.com/workload: bpd-implentio-api
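(Aside: one thing that may be worth checking in the embedded manifests above is the apiVersion, which is written as `argoproj.io/v1/alpha1`. Argo Workflows objects use `argoproj.io/v1alpha1`, with a single slash between group and version; an extra slash makes the apiVersion invalid. A minimal stdlib-only sanity check, as a hypothetical sketch:)

```python
import re

# A namespaced Kubernetes apiVersion has the form "<group>/<version>" with
# exactly one slash; Argo Workflows objects use "argoproj.io/v1alpha1".
GROUP_VERSION = r"[a-z0-9.\-]+/v[0-9]+[a-z0-9]*"

def looks_valid(api_version: str) -> bool:
    return re.fullmatch(GROUP_VERSION, api_version) is not None

# apiVersion as pasted in the resource template's manifest above
print(looks_valid("argoproj.io/v1/alpha1"))  # False: the extra slash splits the version
print(looks_valid("argoproj.io/v1alpha1"))   # True
```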
I believe that is the manifest of the top-level workflow, not the sub-workflow.
@fullykubed Regarding logs, these are all I could tail from the pod:
{"argo":true,"level":"info","msg":"capturing logs","time":"2024-08-14T22:00:07.241Z"}
time="2024-08-14T22:00:07.270Z" level=info msg="Starting Workflow Executor" version=v3.5.5
time="2024-08-14T22:00:07.272Z" level=info msg="Using executor retry strategy" Duration=1s Factor=1.6 Jitter=0.5 Steps=5
time="2024-08-14T22:00:07.272Z" level=info msg="Executor initialized" deadline="2024-08-14 22:58:53 +0000 UTC" includeScriptOutput=false namespace=cicd podName=bpd-implentio-api-bsfc5-build-template-2144552081 templateName=build-template version="&Version{Version:v3.5.5,BuildDate:2024-02-29T20:59:20Z,GitCommit:c80b2e91ebd7e7f604e88442f45ec630380effa0,GitTag:v3.5.5,GitTreeState:clean,GoVersion:go1.21.7,Compiler:gc,Platform:linux/amd64,}"
time="2024-08-14T22:00:07.296Z" level=info msg="Loading manifest to /tmp/manifest.yaml"
time="2024-08-14T22:00:07.296Z" level=info msg="kubectl create -f /tmp/manifest.yaml -o json"
@fullykubed I'm not sure how to get to the sub-workflow.
@fullykubed Perhaps this is the manifest you mean?
name: build-template
inputs: {}
outputs: {}
metadata: {}
resource:
action: create
manifest: |
"apiVersion": "argoproj.io/v1/alpha1"
"kind": "Workflow"
"metadata":
"generateName": "build-"
"namespace": "cicd"
"spec":
"arguments":
"parameters":
- "name": "git_ref"
"value": "main"
"workflowTemplateRef":
"name": "build-implentio-api"
successCondition: status.phase == Succeeded
failureCondition: status.phase == (Failed, Error)
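(For reference: if the resource template had successfully created the child workflow, it would exist as its own Workflow object in the namespace. Assuming kubectl access to the cluster, one way to look for it, sketched below with a `<name>` placeholder:)

```shell
# List all Workflow objects in the cicd namespace; a child created by
# build-template would have a name starting with the generateName prefix "build-"
kubectl get workflows.argoproj.io -n cicd

# Then inspect a specific child workflow (replace <name> with the actual name)
kubectl describe workflow <name> -n cicd
```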
Can you share the manifest of the pod?
Pod manifest:
apiVersion: v1
kind: Pod
metadata:
annotations:
karpenter.sh/do-not-disrupt: "true"
kubectl.kubernetes.io/default-container: main
linkerd.io/created-by: linkerd/proxy-injector edge-24.5.1
linkerd.io/inject: enabled
linkerd.io/proxy-version: edge-24.5.1
linkerd.io/trust-root-sha256: 3b05529c8b2d7e76636622d81334fa3f61af673234377910b8fbef4b53103b65
workflows.argoproj.io/node-id: bpd-implentio-api-lls4g-3239905087
workflows.argoproj.io/node-name: bpd-implentio-api-lls4g(0).build(0)
creationTimestamp: "2024-08-14T22:14:47Z"
labels:
id: bpd-implentio-api-8b4ba5b2878d6916
linkerd.io/control-plane-ns: linkerd
linkerd.io/workload-ns: cicd
panfactum.com/environment: development
panfactum.com/local: "false"
panfactum.com/module: wf_spec
panfactum.com/prevent-lifetime-eviction: "true"
panfactum.com/region: us-west-2
panfactum.com/root-module: implentio_cicd
panfactum.com/scheduler: "true"
panfactum.com/stack-commit: d60fefd99d4be7a50c582ee25dfcb00976762f4c
panfactum.com/stack-version: edge.24-07-08
panfactum.com/workload: bpd-implentio-api
workflows.argoproj.io/completed: "false"
workflows.argoproj.io/workflow: bpd-implentio-api-lls4g
name: bpd-implentio-api-lls4g-build-template-3239905087
namespace: cicd
ownerReferences:
- apiVersion: argoproj.io/v1alpha1
blockOwnerDeletion: true
controller: true
kind: Workflow
name: bpd-implentio-api-lls4g
uid: 29ed06b1-9745-47f1-ba1c-9475afca6900
resourceVersion: "59963519"
uid: 6d61765a-adad-4357-9c5d-df0b3bb68d3d
spec:
activeDeadlineSeconds: 86399
affinity:
nodeAffinity: {}
containers:
- command:
- /var/run/argo/argoexec
- emissary
- --loglevel
- info
- --log-format
- json
- --
- argoexec
- resource
- create
env:
- name: ARGO_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: ARGO_POD_UID
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.uid
- name: GODEBUG
value: x509ignoreCN=0
- name: ARGO_WORKFLOW_NAME
value: bpd-implentio-api-lls4g
- name: ARGO_WORKFLOW_UID
value: 29ed06b1-9745-47f1-ba1c-9475afca6900
- name: ARGO_CONTAINER_NAME
value: main
- name: ARGO_TEMPLATE
value: '{"name":"build-template","inputs":{},"outputs":{},"metadata":{},"resource":{"action":"create","manifest":"\"apiVersion\":
\"argoproj.io/v1/alpha1\"\n\"kind\": \"Workflow\"\n\"metadata\":\n \"generateName\":
\"build-\"\n \"namespace\": \"cicd\"\n\"spec\":\n \"arguments\":\n \"parameters\":\n -
\"name\": \"git_ref\"\n \"value\": \"main\"\n \"workflowTemplateRef\":\n \"name\":
\"build-implentio-api\"\n","successCondition":"status.phase == Succeeded","failureCondition":"status.phase
== (Failed, Error)"},"archiveLocation":{"archiveLogs":true,"s3":{"endpoint":"s3.amazonaws.com","bucket":"argo-38fca6aa631185d1","region":"us-west-2","key":"bpd-implentio-api-lls4g/bpd-implentio-api-lls4g-build-template-3239905087"}}}'
- name: ARGO_NODE_ID
value: bpd-implentio-api-lls4g-3239905087
- name: ARGO_INCLUDE_SCRIPT_OUTPUT
value: "false"
- name: ARGO_DEADLINE
value: "2024-08-15T22:14:47Z"
- name: ARGO_PROGRESS_FILE
value: /var/run/argo/progress
- name: ARGO_PROGRESS_PATCH_TICK_DURATION
value: 1m0s
- name: ARGO_PROGRESS_FILE_TICK_DURATION
value: 3s
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
- name: AWS_DEFAULT_REGION
value: us-west-2
- name: AWS_REGION
value: us-west-2
- name: AWS_ROLE_ARN
value: arn:aws:iam::730335560480:role/bpd-implentio-api-6625c2ee8a8a0054-20240814213047385600000003
- name: AWS_WEB_IDENTITY_TOKEN_FILE
value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
image: 730335560480.dkr.ecr.us-west-2.amazonaws.com/quay/argoproj/argoexec:v3.5.5
imagePullPolicy: Always
name: main
resources:
limits:
memory: 70Mi
requests:
cpu: 10m
memory: 50Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 8737
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp
name: tmp-dir-argo
subPath: "0"
- mountPath: /var/run/argo
name: var-run-argo
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-m2rb7
readOnly: true
- mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
name: aws-iam-token
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
initContainers:
- args:
- --incoming-proxy-port
- "4143"
- --outgoing-proxy-port
- "4140"
- --proxy-uid
- "2102"
- --inbound-ports-to-ignore
- 4190,4191,4567,4568
- --outbound-ports-to-ignore
- 4567,4568
- --log-format
- json
- --log-level
- warn
env:
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
- name: AWS_DEFAULT_REGION
value: us-west-2
- name: AWS_REGION
value: us-west-2
- name: AWS_ROLE_ARN
value: arn:aws:iam::730335560480:role/bpd-implentio-api-6625c2ee8a8a0054-20240814213047385600000003
- name: AWS_WEB_IDENTITY_TOKEN_FILE
value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
image: 730335560480.dkr.ecr.us-west-2.amazonaws.com/github/linkerd/proxy-init:v2.4.0
imagePullPolicy: IfNotPresent
name: linkerd-init
resources:
limits:
cpu: 100m
memory: 10Mi
requests:
cpu: 10m
memory: 10Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
drop:
- ALL
privileged: false
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 65534
seccompProfile:
type: RuntimeDefault
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /run
name: linkerd-proxy-init-xtables-lock
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-m2rb7
readOnly: true
- mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
name: aws-iam-token
readOnly: true
- env:
- name: _pod_name
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: _pod_ns
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: _pod_nodeName
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: LINKERD2_PROXY_LOG
value: warn,linkerd=warn,linkerd2_proxy=warn
- name: LINKERD2_PROXY_LOG_FORMAT
value: json
- name: LINKERD2_PROXY_DESTINATION_SVC_ADDR
value: linkerd-dst-headless.linkerd.svc.cluster.local.:8086
- name: LINKERD2_PROXY_DESTINATION_PROFILE_NETWORKS
value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8
- name: LINKERD2_PROXY_POLICY_SVC_ADDR
value: linkerd-policy.linkerd.svc.cluster.local.:8090
- name: LINKERD2_PROXY_POLICY_WORKLOAD
value: |
{"ns":"$(_pod_ns)", "pod":"$(_pod_name)"}
- name: LINKERD2_PROXY_INBOUND_DEFAULT_POLICY
value: all-unauthenticated
- name: LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS
value: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8
- name: LINKERD2_PROXY_CONTROL_STREAM_INITIAL_TIMEOUT
value: 3s
- name: LINKERD2_PROXY_CONTROL_STREAM_IDLE_TIMEOUT
value: 5m
- name: LINKERD2_PROXY_CONTROL_STREAM_LIFETIME
value: 1h
- name: LINKERD2_PROXY_INBOUND_CONNECT_TIMEOUT
value: 100ms
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_TIMEOUT
value: 1000ms
- name: LINKERD2_PROXY_OUTBOUND_DISCOVERY_IDLE_TIMEOUT
value: 5s
- name: LINKERD2_PROXY_INBOUND_DISCOVERY_IDLE_TIMEOUT
value: 90s
- name: LINKERD2_PROXY_CONTROL_LISTEN_ADDR
value: '[::]:4190'
- name: LINKERD2_PROXY_ADMIN_LISTEN_ADDR
value: '[::]:4191'
- name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR
value: 127.0.0.1:4140
- name: LINKERD2_PROXY_OUTBOUND_LISTEN_ADDRS
value: 127.0.0.1:4140,[::1]:4140
- name: LINKERD2_PROXY_INBOUND_LISTEN_ADDR
value: '[::]:4143'
- name: LINKERD2_PROXY_INBOUND_IPS
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIPs
- name: LINKERD2_PROXY_INBOUND_PORTS
- name: LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES
value: svc.cluster.local.
- name: LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE
value: 10000ms
- name: LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE
value: 10000ms
- name: LINKERD2_PROXY_INBOUND_SERVER_HTTP2_KEEP_ALIVE_INTERVAL
value: 10s
- name: LINKERD2_PROXY_INBOUND_SERVER_HTTP2_KEEP_ALIVE_TIMEOUT
value: 3s
- name: LINKERD2_PROXY_OUTBOUND_SERVER_HTTP2_KEEP_ALIVE_INTERVAL
value: 10s
- name: LINKERD2_PROXY_OUTBOUND_SERVER_HTTP2_KEEP_ALIVE_TIMEOUT
value: 3s
- name: LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION
value: 25,587,3306,4444,5432,6379,9300,11211
- name: LINKERD2_PROXY_DESTINATION_CONTEXT
value: |
{"ns":"$(_pod_ns)", "nodeName":"$(_pod_nodeName)", "pod":"$(_pod_name)"}
- name: _pod_sa
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.serviceAccountName
- name: _l5d_ns
value: linkerd
- name: _l5d_trustdomain
value: cluster.local
- name: LINKERD2_PROXY_IDENTITY_DIR
value: /var/run/linkerd/identity/end-entity
- name: LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS
value: |
-----BEGIN CERTIFICATE-----
MIIC2TCCAn+gAwIBAgIUV6KYl9l8OAcOtR3R4pJzMVy3WeUwCgYIKoZIzj0EAwIw
ZTESMBAGA1UEChMJcGFuZmFjdHVtMRQwEgYDVQQLEwtlbmdpbmVlcmluZzE5MDcG
A1UEAxMwaHR0cDovL3ZhdWx0LWFjdGl2ZS52YXVsdC5zdmMuY2x1c3Rlci5sb2Nh
bDo4MjAwMB4XDTI0MDYyNTE3MTQ0NFoXDTM0MDYyMzE3MTUxNFowZTESMBAGA1UE
ChMJcGFuZmFjdHVtMRQwEgYDVQQLEwtlbmdpbmVlcmluZzE5MDcGA1UEAxMwaHR0
cDovL3ZhdWx0LWFjdGl2ZS52YXVsdC5zdmMuY2x1c3Rlci5sb2NhbDo4MjAwMFkw
EwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE5JSOnh7YRHIFZf6PdIaNAtMf7sd+0C3z
HFoehYQMl1uI65ZEsycTXDwL4uNgV9zICVxvPioQ3YRnum5uImufOKOCAQswggEH
MA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRQSfHM
PdWZGIUPAhO3ha3aCi16vDAfBgNVHSMEGDAWgBRQSfHMPdWZGIUPAhO3ha3aCi16
vDBWBggrBgEFBQcBAQRKMEgwRgYIKwYBBQUHMAKGOmh0dHA6Ly92YXVsdC1hY3Rp
dmUudmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWw6ODIwMC92MS9wa2kvY2EwTAYDVR0f
BEUwQzBBoD+gPYY7aHR0cDovL3ZhdWx0LWFjdGl2ZS52YXVsdC5zdmMuY2x1c3Rl
ci5sb2NhbDo4MjAwL3YxL3BraS9jcmwwCgYIKoZIzj0EAwIDSAAwRQIgSDYliFwF
uYdW5q/nForHpiyl8cmedJFleBHlSQtTgcsCIQCFEuJtKitF6+80hO8iTHX/m4WU
vJuGjseNzDCFws96bQ==
-----END CERTIFICATE-----
- name: LINKERD2_PROXY_IDENTITY_TOKEN_FILE
value: /var/run/secrets/tokens/linkerd-identity-token
- name: LINKERD2_PROXY_IDENTITY_SVC_ADDR
value: linkerd-identity-headless.linkerd.svc.cluster.local.:8080
- name: LINKERD2_PROXY_IDENTITY_LOCAL_NAME
value: $(_pod_sa).$(_pod_ns).serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_IDENTITY_SVC_NAME
value: linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_DESTINATION_SVC_NAME
value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: LINKERD2_PROXY_POLICY_SVC_NAME
value: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
- name: AWS_DEFAULT_REGION
value: us-west-2
- name: AWS_REGION
value: us-west-2
- name: AWS_ROLE_ARN
value: arn:aws:iam::730335560480:role/bpd-implentio-api-6625c2ee8a8a0054-20240814213047385600000003
- name: AWS_WEB_IDENTITY_TOKEN_FILE
value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
image: 730335560480.dkr.ecr.us-west-2.amazonaws.com/github/linkerd/proxy:edge-24.5.1
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /live
port: 4191
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: linkerd-proxy
ports:
- containerPort: 4143
name: linkerd-proxy
protocol: TCP
- containerPort: 4191
name: linkerd-admin
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /ready
port: 4191
scheme: HTTP
initialDelaySeconds: 2
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
limits:
memory: 200Mi
requests:
memory: 10Mi
restartPolicy: Always
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsNonRoot: true
runAsUser: 2102
seccompProfile:
type: RuntimeDefault
startupProbe:
failureThreshold: 120
httpGet:
path: /ready
port: 4191
scheme: HTTP
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 1
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: FallbackToLogsOnError
volumeMounts:
- mountPath: /var/run/linkerd/identity/end-entity
name: linkerd-identity-end-entity
- mountPath: /var/run/secrets/tokens
name: linkerd-identity-token
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-m2rb7
readOnly: true
- mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
name: aws-iam-token
readOnly: true
- command:
- argoexec
- init
- --loglevel
- info
- --log-format
- json
env:
- name: ARGO_POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: ARGO_POD_UID
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.uid
- name: GODEBUG
value: x509ignoreCN=0
- name: ARGO_WORKFLOW_NAME
value: bpd-implentio-api-lls4g
- name: ARGO_WORKFLOW_UID
value: 29ed06b1-9745-47f1-ba1c-9475afca6900
- name: ARGO_CONTAINER_NAME
value: init
- name: ARGO_TEMPLATE
value: '{"name":"build-template","inputs":{},"outputs":{},"metadata":{},"resource":{"action":"create","manifest":"\"apiVersion\":
\"argoproj.io/v1/alpha1\"\n\"kind\": \"Workflow\"\n\"metadata\":\n \"generateName\":
\"build-\"\n \"namespace\": \"cicd\"\n\"spec\":\n \"arguments\":\n \"parameters\":\n -
\"name\": \"git_ref\"\n \"value\": \"main\"\n \"workflowTemplateRef\":\n \"name\":
\"build-implentio-api\"\n","successCondition":"status.phase == Succeeded","failureCondition":"status.phase
== (Failed, Error)"},"archiveLocation":{"archiveLogs":true,"s3":{"endpoint":"s3.amazonaws.com","bucket":"argo-38fca6aa631185d1","region":"us-west-2","key":"bpd-implentio-api-lls4g/bpd-implentio-api-lls4g-build-template-3239905087"}}}'
- name: ARGO_NODE_ID
value: bpd-implentio-api-lls4g-3239905087
- name: ARGO_INCLUDE_SCRIPT_OUTPUT
value: "false"
- name: ARGO_DEADLINE
value: "2024-08-15T22:14:47Z"
- name: ARGO_PROGRESS_FILE
value: /var/run/argo/progress
- name: ARGO_PROGRESS_PATCH_TICK_DURATION
value: 1m0s
- name: ARGO_PROGRESS_FILE_TICK_DURATION
value: 3s
- name: AWS_STS_REGIONAL_ENDPOINTS
value: regional
- name: AWS_DEFAULT_REGION
value: us-west-2
- name: AWS_REGION
value: us-west-2
- name: AWS_ROLE_ARN
value: arn:aws:iam::730335560480:role/bpd-implentio-api-6625c2ee8a8a0054-20240814213047385600000003
- name: AWS_WEB_IDENTITY_TOKEN_FILE
value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
image: 730335560480.dkr.ecr.us-west-2.amazonaws.com/quay/argoproj/argoexec:v3.5.5
imagePullPolicy: Always
name: init
resources:
limits:
memory: 70Mi
requests:
cpu: 10m
memory: 50Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
runAsNonRoot: true
runAsUser: 8737
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/argo
name: var-run-argo
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-m2rb7
readOnly: true
- mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
name: aws-iam-token
readOnly: true
preemptionPolicy: PreemptLowerPriority
priority: 0
priorityClassName: default
restartPolicy: Never
schedulerName: panfactum
securityContext:
fsGroup: 1000
fsGroupChangePolicy: OnRootMismatch
runAsGroup: 1000
runAsNonRoot: false
runAsUser: 1000
serviceAccount: bpd-implentio-api-6625c2ee8a8a0054
serviceAccountName: bpd-implentio-api-6625c2ee8a8a0054
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- name: aws-iam-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: sts.amazonaws.com
expirationSeconds: 86400
path: token
- emptyDir: {}
name: var-run-argo
- emptyDir: {}
name: tmp-dir-argo
- name: kube-api-access-m2rb7
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
- emptyDir: {}
name: linkerd-proxy-init-xtables-lock
- emptyDir:
medium: Memory
name: linkerd-identity-end-entity
- name: linkerd-identity-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: identity.l5d.io
expirationSeconds: 86400
path: linkerd-identity-token
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2024-08-14T22:14:47Z"
message: '0/6 nodes are available: 2 node(s) had untolerated taint {arm64: true},
2 node(s) had untolerated taint {burstable: true}, 2 node(s) had untolerated
taint {spot: true}. preemption: 0/6 nodes are available: 6 Preemption is not
helpful for scheduling.'
reason: Unschedulable
status: "False"
type: PodScheduled
phase: Pending
qosClass: Burstable
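(Aside: the status block above shows the pod was initially Unschedulable because every node carried an untolerated taint such as `arm64`, `burstable`, or `spot`. Assuming kubectl access, the taints on each node can be inspected with a sketch like:)

```shell
# Show each node's taint keys to see why the pod could not schedule initially
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'
```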
@fullykubed Upon describing the pod, I got:
Name: bpd-implentio-api-lls4g-build-template-3239905087
Namespace: cicd
Priority: 0
Priority Class Name: default
Service Account: bpd-implentio-api-6625c2ee8a8a0054
Node: ip-10-0-85-243.us-west-2.compute.internal/10.0.85.243
Start Time: Wed, 14 Aug 2024 17:15:28 -0500
Labels: id=bpd-implentio-api-8b4ba5b2878d6916
linkerd.io/control-plane-ns=linkerd
linkerd.io/workload-ns=cicd
panfactum.com/environment=development
panfactum.com/local=false
panfactum.com/module=wf_spec
panfactum.com/prevent-lifetime-eviction=true
panfactum.com/region=us-west-2
panfactum.com/root-module=implentio_cicd
panfactum.com/scheduler=true
panfactum.com/stack-commit=d60fefd99d4be7a50c582ee25dfcb00976762f4c
panfactum.com/stack-version=edge.24-07-08
panfactum.com/workload=bpd-implentio-api
workflows.argoproj.io/completed=false
workflows.argoproj.io/workflow=bpd-implentio-api-lls4g
Annotations: karpenter.sh/do-not-disrupt: true
kubectl.kubernetes.io/default-container: main
linkerd.io/created-by: linkerd/proxy-injector edge-24.5.1
linkerd.io/inject: enabled
linkerd.io/proxy-version: edge-24.5.1
linkerd.io/trust-root-sha256: 3b05529c8b2d7e76636622d81334fa3f61af673234377910b8fbef4b53103b65
workflows.argoproj.io/node-id: bpd-implentio-api-lls4g-3239905087
workflows.argoproj.io/node-name: bpd-implentio-api-lls4g(0).build(0)
Status: Failed
IP: 10.0.104.144
IPs:
IP: 10.0.104.144
Controlled By: Workflow/bpd-implentio-api-lls4g
Init Containers:
linkerd-init:
Container ID: containerd://84944cad37b81b9be0c5f26985554fc28f5119d3615feea9fbbefe6a071027e5
Image: 730335560480.dkr.ecr.us-west-2.amazonaws.com/github/linkerd/proxy-init:v2.4.0
Image ID: 730335560480.dkr.ecr.us-west-2.amazonaws.com/github/linkerd/proxy-init@sha256:5bd804267a4e0b585c5e6e1e1cbf5d91887ed73be84e35fe784df2331b6e9c61
Port: <none>
Host Port: <none>
SeccompProfile: RuntimeDefault
Args:
--incoming-proxy-port
4143
--outgoing-proxy-port
4140
--proxy-uid
2102
--inbound-ports-to-ignore
4190,4191,4567,4568
--outbound-ports-to-ignore
4567,4568
--log-format
json
--log-level
warn
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 14 Aug 2024 17:15:34 -0500
Finished: Wed, 14 Aug 2024 17:15:34 -0500
Ready: True
Restart Count: 0
Limits:
cpu: 100m
memory: 10Mi
Requests:
cpu: 10m
memory: 10Mi
Environment:
AWS_STS_REGIONAL_ENDPOINTS: regional
AWS_DEFAULT_REGION: us-west-2
AWS_REGION: us-west-2
AWS_ROLE_ARN: arn:aws:iam::730335560480:role/bpd-implentio-api-6625c2ee8a8a0054-20240814213047385600000003
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
/run from linkerd-proxy-init-xtables-lock (rw)
/var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m2rb7 (ro)
linkerd-proxy:
Container ID: containerd://17afb84cdc2786f3ecefc9928fc25bb0f4c5f5e045c46893b0dea1abf8385206
Image: 730335560480.dkr.ecr.us-west-2.amazonaws.com/github/linkerd/proxy:edge-24.5.1
Image ID: 730335560480.dkr.ecr.us-west-2.amazonaws.com/github/linkerd/proxy@sha256:6ecc3ede913be8014a3f93c34bf6a2e6fbd1f4009f3d39d134b925d609529402
Ports: 4143/TCP, 4191/TCP
Host Ports: 0/TCP, 0/TCP
SeccompProfile: RuntimeDefault
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 14 Aug 2024 17:15:38 -0500
Finished: Wed, 14 Aug 2024 17:15:57 -0500
Ready: False
Restart Count: 0
Limits:
memory: 200Mi
Requests:
memory: 10Mi
Liveness: http-get http://:4191/live delay=10s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:4191/ready delay=2s timeout=1s period=10s #success=1 #failure=3
Startup: http-get http://:4191/ready delay=0s timeout=1s period=1s #success=1 #failure=120
Environment:
_pod_name: bpd-implentio-api-lls4g-build-template-3239905087 (v1:metadata.name)
_pod_ns: cicd (v1:metadata.namespace)
_pod_nodeName: (v1:spec.nodeName)
LINKERD2_PROXY_LOG: warn,linkerd=warn,linkerd2_proxy=warn
LINKERD2_PROXY_LOG_FORMAT: json
LINKERD2_PROXY_DESTINATION_SVC_ADDR: linkerd-dst-headless.linkerd.svc.cluster.local.:8086
LINKERD2_PROXY_DESTINATION_PROFILE_NETWORKS: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8
LINKERD2_PROXY_POLICY_SVC_ADDR: linkerd-policy.linkerd.svc.cluster.local.:8090
LINKERD2_PROXY_POLICY_WORKLOAD: {"ns":"$(_pod_ns)", "pod":"$(_pod_name)"}
LINKERD2_PROXY_INBOUND_DEFAULT_POLICY: all-unauthenticated
LINKERD2_PROXY_POLICY_CLUSTER_NETWORKS: 10.0.0.0/8,100.64.0.0/10,172.16.0.0/12,192.168.0.0/16,fd00::/8
LINKERD2_PROXY_CONTROL_STREAM_INITIAL_TIMEOUT: 3s
LINKERD2_PROXY_CONTROL_STREAM_IDLE_TIMEOUT: 5m
LINKERD2_PROXY_CONTROL_STREAM_LIFETIME: 1h
LINKERD2_PROXY_INBOUND_CONNECT_TIMEOUT: 100ms
LINKERD2_PROXY_OUTBOUND_CONNECT_TIMEOUT: 1000ms
LINKERD2_PROXY_OUTBOUND_DISCOVERY_IDLE_TIMEOUT: 5s
LINKERD2_PROXY_INBOUND_DISCOVERY_IDLE_TIMEOUT: 90s
LINKERD2_PROXY_CONTROL_LISTEN_ADDR: [::]:4190
LINKERD2_PROXY_ADMIN_LISTEN_ADDR: [::]:4191
LINKERD2_PROXY_OUTBOUND_LISTEN_ADDR: 127.0.0.1:4140
LINKERD2_PROXY_OUTBOUND_LISTEN_ADDRS: 127.0.0.1:4140,[::1]:4140
LINKERD2_PROXY_INBOUND_LISTEN_ADDR: [::]:4143
LINKERD2_PROXY_INBOUND_IPS: (v1:status.podIPs)
LINKERD2_PROXY_INBOUND_PORTS:
LINKERD2_PROXY_DESTINATION_PROFILE_SUFFIXES: svc.cluster.local.
LINKERD2_PROXY_INBOUND_ACCEPT_KEEPALIVE: 10000ms
LINKERD2_PROXY_OUTBOUND_CONNECT_KEEPALIVE: 10000ms
LINKERD2_PROXY_INBOUND_SERVER_HTTP2_KEEP_ALIVE_INTERVAL: 10s
LINKERD2_PROXY_INBOUND_SERVER_HTTP2_KEEP_ALIVE_TIMEOUT: 3s
LINKERD2_PROXY_OUTBOUND_SERVER_HTTP2_KEEP_ALIVE_INTERVAL: 10s
LINKERD2_PROXY_OUTBOUND_SERVER_HTTP2_KEEP_ALIVE_TIMEOUT: 3s
LINKERD2_PROXY_INBOUND_PORTS_DISABLE_PROTOCOL_DETECTION: 25,587,3306,4444,5432,6379,9300,11211
LINKERD2_PROXY_DESTINATION_CONTEXT: {"ns":"$(_pod_ns)", "nodeName":"$(_pod_nodeName)", "pod":"$(_pod_name)"}
_pod_sa: (v1:spec.serviceAccountName)
_l5d_ns: linkerd
_l5d_trustdomain: cluster.local
LINKERD2_PROXY_IDENTITY_DIR: /var/run/linkerd/identity/end-entity
LINKERD2_PROXY_IDENTITY_TRUST_ANCHORS: -----BEGIN CERTIFICATE-----
MIIC2TCCAn+gAwIBAgIUV6KYl9l8OAcOtR3R4pJzMVy3WeUwCgYIKoZIzj0EAwIw
ZTESMBAGA1UEChMJcGFuZmFjdHVtMRQwEgYDVQQLEwtlbmdpbmVlcmluZzE5MDcG
A1UEAxMwaHR0cDovL3ZhdWx0LWFjdGl2ZS52YXVsdC5zdmMuY2x1c3Rlci5sb2Nh
bDo4MjAwMB4XDTI0MDYyNTE3MTQ0NFoXDTM0MDYyMzE3MTUxNFowZTESMBAGA1UE
ChMJcGFuZmFjdHVtMRQwEgYDVQQLEwtlbmdpbmVlcmluZzE5MDcGA1UEAxMwaHR0
cDovL3ZhdWx0LWFjdGl2ZS52YXVsdC5zdmMuY2x1c3Rlci5sb2NhbDo4MjAwMFkw
EwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE5JSOnh7YRHIFZf6PdIaNAtMf7sd+0C3z
HFoehYQMl1uI65ZEsycTXDwL4uNgV9zICVxvPioQ3YRnum5uImufOKOCAQswggEH
MA4GA1UdDwEB/wQEAwIBBjAPBgNVHRMBAf8EBTADAQH/MB0GA1UdDgQWBBRQSfHM
PdWZGIUPAhO3ha3aCi16vDAfBgNVHSMEGDAWgBRQSfHMPdWZGIUPAhO3ha3aCi16
vDBWBggrBgEFBQcBAQRKMEgwRgYIKwYBBQUHMAKGOmh0dHA6Ly92YXVsdC1hY3Rp
dmUudmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWw6ODIwMC92MS9wa2kvY2EwTAYDVR0f
BEUwQzBBoD+gPYY7aHR0cDovL3ZhdWx0LWFjdGl2ZS52YXVsdC5zdmMuY2x1c3Rl
ci5sb2NhbDo4MjAwL3YxL3BraS9jcmwwCgYIKoZIzj0EAwIDSAAwRQIgSDYliFwF
uYdW5q/nForHpiyl8cmedJFleBHlSQtTgcsCIQCFEuJtKitF6+80hO8iTHX/m4WU
vJuGjseNzDCFws96bQ==
-----END CERTIFICATE-----
LINKERD2_PROXY_IDENTITY_TOKEN_FILE: /var/run/secrets/tokens/linkerd-identity-token
LINKERD2_PROXY_IDENTITY_SVC_ADDR: linkerd-identity-headless.linkerd.svc.cluster.local.:8080
LINKERD2_PROXY_IDENTITY_LOCAL_NAME: $(_pod_sa).$(_pod_ns).serviceaccount.identity.linkerd.cluster.local
LINKERD2_PROXY_IDENTITY_SVC_NAME: linkerd-identity.linkerd.serviceaccount.identity.linkerd.cluster.local
LINKERD2_PROXY_DESTINATION_SVC_NAME: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
LINKERD2_PROXY_POLICY_SVC_NAME: linkerd-destination.linkerd.serviceaccount.identity.linkerd.cluster.local
AWS_STS_REGIONAL_ENDPOINTS: regional
AWS_DEFAULT_REGION: us-west-2
AWS_REGION: us-west-2
AWS_ROLE_ARN: arn:aws:iam::730335560480:role/bpd-implentio-api-6625c2ee8a8a0054-20240814213047385600000003
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
/var/run/linkerd/identity/end-entity from linkerd-identity-end-entity (rw)
/var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m2rb7 (ro)
/var/run/secrets/tokens from linkerd-identity-token (rw)
init:
Container ID: containerd://320e8f0e93100a661bb8d87f233e4cf0c21f6d9c68b4417f73a632d33bafb222
Image: 730335560480.dkr.ecr.us-west-2.amazonaws.com/quay/argoproj/argoexec:v3.5.5
Image ID: 730335560480.dkr.ecr.us-west-2.amazonaws.com/quay/argoproj/argoexec@sha256:32a568bd1ecb2691a61aa4a646d90b08fe5c4606a2d5cbf264565b1ced98f12b
Port: <none>
Host Port: <none>
Command:
argoexec
init
--loglevel
info
--log-format
json
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 14 Aug 2024 17:15:47 -0500
Finished: Wed, 14 Aug 2024 17:15:47 -0500
Ready: True
Restart Count: 0
Limits:
memory: 70Mi
Requests:
cpu: 10m
memory: 50Mi
Environment:
ARGO_POD_NAME: bpd-implentio-api-lls4g-build-template-3239905087 (v1:metadata.name)
ARGO_POD_UID: (v1:metadata.uid)
GODEBUG: x509ignoreCN=0
ARGO_WORKFLOW_NAME: bpd-implentio-api-lls4g
ARGO_WORKFLOW_UID: 29ed06b1-9745-47f1-ba1c-9475afca6900
ARGO_CONTAINER_NAME: init
ARGO_TEMPLATE: {"name":"build-template","inputs":{},"outputs":{},"metadata":{},"resource":{"action":"create","manifest":"\"apiVersion\": \"argoproj.io/v1/alpha1\"\n\"kind\": \"Workflow\"\n\"metadata\":\n \"generateName\": \"build-\"\n \"namespace\": \"cicd\"\n\"spec\":\n \"arguments\":\n \"parameters\":\n - \"name\": \"git_ref\"\n \"value\": \"main\"\n \"workflowTemplateRef\":\n \"name\": \"build-implentio-api\"\n","successCondition":"status.phase == Succeeded","failureCondition":"status.phase == (Failed, Error)"},"archiveLocation":{"archiveLogs":true,"s3":{"endpoint":"s3.amazonaws.com","bucket":"argo-38fca6aa631185d1","region":"us-west-2","key":"bpd-implentio-api-lls4g/bpd-implentio-api-lls4g-build-template-3239905087"}}}
ARGO_NODE_ID: bpd-implentio-api-lls4g-3239905087
ARGO_INCLUDE_SCRIPT_OUTPUT: false
ARGO_DEADLINE: 2024-08-15T22:14:47Z
ARGO_PROGRESS_FILE: /var/run/argo/progress
ARGO_PROGRESS_PATCH_TICK_DURATION: 1m0s
ARGO_PROGRESS_FILE_TICK_DURATION: 3s
AWS_STS_REGIONAL_ENDPOINTS: regional
AWS_DEFAULT_REGION: us-west-2
AWS_REGION: us-west-2
AWS_ROLE_ARN: arn:aws:iam::730335560480:role/bpd-implentio-api-6625c2ee8a8a0054-20240814213047385600000003
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
/var/run/argo from var-run-argo (rw)
/var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m2rb7 (ro)
Containers:
main:
Container ID: containerd://5435416b714643548504fbe5fa3a0d445f391ba4a781f0f2e1dd89543efbc46c
Image: 730335560480.dkr.ecr.us-west-2.amazonaws.com/quay/argoproj/argoexec:v3.5.5
Image ID: 730335560480.dkr.ecr.us-west-2.amazonaws.com/quay/argoproj/argoexec@sha256:32a568bd1ecb2691a61aa4a646d90b08fe5c4606a2d5cbf264565b1ced98f12b
Port: <none>
Host Port: <none>
Command:
/var/run/argo/argoexec
emissary
--loglevel
info
--log-format
json
--
argoexec
resource
create
State: Terminated
Reason: Error
Exit Code: 137
Started: Wed, 14 Aug 2024 17:15:51 -0500
Finished: Wed, 14 Aug 2024 17:15:55 -0500
Ready: False
Restart Count: 0
Limits:
memory: 70Mi
Requests:
cpu: 10m
memory: 50Mi
Environment:
ARGO_POD_NAME: bpd-implentio-api-lls4g-build-template-3239905087 (v1:metadata.name)
ARGO_POD_UID: (v1:metadata.uid)
GODEBUG: x509ignoreCN=0
ARGO_WORKFLOW_NAME: bpd-implentio-api-lls4g
ARGO_WORKFLOW_UID: 29ed06b1-9745-47f1-ba1c-9475afca6900
ARGO_CONTAINER_NAME: main
ARGO_TEMPLATE: {"name":"build-template","inputs":{},"outputs":{},"metadata":{},"resource":{"action":"create","manifest":"\"apiVersion\": \"argoproj.io/v1/alpha1\"\n\"kind\": \"Workflow\"\n\"metadata\":\n \"generateName\": \"build-\"\n \"namespace\": \"cicd\"\n\"spec\":\n \"arguments\":\n \"parameters\":\n - \"name\": \"git_ref\"\n \"value\": \"main\"\n \"workflowTemplateRef\":\n \"name\": \"build-implentio-api\"\n","successCondition":"status.phase == Succeeded","failureCondition":"status.phase == (Failed, Error)"},"archiveLocation":{"archiveLogs":true,"s3":{"endpoint":"s3.amazonaws.com","bucket":"argo-38fca6aa631185d1","region":"us-west-2","key":"bpd-implentio-api-lls4g/bpd-implentio-api-lls4g-build-template-3239905087"}}}
ARGO_NODE_ID: bpd-implentio-api-lls4g-3239905087
ARGO_INCLUDE_SCRIPT_OUTPUT: false
ARGO_DEADLINE: 2024-08-15T22:14:47Z
ARGO_PROGRESS_FILE: /var/run/argo/progress
ARGO_PROGRESS_PATCH_TICK_DURATION: 1m0s
ARGO_PROGRESS_FILE_TICK_DURATION: 3s
AWS_STS_REGIONAL_ENDPOINTS: regional
AWS_DEFAULT_REGION: us-west-2
AWS_REGION: us-west-2
AWS_ROLE_ARN: arn:aws:iam::730335560480:role/bpd-implentio-api-6625c2ee8a8a0054-20240814213047385600000003
AWS_WEB_IDENTITY_TOKEN_FILE: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
Mounts:
/tmp from tmp-dir-argo (rw,path="0")
/var/run/argo from var-run-argo (rw)
/var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m2rb7 (ro)
Conditions:
Type Status
PodReadyToStartContainers False
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
aws-iam-token:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 86400
var-run-argo:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
tmp-dir-argo:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kube-api-access-m2rb7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
linkerd-proxy-init-xtables-lock:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
linkerd-identity-end-entity:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
SizeLimit: <unset>
linkerd-identity-token:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 86400
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 80s panfactum 0/6 nodes are available: 2 node(s) had untolerated taint {arm64: true}, 2 node(s) had untolerated taint {burstable: true}, 2 node(s) had untolerated taint {spot: true}. preemption: 0/6 nodes are available: 6 Preemption is not helpful for scheduling.
Normal Scheduled 39s panfactum Successfully assigned cicd/bpd-implentio-api-lls4g-build-template-3239905087 to ip-10-0-85-243.us-west-2.compute.internal
Normal Nominated 79s karpenter Pod should schedule on: nodeclaim/on-demand-6lt89
Normal Pulling 35s kubelet Pulling image "730335560480.dkr.ecr.us-west-2.amazonaws.com/github/linkerd/proxy-init:v2.4.0"
Normal Pulled 34s kubelet Successfully pulled image "730335560480.dkr.ecr.us-west-2.amazonaws.com/github/linkerd/proxy-init:v2.4.0" in 1.931s (1.931s including waiting)
Normal Created 33s kubelet Created container linkerd-init
Normal Started 33s kubelet Started container linkerd-init
Normal Pulling 32s kubelet Pulling image "730335560480.dkr.ecr.us-west-2.amazonaws.com/github/linkerd/proxy:edge-24.5.1"
Normal Pulled 30s kubelet Successfully pulled image "730335560480.dkr.ecr.us-west-2.amazonaws.com/github/linkerd/proxy:edge-24.5.1" in 2.427s (2.427s including waiting)
Normal Created 30s kubelet Created container linkerd-proxy
Normal Started 29s kubelet Started container linkerd-proxy
Normal Pulling 27s kubelet Pulling image "730335560480.dkr.ecr.us-west-2.amazonaws.com/quay/argoproj/argoexec:v3.5.5"
Normal Pulled 20s kubelet Successfully pulled image "730335560480.dkr.ecr.us-west-2.amazonaws.com/quay/argoproj/argoexec:v3.5.5" in 6.619s (6.619s including waiting)
Normal Created 20s kubelet Created container init
Normal Started 20s kubelet Started container init
Normal Pulling 17s kubelet Pulling image "730335560480.dkr.ecr.us-west-2.amazonaws.com/quay/argoproj/argoexec:v3.5.5"
Normal Pulled 16s kubelet Successfully pulled image "730335560480.dkr.ecr.us-west-2.amazonaws.com/quay/argoproj/argoexec:v3.5.5" in 680ms (680ms including waiting)
Normal Created 16s kubelet Created container main
Normal Started 16s kubelet Started container main
Normal Killing 10s kubelet Stopping container linkerd-proxy
Thank you. I will take a look and should have a solution tomorrow.
@fullykubed very much appreciated!
@fullykubed if it's difficult to replicate, it would be helpful to have an example of a build-push + terragrunt apply for a corresponding module.
In the works!
The edge.2024-08-15 release should resolve the OOM issue.
Additionally, we have a guide for doing rolling updates: https://panfactum.com/docs/edge/guides/cicd/rolling-deployments.
Finally, we have simplified the recommended way to combine workflows: https://panfactum.com/docs/edge/guides/addons/workflow-engine/triggering-workflows#template-references
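For reference, the template-reference approach lets a single Workflow invoke existing WorkflowTemplates sequentially via a DAG, instead of nesting `resource: create` steps that submit child Workflows (which is what the spec above does). A minimal sketch, assuming the existing `build-implentio-api` and a hypothetical `deploy-implentio-api` WorkflowTemplate each expose an entrypoint template named `entry` (the template names here are illustrative, not confirmed from the issue):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: build-and-deploy-
  namespace: cicd
spec:
  entrypoint: main
  arguments:
    parameters:
      - name: git_ref
        value: main
  templates:
    - name: main
      dag:
        tasks:
          # Run the build by referencing the existing WorkflowTemplate directly.
          - name: build
            templateRef:
              name: build-implentio-api   # existing WorkflowTemplate
              template: entry             # assumed entrypoint template name
          # Deploy only starts after build succeeds; no resource-create
          # wrapper or success/failure conditions needed.
          - name: deploy
            depends: build
            templateRef:
              name: deploy-implentio-api  # hypothetical WorkflowTemplate
              template: entry
```

Because `templateRef` runs the referenced templates inside one Workflow, sequencing and failure propagation are handled by the DAG itself, avoiding the separate child-Workflow pods that were hitting the memory limit.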
Prior Search
What is your question?
Hey, I've been trying to understand how to orchestrate 2 workflows by executing them sequentially, but I'm finding it really hard to get working as I keep getting OOMKilled:
This is the HCL constructing the DAG of tasks. Basically, all I want is to submit those already-created workflows in sequence. I figured there would be a simpler API to achieve this, but that doesn't seem to be the case.
https://panfactum.com/docs/edge/guides/addons/workflow-engine/triggering-workflows
What primary components of the stack does this relate to?
terraform, website
Code of Conduct