litmuschaos / litmus

Litmus helps SREs and developers practice chaos engineering in a Cloud-native way. Chaos experiments are published at the ChaosHub (https://hub.litmuschaos.io). Community notes are at https://hackmd.io/a4Zu_sH4TZGeih-xCimi3Q
https://litmuschaos.io
Apache License 2.0

workflow stuck on "revert-chaos" #3330

Open yogeshkk opened 3 years ago

yogeshkk commented 3 years ago

Question

I am trying to configure Litmus for the first time. I have installed it on one cluster and installed an agent on another. The issue is that all my workflows get stuck in the "revert-chaos" phase; if I skip that step, the workflow succeeds.

I am just trying a custom workflow with the pod-delete experiment.

I am on the latest version, 2.2.0.


I have made the observations below.

The workflow pod custom-chaos-workflow-1636442211-198396899 has a log line saying it deleted the resource: main chaosengine.litmuschaos.io "pod-deletebb2zc" deleted

But the chaos-operator-ce then tries to reconcile the same resource; since the workflow has already deleted it, the operator reports that it cannot be found, stays stuck until it times out, and the workflow is then marked as failed.

{"level":"error","ts":1636442915.6952085,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"chaosengine-controller","request":"apps/pod-deletebb2zc","error":"the server could not find the requested resource","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:88"}
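One possible workaround while this is investigated (a hedged sketch, not an official fix): instead of letting the revert step delete the ChaosEngine outright, set the engine's `engineState` to `stop` first so the operator can run its own revert logic, and only clear finalizers manually if deletion still hangs. Assuming the engine name and namespace from the logs above:

```shell
# Ask the operator to revert gracefully before removal (engineState: stop).
kubectl patch chaosengine pod-deletebb2zc -n apps \
  --type merge -p '{"spec":{"engineState":"stop"}}'

# Only if deletion still hangs afterwards: clear finalizers so the object
# can be garbage-collected. Use with care; this bypasses operator cleanup.
kubectl patch chaosengine pod-deletebb2zc -n apps \
  --type merge -p '{"metadata":{"finalizers":null}}'
```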

I am not sure whether I am making a mistake. Below is my workflow:

kind: Workflow
apiVersion: argoproj.io/v1alpha1
metadata:
  name: custom-chaos-workflow-1636442211
  namespace: apps
  creationTimestamp: null
  labels:
    cluster_id: 91d858a8-9c43-43f5-bd74-6536887d7970
    subject: custom-chaos-workflow_apps
    workflow_id: 3451efc6-a480-4c07-a9f8-94625229f21d
    workflows.argoproj.io/controller-instanceid: 91d858a8-9c43-43f5-bd74-6536887d7970
spec:
  templates:
    - name: custom-chaos
      arguments: {}
      inputs: {}
      outputs: {}
      metadata: {}
      steps:
        - - name: install-chaos-experiments
            template: install-chaos-experiments
            arguments: {}
        - - name: pod-delete
            template: pod-delete
            arguments: {}
        - - name: revert-chaos
            template: revert-chaos
            arguments: {}
    - name: install-chaos-experiments
      arguments: {}
      inputs:
        artifacts:
          - name: pod-delete
            path: /tmp/pod-delete.yaml
            raw:
              data: >
                apiVersion: litmuschaos.io/v1alpha1

                description:
                  message: |
                    Deletes a pod belonging to a deployment/statefulset/daemonset
                kind: ChaosExperiment

                metadata:
                  name: pod-delete
                  labels:
                    name: pod-delete
                    app.kubernetes.io/part-of: litmus
                    app.kubernetes.io/component: chaosexperiment
                    app.kubernetes.io/version: 2.1.1
                spec:
                  definition:
                    scope: Namespaced
                    permissions:
                      - apiGroups:
                          - ""
                          - apps
                          - apps.openshift.io
                          - argoproj.io
                          - batch
                          - litmuschaos.io
                        resources:
                          - deployments
                          - jobs
                          - pods
                          - pods/log
                          - replicationcontrollers
                          - deployments
                          - statefulsets
                          - daemonsets
                          - replicasets
                          - deploymentconfigs
                          - rollouts
                          - pods/exec
                          - events
                          - chaosengines
                          - chaosexperiments
                          - chaosresults
                        verbs:
                          - create
                          - list
                          - get
                          - patch
                          - update
                          - delete
                          - deletecollection
                    image: litmuschaos/go-runner:2.1.1
                    imagePullPolicy: Always
                    args:
                      - -c
                      - ./experiments -name pod-delete
                    command:
                      - /bin/bash
                    env:
                      - name: TOTAL_CHAOS_DURATION
                        value: "15"
                      - name: RAMP_TIME
                        value: ""
                      - name: FORCE
                        value: "true"
                      - name: CHAOS_INTERVAL
                        value: "5"
                      - name: PODS_AFFECTED_PERC
                        value: ""
                      - name: LIB
                        value: litmus
                      - name: TARGET_PODS
                        value: ""
                      - name: SEQUENCE
                        value: parallel
                    labels:
                      name: pod-delete
                      app.kubernetes.io/part-of: litmus
                      app.kubernetes.io/component: experiment-job
                      app.kubernetes.io/version: 2.1.1
      outputs: {}
      metadata: {}
      container:
        name: ""
        image: litmuschaos/k8s:latest
        command:
          - sh
          - -c
        args:
          - kubectl apply -f /tmp/pod-delete.yaml -n
            {{workflow.parameters.adminModeNamespace}} |  sleep 30
        resources: {}
    - name: pod-delete
      arguments: {}
      inputs:
        artifacts:
          - name: pod-delete
            path: /tmp/chaosengine-pod-delete.yaml
            raw:
              data: |
                apiVersion: litmuschaos.io/v1alpha1
                kind: ChaosEngine
                metadata:
                  namespace: "{{workflow.parameters.adminModeNamespace}}"
                  generateName: pod-delete
                  labels:
                    instance_id: 10ce4f8a-bd69-407d-ad6a-44d090af5544
                    context: pod-delete_apps
                    workflow_name: custom-chaos-workflow-1636442211
                spec:
                  appinfo:
                    appns: default
                    applabel: app=jenkins
                    appkind: statefulset
                  jobCleanUpPolicy: retain
                  engineState: active
                  chaosServiceAccount: litmus-admin
                  experiments:
                    - name: pod-delete
                      spec:
                        components:
                          env:
                            - name: TOTAL_CHAOS_DURATION
                              value: "30"
                            - name: CHAOS_INTERVAL
                              value: "10"
                            - name: FORCE
                              value: "false"
                            - name: PODS_AFFECTED_PERC
                              value: ""
                        probe: []
                  annotationCheck: "false"
      outputs: {}
      metadata:
        labels:
          weight: "10"
      container:
        name: ""
        image: litmuschaos/litmus-checker:latest
        args:
          - -file=/tmp/chaosengine-pod-delete.yaml
          - -saveName=/tmp/engine-name
        resources: {}
    - name: revert-chaos
      arguments: {}
      inputs: {}
      outputs: {}
      metadata: {}
      container:
        name: ""
        image: litmuschaos/k8s:latest
        command:
          - sh
          - -c
        args:
          - "kubectl delete chaosengine -l 'instance_id in
            (10ce4f8a-bd69-407d-ad6a-44d090af5544, )' -n
            {{workflow.parameters.adminModeNamespace}} "
        resources: {}
  entrypoint: custom-chaos
  arguments:
    parameters:
      - name: adminModeNamespace
        value: apps
  serviceAccountName: argo-chaos
  podGC:
    strategy: OnWorkflowCompletion
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
status:
  startedAt: null
  finishedAt: null
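The revert-chaos step in the manifest above just runs a `kubectl delete` with a label selector. To see whether that delete itself is what hangs, it can be reproduced by hand with a timeout (a minimal diagnostic sketch, assuming kubectl access to the agent cluster and the same `instance_id` label):

```shell
# Run the same selector the revert-chaos template uses, with --timeout so a
# hanging delete (e.g. a finalizer that never clears) fails fast instead of
# blocking indefinitely.
kubectl delete chaosengine \
  -l 'instance_id in (10ce4f8a-bd69-407d-ad6a-44d090af5544)' \
  -n apps --timeout=60s

# If it hangs, check whether a finalizer is holding the resource.
kubectl get chaosengine -n apps \
  -o custom-columns=NAME:.metadata.name,FINALIZERS:.metadata.finalizers
```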
oumkale commented 3 years ago

Hi @yogeshkk,

Could you please share the logs of the workflow controller, the chaos operator, and the experiment?

yogeshkk commented 3 years ago

Thanks @oumkale

Below are the logs.

  1. The chaos workflow is still running: custom-chaos-workflow-1636442211-1983968995 2/2 Running 0 85m. Logs of the wait container:

    time="2021-11-09T07:20:48.263Z" level=info msg="Starting Workflow Executor" version=v2.11.0
    time="2021-11-09T07:20:48.268Z" level=info msg="Creating a K8sAPI executor"
    time="2021-11-09T07:20:48.268Z" level=info msg="Executor (version: v2.11.0, build_date: 2020-09-17T22:51:06Z) initialized (pod: apps/custom-chaos-workflow-1636442211-1983968995) with template:\n{\"name\":\"revert-chaos\",\"arguments\":{},\"inputs\":{},\"outputs\":{},\"metadata\":{},\"container\":{\"name\":\"\",\"image\":\"litmuschaos/k8s:latest\",\"command\":[\"sh\",\"-c\"],\"args\":[\"kubectl delete chaosengine -l 'instance_id in (10ce4f8a-bd69-407d-ad6a-44d090af5544, )' -n apps \"],\"resources\":{}}}"
    time="2021-11-09T07:20:48.269Z" level=info msg="Waiting on main container"
    time="2021-11-09T07:20:50.000Z" level=info msg="main container started with container ID: ff9eb1e0c75a645ed3be4f105cd24d5fad801a053b00446a38fcc05801376e2b"
    time="2021-11-09T07:20:50.000Z" level=info msg="Starting annotations monitor"
    time="2021-11-09T07:20:50.009Z" level=info msg="Waiting for container ff9eb1e0c75a645ed3be4f105cd24d5fad801a053b00446a38fcc05801376e2b to complete"
    time="2021-11-09T07:20:50.009Z" level=info msg="Starting to wait completion of containerID ff9eb1e0c75a645ed3be4f105cd24d5fad801a053b00446a38fcc05801376e2b ..."
    time="2021-11-09T07:20:50.009Z" level=info msg="Starting deadline monitor"
    time="2021-11-09T07:25:48.269Z" level=info msg="Alloc=7177 TotalAlloc=28725 Sys=70592 NumGC=8 Goroutines=10"
    time="2021-11-09T07:30:48.269Z" level=info msg="Alloc=8990 TotalAlloc=42961 Sys=70592 NumGC=11 Goroutines=10"
    time="2021-11-09T07:35:48.269Z" level=info msg="Alloc=6584 TotalAlloc=57254 Sys=70592 NumGC=15 Goroutines=10"
    time="2021-11-09T07:40:48.269Z" level=info msg="Alloc=8260 TotalAlloc=71468 Sys=70592 NumGC=18 Goroutines=10"
    time="2021-11-09T07:45:48.269Z" level=info msg="Alloc=5377 TotalAlloc=85715 Sys=70592 NumGC=22 Goroutines=10"
    time="2021-11-09T07:50:48.269Z" level=info msg="Alloc=6985 TotalAlloc=99975 Sys=70592 NumGC=25 Goroutines=10"
    time="2021-11-09T07:55:48.269Z" level=info msg="Alloc=8826 TotalAlloc=114220 Sys=70592 NumGC=28 Goroutines=10"
    time="2021-11-09T08:00:48.269Z" level=info msg="Alloc=6432 TotalAlloc=128453 Sys=70592 NumGC=32 Goroutines=9"
    time="2021-11-09T08:05:48.269Z" level=info msg="Alloc=8085 TotalAlloc=142684 Sys=70592 NumGC=35 Goroutines=9"
    time="2021-11-09T08:10:48.269Z" level=info msg="Alloc=5527 TotalAlloc=156912 Sys=70592 NumGC=39 Goroutines=9"
    time="2021-11-09T08:15:48.269Z" level=info msg="Alloc=7282 TotalAlloc=171133 Sys=70592 NumGC=42 Goroutines=9"
    time="2021-11-09T08:20:48.269Z" level=info msg="Alloc=9005 TotalAlloc=185347 Sys=70592 NumGC=45 Goroutines=9"
    time="2021-11-09T08:25:48.269Z" level=info msg="Alloc=6320 TotalAlloc=199573 Sys=70592 NumGC=49 Goroutines=9"
    time="2021-11-09T08:30:48.269Z" level=info msg="Alloc=7982 TotalAlloc=213802 Sys=70592 NumGC=52 Goroutines=9"
    time="2021-11-09T08:35:48.269Z" level=info msg="Alloc=5650 TotalAlloc=228025 Sys=70592 NumGC=56 Goroutines=9"
    time="2021-11-09T08:40:48.269Z" level=info msg="Alloc=7498 TotalAlloc=242238 Sys=70592 NumGC=59 Goroutines=9"
    time="2021-11-09T08:45:48.269Z" level=info msg="Alloc=5307 TotalAlloc=256462 Sys=70592 NumGC=63 Goroutines=9"

    Logs of the main container:

    chaosengine.litmuschaos.io "pod-deletebb2zc" deleted
  2. Logs of the workflow controller:

    time="2021-11-09T07:31:03Z" level=info msg="SyncManager initialized successfully"
    time="2021-11-09T07:31:04Z" level=info msg="Processing workflow" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:04Z" level=info msg="node custom-chaos-workflow-1636442211-392630960 phase Succeeded -> Running" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:04Z" level=info msg="Step group node &NodeStatus{ID:custom-chaos-workflow-1636442211-392630960,Name:custom-chaos-workflow-1636442211[0],DisplayName:[0],Type:StepGroup,TemplateName:custom-chaos,TemplateRef:nil,Phase:Running,BoundaryID:custom-chaos-workflow-1636442211,Message:,StartedAt:2021-11-09 07:16:59 +0000 UTC,FinishedAt:2021-11-09 07:17:34 +0000 UTC,PodIP:,Daemoned:nil,Inputs:nil,Outputs:nil,Children:[custom-chaos-workflow-1636442211-2863068657],OutboundNodes:[],StoredTemplateID:,WorkflowTemplateName:,TemplateScope:local/custom-chaos-workflow-1636442211,ResourcesDuration:ResourcesDuration{},HostNodeName:,MemoizationStatus:nil,} successful" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:04Z" level=info msg="node custom-chaos-workflow-1636442211-392630960 phase Running -> Succeeded" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:04Z" level=info msg="node custom-chaos-workflow-1636442211-1533361957 phase Succeeded -> Running" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:04Z" level=info msg="SG Outbound nodes of custom-chaos-workflow-1636442211-2863068657 are [custom-chaos-workflow-1636442211-2863068657]" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:04Z" level=info msg="Step group node &NodeStatus{ID:custom-chaos-workflow-1636442211-1533361957,Name:custom-chaos-workflow-1636442211[1],DisplayName:[1],Type:StepGroup,TemplateName:custom-chaos,TemplateRef:nil,Phase:Running,BoundaryID:custom-chaos-workflow-1636442211,Message:,StartedAt:2021-11-09 07:17:34 +0000 UTC,FinishedAt:2021-11-09 07:20:46 +0000 UTC,PodIP:,Daemoned:nil,Inputs:nil,Outputs:nil,Children:[custom-chaos-workflow-1636442211-2297452788],OutboundNodes:[],StoredTemplateID:,WorkflowTemplateName:,TemplateScope:local/custom-chaos-workflow-1636442211,ResourcesDuration:ResourcesDuration{},HostNodeName:,MemoizationStatus:nil,} successful" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:04Z" level=info msg="node custom-chaos-workflow-1636442211-1533361957 phase Running -> Succeeded" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:04Z" level=info msg="SG Outbound nodes of custom-chaos-workflow-1636442211-2297452788 are [custom-chaos-workflow-1636442211-2297452788]" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:04Z" level=info msg="Workflow step group node custom-chaos-workflow-1636442211-1533214862 not yet completed" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:04Z" level=info msg="Workflow update successful" namespace=apps phase=Running resourceVersion=219629543 workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:05Z" level=info msg="Processing workflow" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:05Z" level=info msg="node custom-chaos-workflow-1636442211-392630960 phase Succeeded -> Running" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:05Z" level=info msg="Step group node &NodeStatus{ID:custom-chaos-workflow-1636442211-392630960,Name:custom-chaos-workflow-1636442211[0],DisplayName:[0],Type:StepGroup,TemplateName:custom-chaos,TemplateRef:nil,Phase:Running,BoundaryID:custom-chaos-workflow-1636442211,Message:,StartedAt:2021-11-09 07:16:59 +0000 UTC,FinishedAt:2021-11-09 07:17:34 +0000 UTC,PodIP:,Daemoned:nil,Inputs:nil,Outputs:nil,Children:[custom-chaos-workflow-1636442211-2863068657],OutboundNodes:[],StoredTemplateID:,WorkflowTemplateName:,TemplateScope:local/custom-chaos-workflow-1636442211,ResourcesDuration:ResourcesDuration{},HostNodeName:,MemoizationStatus:nil,} successful" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:05Z" level=info msg="node custom-chaos-workflow-1636442211-392630960 phase Running -> Succeeded" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:05Z" level=info msg="node custom-chaos-workflow-1636442211-1533361957 phase Succeeded -> Running" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:05Z" level=info msg="SG Outbound nodes of custom-chaos-workflow-1636442211-2863068657 are [custom-chaos-workflow-1636442211-2863068657]" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:05Z" level=info msg="Step group node &NodeStatus{ID:custom-chaos-workflow-1636442211-1533361957,Name:custom-chaos-workflow-1636442211[1],DisplayName:[1],Type:StepGroup,TemplateName:custom-chaos,TemplateRef:nil,Phase:Running,BoundaryID:custom-chaos-workflow-1636442211,Message:,StartedAt:2021-11-09 07:17:34 +0000 UTC,FinishedAt:2021-11-09 07:20:46 +0000 UTC,PodIP:,Daemoned:nil,Inputs:nil,Outputs:nil,Children:[custom-chaos-workflow-1636442211-2297452788],OutboundNodes:[],StoredTemplateID:,WorkflowTemplateName:,TemplateScope:local/custom-chaos-workflow-1636442211,ResourcesDuration:ResourcesDuration{},HostNodeName:,MemoizationStatus:nil,} successful" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:05Z" level=info msg="node custom-chaos-workflow-1636442211-1533361957 phase Running -> Succeeded" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:05Z" level=info msg="SG Outbound nodes of custom-chaos-workflow-1636442211-2297452788 are [custom-chaos-workflow-1636442211-2297452788]" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:05Z" level=info msg="Workflow step group node custom-chaos-workflow-1636442211-1533214862 not yet completed" namespace=apps workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:31:05Z" level=info msg="Workflow update successful" namespace=apps phase=Running resourceVersion=219629543 workflow=custom-chaos-workflow-1636442211
    time="2021-11-09T07:36:03Z" level=info msg="Alloc=5134 TotalAlloc=22697 Sys=70080 NumGC=9 Goroutines=162"
    W1109 07:39:18.429545       1 reflector.go:328] pkg/mod/k8s.io/client-go@v0.17.8/tools/cache/reflector.go:105: watch of *v1.ConfigMap ended with: too old resource version: 219633463 (219635942)
    time="2021-11-09T07:41:03Z" level=info msg="Alloc=5433 TotalAlloc=23068 Sys=70080 NumGC=11 Goroutines=162"
    time="2021-11-09T07:46:03Z" level=info msg="Alloc=5262 TotalAlloc=23146 Sys=70080 NumGC=14 Goroutines=162"
    W1109 07:47:06.454720       1 reflector.go:328] pkg/mod/k8s.io/client-go@v0.17.8/tools/cache/reflector.go:105: watch of *v1.ConfigMap ended with: too old resource version: 219636637 (219638936)
    time="2021-11-09T07:51:03Z" level=info msg="Alloc=5257 TotalAlloc=23349 Sys=70080 NumGC=16 Goroutines=162"
    time="2021-11-09T07:56:03Z" level=info msg="Alloc=5247 TotalAlloc=23508 Sys=70080 NumGC=19 Goroutines=162"
    W1109 07:56:35.467112       1 reflector.go:328] pkg/mod/k8s.io/client-go@v0.17.8/tools/cache/reflector.go:105: watch of *v1.ConfigMap ended with: too old resource version: 219639620 (219642575)
    time="2021-11-09T08:01:03Z" level=info msg="Alloc=5395 TotalAlloc=23784 Sys=70080 NumGC=21 Goroutines=162"
    W1109 08:04:43.482587       1 reflector.go:328] pkg/mod/k8s.io/client-go@v0.17.8/tools/cache/reflector.go:105: watch of *v1.ConfigMap ended with: too old resource version: 219643241 (219645670)
    time="2021-11-09T08:06:03Z" level=info msg="Alloc=5296 TotalAlloc=23986 Sys=70080 NumGC=24 Goroutines=162"
    time="2021-11-09T08:11:03Z" level=info msg="Alloc=5293 TotalAlloc=24124 Sys=70080 NumGC=26 Goroutines=162"
    W1109 08:11:14.493778       1 reflector.go:328] pkg/mod/k8s.io/client-go@v0.17.8/tools/cache/reflector.go:105: watch of 
  3. Logs of the chaos operator:

    {"level":"info","ts":1636442909.4029894,"logger":"cmd","msg":"Go Version: go1.17.1"}
    {"level":"info","ts":1636442909.403055,"logger":"cmd","msg":"Go OS/Arch: linux/amd64"}
    {"level":"info","ts":1636442909.403061,"logger":"cmd","msg":"Version of operator-sdk: v0.15.2"}
    {"level":"info","ts":1636442909.4030652,"logger":"leader","msg":"Trying to become the leader."}
    {"level":"info","ts":1636442910.340051,"logger":"leader","msg":"No pre-existing lock was found."}
    {"level":"info","ts":1636442910.3945394,"logger":"leader","msg":"Became the leader."}
    {"level":"info","ts":1636442911.57779,"logger":"metrics","msg":"Metrics Service object updated","Service.Name":"chaos-operator-metrics","Service.Namespace":"apps"}
    {"level":"info","ts":1636442912.4855528,"logger":"controller-runtime.metrics","msg":"metrics server is starting to listen","addr":"0.0.0.0:8383"}
    {"level":"info","ts":1636442912.4859805,"logger":"cmd","msg":"Starting the Chaos-Operator..."}
    {"level":"info","ts":1636442912.486305,"logger":"controller-runtime.manager","msg":"starting metrics server","path":"/metrics"}
    {"level":"info","ts":1636442912.4864068,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"chaosengine-controller","source":"kind source: /, Kind="}
    {"level":"info","ts":1636442912.5876007,"logger":"controller-runtime.controller","msg":"Starting EventSource","controller":"chaosengine-controller","source":"kind source: /, Kind="}
    {"level":"info","ts":1636442912.6885102,"logger":"controller-runtime.controller","msg":"Starting Controller","controller":"chaosengine-controller"}
    {"level":"info","ts":1636442912.6885645,"logger":"controller-runtime.controller","msg":"Starting workers","controller":"chaosengine-controller","worker count":1}
    {"level":"info","ts":1636442912.688661,"logger":"controller_chaosengine","msg":"Reconciling ChaosEngine","Request.Namespace":"litmus","Request.Name":"pod-network-lossdpc6p"}
    {"level":"info","ts":1636442912.688717,"logger":"controller_chaosengine","msg":"Checking if there are any chaos resources to be deleted for","chaosengine":"pod-network-lossdpc6p"}
    {"level":"error","ts":1636442913.6916902,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"chaosengine-controller","request":"litmus/pod-network-lossdpc6p","error":"the server could not find the requested resource","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:88"}
    {"level":"info","ts":1636442914.6925983,"logger":"controller_chaosengine","msg":"Reconciling ChaosEngine","Request.Namespace":"apps","Request.Name":"pod-deletebb2zc"}
    {"level":"info","ts":1636442914.6926668,"logger":"controller_chaosengine","msg":"Checking if there are any chaos resources to be deleted for","chaosengine":"pod-deletebb2zc"}
    {"level":"error","ts":1636442915.6952085,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"chaosengine-controller","request":"apps/pod-deletebb2zc","error":"the server could not find the requested resource","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:88"}
    {"level":"info","ts":1636442916.696363,"logger":"controller_chaosengine","msg":"Reconciling ChaosEngine","Request.Namespace":"litmus","Request.Name":"pod-deleteh9tj7"}
    {"level":"info","ts":1636442916.6964343,"logger":"controller_chaosengine","msg":"Checking if there are any chaos resources to be deleted for","chaosengine":"pod-deleteh9tj7"}
    {"level":"error","ts":1636442917.6990752,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"chaosengine-controller","request":"litmus/pod-deleteh9tj7","error":"the server could not find the requested resource","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:88"}
    {"level":"info","ts":1636442918.7001317,"logger":"controller_chaosengine","msg":"Reconciling ChaosEngine","Request.Namespace":"litmus","Request.Name":"pod-deletelb6zd"}
    {"level":"info","ts":1636442918.7002358,"logger":"controller_chaosengine","msg":"Checking if there are any chaos resources to be deleted for","chaosengine":"pod-deletelb6zd"}
    {"level":"error","ts":1636442919.7059863,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"chaosengine-controller","request":"litmus/pod-deletelb6zd","error":"the server could not find the requested resource","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:88"}
    {"level":"info","ts":1636442920.707175,"logger":"controller_chaosengine","msg":"Reconciling ChaosEngine","Request.Namespace":"litmus","Request.Name":"node-cpu-hogfcg77"}
    {"level":"info","ts":1636442920.7072418,"logger":"controller_chaosengine","msg":"Checking if there are any chaos resources to be deleted for","chaosengine":"node-cpu-hogfcg77"}
    {"level":"error","ts":1636442921.7098908,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"chaosengine-controller","request":"litmus/node-cpu-hogfcg77","error":"the server could not find the requested resource","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:88"}
    {"level":"info","ts":1636442922.7106764,"logger":"controller_chaosengine","msg":"Reconciling ChaosEngine","Request.Namespace":"litmus","Request.Name":"catalogue-disk-fill6lbcd"}
    {"level":"info","ts":1636442922.7107477,"logger":"controller_chaosengine","msg":"Checking if there are any chaos resources to be deleted for","chaosengine":"catalogue-disk-fill6lbcd"}
    {"level":"error","ts":1636442923.7128215,"logger":"controller-runtime.controller","msg":"Reconciler error","controller":"chaosengine-controller","request":"litmus/catalogue-disk-fill6lbcd","error":"the server could not find the requested resource","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\t/go/pkg/mod/github.com/go-logr/zapr@v0.1.1/zapr.go:128\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:258\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:232\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).worker\n\t/go/pkg/mod/sigs.k8s.io/controller-runtime@v0.4.0/pkg/internal/controller/controller.go:211\nk8s.io/apimachinery/pkg/util/wait.JitterUntil.func1\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:152\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:153\nk8s.io/apimachinery/pkg/util/wait.Until\n\t/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20191004115801-a2eda9f80ab8/pkg/util/wait/wait.go:88"}
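The same "the server could not find the requested resource" error repeats across several engines in different namespaces, which suggests the operator may be hitting a missing or mismatched API on this cluster (for example, a Litmus CRD that was never installed or was removed) rather than a problem with one particular engine. A quick check, assuming kubectl access to the agent cluster:

```shell
# Verify the Litmus CRDs the operator reconciles against are installed.
kubectl get crd chaosengines.litmuschaos.io \
  chaosexperiments.litmuschaos.io chaosresults.litmuschaos.io

# Confirm the litmuschaos.io API group is actually being served.
kubectl api-resources --api-group=litmuschaos.io
```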
yogeshkk commented 3 years ago

There are two workflow pods; their output is below.

First one

init time="2021-11-09T07:17:36.827Z" level=info msg="Starting Workflow Executor" version=v2.11.0
init time="2021-11-09T07:17:36.831Z" level=info msg="Creating a K8sAPI executor"
init time="2021-11-09T07:17:36.831Z" level=info msg="Executor (version: v2.11.0, build_date: 2020-09-17T22:51:06Z) initialized (pod: apps/custom-chaos-workflow-1636442211-2297452788) with template:\n{\"name\":\"pod-delete\",\"arguments\":{},\"inputs\":{\"artifacts\":[{\"name\":\"pod-delete\",\"path\":\"/tmp/chaosengine-pod-delete.yaml\",\"raw\":{\"data\":\"apiVersion: litmuschaos.io/v1alpha1\\nkind: ChaosEngine\\nmetadata:\\n  namespace: \\\"apps\\\"\\n  generateName: pod-delete\\n  labels:\\n    instance_id: 10ce4f8a-bd69-407d-ad6a-44d090af5544\\n    context: pod-delete_apps\\n    workflow_name: custom-chaos-workflow-1636442211\\nspec:\\n  appinfo:\\n    appns: default\\n    applabel: app=jenkins\\n    appkind: statefulset\\n  jobCleanUpPolicy: retain\\n  engineState: active\\n  chaosServiceAccount: litmus-admin\\n  experiments:\\n    - name: pod-delete\\n      spec:\\n        components:\\n          env:\\n            - name: TOTAL_CHAOS_DURATION\\n              value: \\\"30\\\"\\n            - name: CHAOS_INTERVAL\\n              value: \\\"10\\\"\\n            - name: FORCE\\n              value: \\\"false\\\"\\n            - name: PODS_AFFECTED_PERC\\n              value: \\\"\\\"\\n        probe: []\\n  annotationCheck: \\\"false\\\"\\n\"}}]},\"outputs\":{},\"metadata\":{\"labels\":{\"weight\":\"10\"}},\"container\":{\"name\":\"\",\"image\":\"litmuschaos/litmus-checker:latest\",\"args\":[\"-file=/tmp/chaosengine-pod-delete.yaml\",\"-saveName=/tmp/engine-name\"],\"resources\":{}}}"
init time="2021-11-09T07:17:36.832Z" level=info msg="Start loading input artifacts..."
init time="2021-11-09T07:17:36.832Z" level=info msg="Downloading artifact: pod-delete"
init time="2021-11-09T07:17:36.832Z" level=info msg="Detecting if /argo/inputs/artifacts/pod-delete.tmp is a tarball"
init time="2021-11-09T07:17:36.832Z" level=info msg="Successfully download file: /argo/inputs/artifacts/pod-delete"
init time="2021-11-09T07:17:36.832Z" level=info msg="Alloc=5443 TotalAlloc=12959 Sys=70336 NumGC=4 Goroutines=4"
wait time="2021-11-09T07:17:38.546Z" level=info msg="Starting Workflow Executor" version=v2.11.0
wait time="2021-11-09T07:17:38.551Z" level=info msg="Creating a K8sAPI executor"
wait time="2021-11-09T07:17:38.551Z" level=info msg="Executor (version: v2.11.0, build_date: 2020-09-17T22:51:06Z) initialized (pod: apps/custom-chaos-workflow-1636442211-2297452788) with template:\n{\"name\":\"pod-delete\",\"arguments\":{},\"inputs\":{\"artifacts\":[{\"name\":\"pod-delete\",\"path\":\"/tmp/chaosengine-pod-delete.yaml\",\"raw\":{\"data\":\"apiVersion: litmuschaos.io/v1alpha1\\nkind: ChaosEngine\\nmetadata:\\n  namespace: \\\"apps\\\"\\n  generateName: pod-delete\\n  labels:\\n    instance_id: 10ce4f8a-bd69-407d-ad6a-44d090af5544\\n    context: pod-delete_apps\\n    workflow_name: custom-chaos-workflow-1636442211\\nspec:\\n  appinfo:\\n    appns: default\\n    applabel: app=jenkins\\n    appkind: statefulset\\n  jobCleanUpPolicy: retain\\n  engineState: active\\n  chaosServiceAccount: litmus-admin\\n  experiments:\\n    - name: pod-delete\\n      spec:\\n        components:\\n          env:\\n            - name: TOTAL_CHAOS_DURATION\\n              value: \\\"30\\\"\\n            - name: CHAOS_INTERVAL\\n              value: \\\"10\\\"\\n            - name: FORCE\\n              value: \\\"false\\\"\\n            - name: PODS_AFFECTED_PERC\\n              value: \\\"\\\"\\n        probe: []\\n  annotationCheck: \\\"false\\\"\\n\"}}]},\"outputs\":{},\"metadata\":{\"labels\":{\"weight\":\"10\"}},\"container\":{\"name\":\"\",\"image\":\"litmuschaos/litmus-checker:latest\",\"args\":[\"-file=/tmp/chaosengine-pod-delete.yaml\",\"-saveName=/tmp/engine-name\"],\"resources\":{}}}"
wait time="2021-11-09T07:17:38.551Z" level=info msg="Waiting on main container"
wait time="2021-11-09T07:17:40.943Z" level=info msg="main container started with container ID: 5f41bc481c953f1030a481a5c4836b6fbfb9b14ffa00ba01e9c788e3b977d85c"
wait time="2021-11-09T07:17:40.943Z" level=info msg="Starting annotations monitor"
wait time="2021-11-09T07:17:40.948Z" level=info msg="Waiting for container 5f41bc481c953f1030a481a5c4836b6fbfb9b14ffa00ba01e9c788e3b977d85c to complete"
wait time="2021-11-09T07:17:40.948Z" level=info msg="Starting to wait completion of containerID 5f41bc481c953f1030a481a5c4836b6fbfb9b14ffa00ba01e9c788e3b977d85c ..."
wait time="2021-11-09T07:17:40.948Z" level=info msg="Starting deadline monitor"
wait time="2021-11-09T07:20:44.952Z" level=info msg="ContainerID \"5f41bc481c953f1030a481a5c4836b6fbfb9b14ffa00ba01e9c788e3b977d85c\" is terminated: &ContainerStatus{Name:main,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2021-11-09 07:17:40 +0000 UTC,FinishedAt:2021-11-09 07:20:44 +0000 UTC,ContainerID:docker://5f41bc481c953f1030a481a5c4836b6fbfb9b14ffa00ba01e9c788e3b977d85c,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:732440210582.dkr.ecr.eu-west-1.amazonaws.com/external/litmuschaos/litmus-checker:latest,ImageID:docker-pullable://732440210582.dkr.ecr.eu-west-1.amazonaws.com/external/litmuschaos/litmus-checker@sha256:502c02298827921de2dbf5ea9619ccede3af21e0c2d46abd050e950b4a6bb4a1,ContainerID:docker://5f41bc481c953f1030a481a5c4836b6fbfb9b14ffa00ba01e9c788e3b977d85c,Started:nil,}"
wait time="2021-11-09T07:20:44.952Z" level=info msg="Main container completed"
wait time="2021-11-09T07:20:44.952Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
wait time="2021-11-09T07:20:44.952Z" level=info msg="Capturing script exit code"
wait time="2021-11-09T07:20:44.952Z" level=info msg="Getting exit code of 5f41bc481c953f1030a481a5c4836b6fbfb9b14ffa00ba01e9c788e3b977d85c"
wait time="2021-11-09T07:20:44.952Z" level=info msg="Annotations monitor stopped"
wait time="2021-11-09T07:20:44.954Z" level=info msg="No output parameters"
wait time="2021-11-09T07:20:44.954Z" level=info msg="No output artifacts"
wait time="2021-11-09T07:20:44.954Z" level=info msg="Annotating pod with output"
wait time="2021-11-09T07:20:44.985Z" level=info msg="Deadline monitor stopped"
wait time="2021-11-09T07:20:45.081Z" level=info msg="Killing sidecars"
wait time="2021-11-09T07:20:45.084Z" level=info msg="Alloc=7810 TotalAlloc=28734 Sys=70592 NumGC=8 Goroutines=8"
main W1109 07:17:40.145930       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
main 2021/11/09 07:17:44 
main ChaosEngine Name : pod-deletebb2zc
main 2021/11/09 07:17:44 Created Resource Details: 
main {pod-deletebb2zc litmuschaos.io v1alpha1 ChaosEngine apps }
main W1109 07:17:44.384555       1 client_config.go:552] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
main 2021/11/09 07:17:44 Starting Chaos Checker in 1min
main 2021/11/09 07:18:44 Checking if Engine Completed or Stopped
main 2021/11/09 07:19:44 Checking if Engine Completed or Stopped
main 2021/11/09 07:20:44 Checking if Engine Completed or Stopped
main 2021/11/09 07:20:44 [*] ENGINE COMPLETED

Second one

wait time="2021-11-09T07:17:01.777Z" level=info msg="Starting Workflow Executor" version=v2.11.0
wait time="2021-11-09T07:17:01.782Z" level=info msg="Creating a K8sAPI executor"
wait time="2021-11-09T07:17:01.782Z" level=info msg="Executor (version: v2.11.0, build_date: 2020-09-17T22:51:06Z) initialized (pod: apps/custom-chaos-workflow-1636442211-2863068657) with template:\n{\"name\":\"install-chaos-experiments\",\"arguments\":{},\"inputs\":{\"artifacts\":[{\"name\":\"pod-delete\",\"path\":\"/tmp/pod-delete.yaml\",\"raw\":{\"data\":\"apiVersion: litmuschaos.io/v1alpha1\\ndescription:\\n  message: |\\n    Deletes a pod belonging to a deployment/statefulset/daemonset\\nkind: ChaosExperiment\\nmetadata:\\n  name: pod-delete\\n  labels:\\n    name: pod-delete\\n    app.kubernetes.io/part-of: litmus\\n    app.kubernetes.io/component: chaosexperiment\\n    app.kubernetes.io/version: 2.1.1\\nspec:\\n  definition:\\n    scope: Namespaced\\n    permissions:\\n      - apiGroups:\\n          - \\\"\\\"\\n          - apps\\n          - apps.openshift.io\\n          - argoproj.io\\n          - batch\\n          - litmuschaos.io\\n        resources:\\n          - deployments\\n          - jobs\\n          - pods\\n          - pods/log\\n          - replicationcontrollers\\n          - deployments\\n          - statefulsets\\n          - daemonsets\\n          - replicasets\\n          - deploymentconfigs\\n          - rollouts\\n          - pods/exec\\n          - events\\n          - chaosengines\\n          - chaosexperiments\\n          - chaosresults\\n        verbs:\\n          - create\\n          - list\\n          - get\\n          - patch\\n          - update\\n          - delete\\n          - deletecollection\\n    image: litmuschaos/go-runner:2.1.1\\n    imagePullPolicy: Always\\n    args:\\n      - -c\\n      - ./experiments -name pod-delete\\n    command:\\n      - /bin/bash\\n    env:\\n      - name: TOTAL_CHAOS_DURATION\\n        value: \\\"15\\\"\\n      - name: RAMP_TIME\\n        value: \\\"\\\"\\n      - name: FORCE\\n        value: \\\"true\\\"\\n      - name: CHAOS_INTERVAL\\n        value: \\\"5\\\"\\n      - name: 
PODS_AFFECTED_PERC\\n        value: \\\"\\\"\\n      - name: LIB\\n        value: litmus\\n      - name: TARGET_PODS\\n        value: \\\"\\\"\\n      - name: SEQUENCE\\n        value: parallel\\n    labels:\\n      name: pod-delete\\n      app.kubernetes.io/part-of: litmus\\n      app.kubernetes.io/component: experiment-job\\n      app.kubernetes.io/version: 2.1.1\\n\"}}]},\"outputs\":{},\"metadata\":{},\"container\":{\"name\":\"\",\"image\":\"litmuschaos/k8s:latest\",\"command\":[\"sh\",\"-c\"],\"args\":[\"kubectl apply -f /tmp/pod-delete.yaml -n apps |  sleep 30\"],\"resources\":{}}}"
wait time="2021-11-09T07:17:01.782Z" level=info msg="Waiting on main container"
wait time="2021-11-09T07:17:03.372Z" level=info msg="main container started with container ID: d92afcc839d4071a131fde84cf5d5c38e4fc7032e48aae26a10d6a9399ddb62f"
wait time="2021-11-09T07:17:03.372Z" level=info msg="Starting annotations monitor"
wait time="2021-11-09T07:17:03.377Z" level=info msg="Waiting for container d92afcc839d4071a131fde84cf5d5c38e4fc7032e48aae26a10d6a9399ddb62f to complete"
wait time="2021-11-09T07:17:03.377Z" level=info msg="Starting to wait completion of containerID d92afcc839d4071a131fde84cf5d5c38e4fc7032e48aae26a10d6a9399ddb62f ..."
wait time="2021-11-09T07:17:03.377Z" level=info msg="Starting deadline monitor"
wait time="2021-11-09T07:17:34.383Z" level=info msg="ContainerID \"d92afcc839d4071a131fde84cf5d5c38e4fc7032e48aae26a10d6a9399ddb62f\" is terminated: &ContainerStatus{Name:main,State:ContainerState{Waiting:nil,Running:nil,Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2021-11-09 07:17:03 +0000 UTC,FinishedAt:2021-11-09 07:17:33 +0000 UTC,ContainerID:docker://d92afcc839d4071a131fde84cf5d5c38e4fc7032e48aae26a10d6a9399ddb62f,},},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:732440210582.dkr.ecr.eu-west-1.amazonaws.com/external/litmuschaos/k8s:latest,ImageID:docker-pullable://732440210582.dkr.ecr.eu-west-1.amazonaws.com/external/litmuschaos/k8s@sha256:b83033ee4b4d4fa3c1395e110ef3c7c3f81b487a2bc03581ee37bad3d8381ffd,ContainerID:docker://d92afcc839d4071a131fde84cf5d5c38e4fc7032e48aae26a10d6a9399ddb62f,Started:nil,}"
wait time="2021-11-09T07:17:34.383Z" level=info msg="Main container completed"
wait time="2021-11-09T07:17:34.383Z" level=info msg="No Script output reference in workflow. Capturing script output ignored"
wait time="2021-11-09T07:17:34.383Z" level=info msg="Capturing script exit code"
wait time="2021-11-09T07:17:34.383Z" level=info msg="Getting exit code of d92afcc839d4071a131fde84cf5d5c38e4fc7032e48aae26a10d6a9399ddb62f"
wait time="2021-11-09T07:17:34.383Z" level=info msg="Annotations monitor stopped"
wait time="2021-11-09T07:17:34.387Z" level=info msg="No output parameters"
wait time="2021-11-09T07:17:34.387Z" level=info msg="No output artifacts"
wait time="2021-11-09T07:17:34.387Z" level=info msg="Annotating pod with output"
wait time="2021-11-09T07:17:34.402Z" level=info msg="Killing sidecars"
wait time="2021-11-09T07:17:34.408Z" level=info msg="Alloc=5476 TotalAlloc=18071 Sys=70336 NumGC=6 Goroutines=9"
(Screenshot attached: 2021-11-09 at 2:22 PM)