litmuschaos / litmus-go


Delete chaosresults as part of cleanup. #396

Open · rbalagopalakrishna opened this issue 3 years ago

rbalagopalakrishna commented 3 years ago

BUG REPORT

What happened: Deleting the chaosengine only deletes the pods (runner, helper, experiment pod) that were created during the chaos test.

What you expected to happen: It should also delete the chaosresults generated by the test. For example:


```
# kubectl delete chaosengine -n litmus network-chaos-1 network-chaos-2
chaosengine.litmuschaos.io "network-chaos-1" deleted
chaosengine.litmuschaos.io "network-chaos-2" deleted

# kubectl get pods -n litmus
NAME                                READY   STATUS    RESTARTS   AGE
chaos-operator-ce-c7cc65966-zz5n4   1/1     Running   0          6h57m

# kubectl get chaosresults -n litmus
NAME                                  AGE
network-chaos-1-pod-network-latency   5m9s
network-chaos-2-pod-network-latency   3m26s
```
ksatchit commented 3 years ago

chaosresult is a dedicated custom resource, @rbalagopalakrishna. There is a case for it to exist independently (of the chaosengine or its associated pods) in order to track failure/success history or to generate useful reports.

IMO a custom script/post-hook can be set up if you are interested in removing them.
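For instance, here is a minimal sketch of such a cleanup script, assuming the default `<engine-name>-<experiment-name>` chaosresult naming visible in the output above (the engine and namespace names are just illustrative):

```sh
#!/bin/sh
# Illustrative post-hook: delete a chaosengine, then remove the chaosresults
# it produced, relying on the engine-name prefix in chaosresult names.
ENGINE="network-chaos-1"
NAMESPACE="litmus"

kubectl delete chaosengine -n "$NAMESPACE" "$ENGINE"

# chaosresult names are prefixed with the engine name, so match on that prefix
kubectl get chaosresults -n "$NAMESPACE" -o name \
  | grep "^chaosresult.litmuschaos.io/${ENGINE}-" \
  | xargs -r kubectl delete -n "$NAMESPACE"
```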

mdnyeemakhtar commented 3 years ago

@ksatchit in our use case we generate a unique chaosengine name for each execution, so a new chaosresult is created every time and no history is maintained anyway. We want the results to be deleted along with the engine. For now we are handling it the way you suggested, with a custom script, but we would like a more robust, integrated solution in the engine itself. Could there be a flag in the engine to delete the results along with it? It could default to false, and anyone who needs it (like us) could enable it — see the sketch below.
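For illustration only, this is one possible shape for such a flag; `deleteChaosResults` is NOT an existing ChaosEngine field, just a sketch of the proposal:

```sh
# Hypothetical only: "deleteChaosResults" does not exist in the ChaosEngine
# spec today; this merely illustrates the proposed opt-in behaviour.
cat > engine.yaml <<'EOF'
apiVersion: litmuschaos.io/v1alpha1
kind: ChaosEngine
metadata:
  name: network-chaos-1
  namespace: litmus
spec:
  engineState: active
  # proposed: when true, deleting this engine also deletes its chaosresults
  deleteChaosResults: false   # default false; opt in by setting true
EOF
```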

ksatchit commented 3 years ago

@mdnyeemakhtar I see. That is an interesting case. Let me get back on this.

mdnyeemakhtar commented 3 years ago

> @mdnyeemakhtar I see. That is an interesting case. Let me get back on this.

Thank you @ksatchit