Closed laurie-kepford closed 2 years ago
I was able to create a script that I could schedule with cron. Actually there are two: one to start a capture every hour and one to stop it at the end of each hour, so I end up with one-hour chunks of data. Hope this helps someone else.
The start script looks like this:
#!/bin/bash
. /root/.bashrc
#test1ks
APP=test1ks
POD=$(KUBECONFIG=/root/.kube/config kubectl get pods -o name --no-headers=true -n "$APP" | cut -c 5-)
KUBECONFIG=/root/.kube/config kubectl sniff "$POD" -n "$APP" -c "backend-$APP" -o /home/ubuntu/wireshark/"$APP"-backend-"$(date +"%d-%m-%Y-%H-%M")".pcap &>/dev/null &
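As an aside on the `cut -c 5-` in that script: `kubectl get pods -o name` prints names with a `pod/` prefix, and `cut` drops the first four characters to leave the bare pod name. A quick sketch with a made-up pod name:

```shell
#!/bin/bash
# 'kubectl get pods -o name' emits names like "pod/test1ks-backend-abc12";
# 'cut -c 5-' keeps everything from character 5 on, stripping the "pod/" prefix.
echo "pod/test1ks-backend-abc12" | cut -c 5-   # prints: test1ks-backend-abc12
```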
The kill script looks like this:
#!/bin/bash
pkill -9 sniff &>/dev/null &
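The two scripts can then be wired up in `crontab -e`; a minimal sketch, assuming the scripts live at the (hypothetical) paths shown:

```
# m  h  dom mon dow  command
0    *  *   *   *    /root/scripts/sniff-start.sh
59   *  *   *   *    /root/scripts/sniff-stop.sh
```

This starts a fresh capture at the top of every hour and kills it at minute 59, giving the one-hour pcap chunks described above.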
I am trying to troubleshoot issues with two containers in the same pod. I have kubectl and sniff installed on my Linux host.
I can run the following and it works:
kubectl sniff pod-id -n namespace -c container1 -o ~/folder/container1-"`date +"%d-%m-%Y-%H-%M"`".pcap
kubectl sniff pod-id -n namespace -c container2 -o ~/folder/container2-"`date +"%d-%m-%Y-%H-%M"`".pcap
However, I have to open two terminal connections to run these, and then a third to look at the files. I need to leave this running until the crash I am trying to troubleshoot happens, which could be days. And if I close the terminal, the job either quits or gets orphaned.
What I would like to do is run each script in the background.
Even better, set up a cronjob to start a fresh job every hour so I can have my file in 1 hour increments.
Any advice on how to do this?
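For the background-without-cron case, `nohup` plus bash's `disown` keeps a job alive after the terminal closes. A minimal sketch, using `sleep` as a stand-in for the actual long-running `kubectl sniff ...` command:

```shell
#!/bin/bash
# 'sleep 5' stands in for the real capture command, e.g.:
#   kubectl sniff pod-id -n namespace -c container1 -o ~/folder/container1.pcap
nohup sleep 5 >/tmp/sniff.log 2>&1 &
pid=$!
disown            # remove the job from the shell's job table so it isn't signaled on exit
echo "capture running as pid $pid"
```

With this, closing the terminal no longer kills or orphans the capture; `pkill sniff` (as in the kill script above) still stops the real thing.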