Open nayuta-ai opened 1 year ago
I encountered an issue where a Go unit test passes even though the run is actually an error case.
$ go test vpa_test.go
ok command-line-arguments 0.074s
The Go test passes, but the output below shows it is actually an error case: CPU usage is reported as 0 for every container.
Container Name: vpa-container
CPU usage: 0
Memory usage: 841252864
Container Name: no-vpa-container
CPU usage: 0
Memory usage: 487424
Container Name: no-vpa-container
CPU usage: 0
Memory usage: 774144
Container Name: vpa-container
CPU usage: 0
Memory usage: 840982528
Container Name: vpa-container
CPU usage: 0
Memory usage: 79966208
Container Name: no-vpa-container
CPU usage: 0
Memory usage: 577536
Container Name: no-vpa-container
CPU usage: 0
Memory usage: 524288
Container Name: vpa-container
CPU usage: 0
Memory usage: 840994816
https://github.com/nayuta-ai/cloud_storage/commit/47bdda68a7a4e61bf9d39a6e1b57b1ece306feb3
First, this error was caused by a faulty type conversion of the fetched CPU value. CPU resources are float values, but my previous code converted them to int64, which truncated them to 0. My fix was to convert the CPU resources to inf.Dec instead of int64. As a result, I obtained the test results I expected.
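A minimal stdlib-only sketch of the truncation bug (the sample value is taken from the failing log below; the actual fix uses inf.Dec / resource.Quantity for exact decimal arithmetic rather than float64):

```go
package main

import "fmt"

func main() {
	// Sample CPU reading in cores, taken from the failing test log
	// (0.418555525 cores).
	cpuCores := 0.418555525

	// Bug: converting the float value to int64 truncates the
	// fractional cores to 0, so the test compares against 0 and
	// "passes" on garbage data.
	fmt.Println(int64(cpuCores)) // 0

	// Keeping the fractional part, e.g. in millicores, preserves the
	// real usage. (The real fix converts to inf.Dec instead.)
	fmt.Println(int64(cpuCores * 1000)) // 418
}
```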
$ go test ./test -v
=== RUN TestVpa
vpa_test.go:34: CPU resources is out of range.:0.418555525
vpa_test.go:34: CPU resources is out of range.:0.427232439
vpa_test.go:34: CPU resources is out of range.:0.420437131
vpa_test.go:37: Memory resources is out of range.:62353408
vpa_test.go:34: CPU resources is out of range.:0.401137166
--- FAIL: TestVpa (0.06s)
FAIL
FAIL cloud/test 0.069s
FAIL
Since a unit test is ultimately supposed to pass, revising the testing method remains an open issue.
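For reference, the failing assertions above correspond to a band check along these lines (the inRange helper and the 0.0–0.4 bounds are hypothetical illustrations, not the actual vpa_test.go code):

```go
package main

import "fmt"

// inRange is a hypothetical helper mirroring the kind of band check
// vpa_test.go applies to fetched resource values; the bounds used
// below are illustrative, not the project's real expected values.
func inRange(v, lo, hi float64) bool {
	return lo <= v && v <= hi
}

func main() {
	cpu := 0.418555525 // sample value from the failing test log
	if !inRange(cpu, 0.0, 0.4) {
		fmt.Printf("CPU resources is out of range.:%v\n", cpu)
	}
}
```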
https://github.com/nayuta-ai/cloud_storage/pull/6/commits/57758ddcc1aafa877bf619ea6dd2d494924da5ee
An unexpected result occurred when executing the following stress test code.
stress test code
func stressTest() {
	// Connect to Kubernetes
	config, client, err := connectToKubernetes()
	if err != nil {
		log.Println(err)
	}
	kubeclient := connectToDeploy(client, "default")
	// Create Pods
	err = createPod(*testObject, kubeclient)
	if err != nil {
		log.Println(err)
	}
	time.Sleep(60 * time.Second)
	// Fetch pod lists to get pod names and container names
	pods, err := fetchPodList(client)
	if err != nil {
		log.Println(err)
	}
	// Fetch container metrics
	cpu, memory, err := fetchMetrics(config, pods[0].Spec.Containers[0].Name)
	if err != nil {
		log.Println(err)
	}
	fmt.Println(cpu[0])
	fmt.Println(memory[0])
	// Test Spec (TODO)
	// Execute stress command
	stdin := os.Stdin
	stdout := os.Stdout
	stderr := os.Stderr
	err = execCommand(config, client, "stress -m 1", pods[0], stdin, stdout, stderr)
	if err != nil {
		log.Println(err)
	}
	// Fetch container metrics after placing a load
	cpu, memory, err = fetchMetrics(config, pods[0].Spec.Containers[0].Name)
	if err != nil {
		log.Println(err)
	}
	fmt.Println(cpu[0])
	fmt.Println(memory[0])
	// Test Spec (TODO)
	time.Sleep(10 * time.Second)
	err = deletePod(kubeclient, "sample-vpa-deployment")
	if err != nil {
		log.Println(err)
	}
}
$ go run ./test
2022/10/21 17:23:18 deployments.apps "sample-vpa-deployment" already exists
0.000036560
475136
stress: info: [14] dispatching hogs: 0 cpu, 0 io, 1 vm, 0 hdd
I expected the stress process to terminate and the whole stress test to finish just by running the above, but the run hangs after the stress info line. This suggests the cause lies in the implementation of the execCommand function.
- execCommand
func execCommand(config rest.Config, clientset clientset.Clientset, command string, pod corev1.Pod, stdin io.Reader, stdout io.Writer, stderr io.Writer) error {
	cmd := []string{
		"sh",
		"-c",
		command,
	}
	req := clientset.CoreV1().RESTClient().Post().Namespace(pod.Namespace).
		Name(pod.Name).Resource("pods").SubResource("exec")
	var option = &corev1.PodExecOptions{
		Command: cmd,
		Stdin:   true,
		Stdout:  true,
		Stderr:  true,
		TTY:     true,
	}
	if stdin == nil {
		option.Stdin = false
	}
	req.VersionedParams(option, scheme.ParameterCodec)
	// NewSPDYExecutor expects a *rest.Config, so pass the address.
	exec, err := remotecommand.NewSPDYExecutor(&config, "POST", req.URL())
	if err != nil {
		return err
	}
	err = exec.Stream(remotecommand.StreamOptions{
		Stdin:  stdin,
		Stdout: stdout,
		Stderr: stderr,
	})
	if err != nil {
		return err
	}
	return nil
}
If anyone knows where the cause is, please comment.
The "stress" command runs indefinitely, which prevented the "execCommand" function from returning. I can solve this in two ways: one is to add a flag that makes the command exit so the container stops; the other is to run the call in a goroutine. The former does not capture process logs, while the latter can print them on the command line. Since I want to confirm that the stress process is working, I used a goroutine.
go execCommand(config, client, "stress -m 1 --vm-bytes 52428800 --vm-hang 0", pods[0], stdin, stdout, stderr)
deae06bdcacf6e8d4711b38d130fe1cdea8f0da2
Background
The application changes resources via VPA. The purpose of this issue is to implement a unit test for resources, in order to check whether the resources fetched through the API are valid.
Goal
This issue will be closed when I implement the API and merge the PR.
Approach
Overview
First, I will implement the API, which fetches the pod's resources using client-go.
To Do
Deadline
10/15
References
https://stackoverflow.com/questions/52763291/get-current-resource-usage-of-a-pod-in-kubernetes-with-go-client
https://dev.to/narasimha1997/create-kubernetes-jobs-in-golang-using-k8s-client-go-api-59ej
https://github.com/kubernetes/kubernetes/blob/master/staging/src/k8s.io/metrics/pkg/client/clientset/versioned/typed/metrics/v1beta1/podmetrics.go
Notes
None