jfrog / kubexray

JFrog KubeXray scanner on Kubernetes
Apache License 2.0

slice bounds out of range panic on pod from daemonset #20

Closed · iverberk closed 5 years ago

iverberk commented 5 years ago

When running kubexray on our cluster we get the following panic:

/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:54
/build/kubexray/handler.go:527
/build/kubexray/handler.go:359
/build/kubexray/controller.go:127
/build/kubexray/controller.go:56
/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/wait/wait.go:133
/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/wait/wait.go:134
/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/wait/wait.go:88
/build/kubexray/controller.go:47
/usr/local/go/src/runtime/asm_amd64.s:1337
E0329 18:50:01.174685       1 runtime.go:69] Observed a panic: "slice bounds out of range" (runtime error: slice bounds out of range)
/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/runtime/runtime.go:76
/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/runtime/runtime.go:65
/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/panic.go:522
/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/runtime/runtime.go:58
/usr/local/go/src/runtime/panic.go:522
/usr/local/go/src/runtime/panic.go:54
/build/kubexray/handler.go:527
/build/kubexray/handler.go:359
/build/kubexray/controller.go:127
/build/kubexray/controller.go:56
/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/wait/wait.go:133
/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/wait/wait.go:134
/go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/wait/wait.go:88
/build/kubexray/controller.go:47
/usr/local/go/src/runtime/asm_amd64.s:1337
panic: runtime error: slice bounds out of range [recovered]
        panic: runtime error: slice bounds out of range [recovered]
        panic: runtime error: slice bounds out of range

goroutine 24 [running]:
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/runtime/runtime.go:58 +0x105
panic(0x10b3d60, 0x1cf0b90)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
        /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/runtime/runtime.go:58 +0x105
panic(0x10b3d60, 0x1cf0b90)
        /usr/local/go/src/runtime/panic.go:522 +0x1b5
main.checkResource(0x13d4880, 0xc000112700, 0xc0002cdc00, 0x1, 0xb, 0xf)
        /build/kubexray/handler.go:527 +0x52c
main.(*HandlerImpl).ObjectCreated(0xc00026a480, 0x13d4880, 0xc000112700, 0x11d28c0, 0xc0002cdc00)
        /build/kubexray/handler.go:359 +0xce
main.(*Controller).processNextQueueItem(0xc000209ef0, 0xc000631e00)
        /build/kubexray/controller.go:127 +0x2cf
main.(*Controller).runWorker(0xc000209ef0)
        /build/kubexray/controller.go:56 +0xcb
k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00005d788)
        /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/wait/wait.go:133 +0x54
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000631f88, 0x3b9aca00, 0x0, 0x1, 0xc00008a900)
        /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/wait/wait.go:134 +0xf8
k8s.io/apimachinery/pkg/util/wait.Until(...)
        /go/pkg/mod/k8s.io/apimachinery@v0.0.0-20181121071008-d4f83ca2e260/pkg/util/wait/wait.go:88
main.(*Controller).Run(0xc000209ef0, 0xc00008a900)
        /build/kubexray/controller.go:47 +0x2f1
created by main.main
        /build/kubexray/main.go:165 +0x758

I believe this happens because we have a pod from a daemonset whose name doesn't follow the naming scheme that the checkResource function expects:

func checkResource(client kubernetes.Interface, pod *core_v1.Pod) (string, ResourceType) {
    subs1 := strings.LastIndexByte(pod.Name, '-')
    subs2 := strings.LastIndexByte(pod.Name[:subs1], '-')
    sets := client.AppsV1().StatefulSets(pod.Namespace)
    _, err := sets.Get(pod.Name[:subs1], meta_v1.GetOptions{})
    if err == nil {
        return pod.Name[:subs1], StatefulSet
    }
    log.Debugf("Resource for pod %s is not stateful set %s: %v", pod.Name, pod.Name[:subs1], err)
    deps := client.AppsV1().Deployments(pod.Namespace)
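    // NOTE: when the pod name contains only one dash (e.g. "falco-wzdl4"),
    // subs2 is -1, so the pod.Name[:subs2] slice below panics with
    // "slice bounds out of range".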
    _, err = deps.Get(pod.Name[:subs2], meta_v1.GetOptions{})
    if err == nil {
        return pod.Name[:subs2], Deployment
    }
    log.Debugf("Resource for pod %s is not deployment %s: %v", pod.Name, pod.Name[:subs2], err)
    return "", Unrecognized
}

From the debug logs:

time="2019-03-29T19:02:59Z" level=debug msg=HandlerImpl.ObjectCreated
time="2019-03-29T19:02:59Z" level=debug msg="Resource for pod falco-wzdl4 is not stateful set falco: statefulsets.apps \"falco\" not found"
E0329 19:02:59.457975       1 runtime.go:69] Observed a panic: "slice bounds out of range" (runtime error: slice bounds out of range)

The falco-wzdl4 pod name contains only one dash, so the second LastIndexByte call returns -1 and the pod.Name[:subs2] slice panics. Please advise on this issue.
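
For illustration, here is a minimal standalone reproduction of the failure mode outside kubexray (a hypothetical example, not project code): strings.LastIndexByte returns -1 when the separator is absent, and slicing with a negative index panics at runtime.

package main

import (
    "fmt"
    "strings"
)

func main() {
    name := "falco-wzdl4"
    subs1 := strings.LastIndexByte(name, '-')         // 5: index of the only dash
    subs2 := strings.LastIndexByte(name[:subs1], '-') // -1: "falco" contains no dash
    fmt.Println(subs1, subs2)
    _ = name[:subs2] // panic: runtime error: slice bounds out of range
}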

iverberk commented 5 years ago

@DarthFennec are you going to add the daemonset option to the code, and do you have a timeline for this? We are currently implementing this solution and would like to move forward. Let me know if I can be of any help with this.

gbvanrenswoude commented 5 years ago

We need this; we're running into it as well.

DarthFennec commented 5 years ago

Commit b85af60f1346b0af66c417589618cf546a493525 should fix the error: anything that has fewer than two dashes now returns as unrecognized instead of panicking.
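
For anyone hitting this before upgrading, here is a sketch of the kind of guard described above (an assumed shape, not the actual diff from that commit), reusing the same imports and types as the checkResource function quoted earlier:

func checkResource(client kubernetes.Interface, pod *core_v1.Pod) (string, ResourceType) {
    // Guard: the name trimming below assumes at least two dashes in the pod name
    // (e.g. a hypothetical "mydep-6b8c4d5f7b-x2x9k"); shorter names are reported
    // as unrecognized instead of panicking.
    if strings.Count(pod.Name, "-") < 2 {
        return "", Unrecognized
    }
    subs1 := strings.LastIndexByte(pod.Name, '-')
    subs2 := strings.LastIndexByte(pod.Name[:subs1], '-')
    sets := client.AppsV1().StatefulSets(pod.Namespace)
    if _, err := sets.Get(pod.Name[:subs1], meta_v1.GetOptions{}); err == nil {
        return pod.Name[:subs1], StatefulSet
    }
    deps := client.AppsV1().Deployments(pod.Namespace)
    if _, err := deps.Get(pod.Name[:subs2], meta_v1.GetOptions{}); err == nil {
        return pod.Name[:subs2], Deployment
    }
    return "", Unrecognized
}

The exact placement of the guard may differ in the real commit; the point is that a negative index never reaches the slice expressions.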

I am expecting to add daemonset support, but I'm not certain about the timeline at the moment. I'll let you know when I learn more.

rimusz commented 5 years ago

The fix is released in https://github.com/jfrog/kubexray/releases/tag/0.1.3, and the Helm chart has been updated as well: https://github.com/jfrog/charts/tree/master/stable/kubexray