kubeshop / testkube

☸️ Kubernetes-native testing framework for test execution and orchestration
https://testkube.io

Unable to set custom jobTemplate for Test #3562

Closed AndrewUnderwoodAtFanatics closed 1 year ago

AndrewUnderwoodAtFanatics commented 1 year ago

Describe the bug

I'm attempting to follow the docs on how to change the job template for a test by creating a Test in my cluster with a custom jobTemplate. Ultimately I want to do this to get an idea of what my Test CRD should look like in order to set a custom jobTemplate with higher memory limits for the Job pod.

I've followed the doc to create the provided Cypress Test; here is the CRD the CLI command created in my cluster:

apiVersion: tests.testkube.io/v3
kind: Test
metadata:
  name: template-test
  namespace: testkube
spec:
  type: cypress/project
  content:
    type: git
    repository:
      type: git
      uri: https://github.com/kubeshop/testkube-example-cypress-project.git
      branch: main
      path: cypress
  executionRequest:
    jobTemplate: "apiVersion: batch/v1\nkind: Job\nspec:\n  template:\n    spec:\n      containers:\n        - name: {{ .Name }}\n          image: {{ .Image }}\n          imagePullPolicy: Always\n          command:\n            - \"/bin/runner\"\n            - \"{{ .Jsn }}\"\n          volumeMounts:\n            - name: data-volume\n              mountPath: /data\n          resources:\n            limits:\n              memory: 128Mi\n"

However, when I execute this Test through the web UI, the dashboard gets the following 500 error back from the testkube API server:

test execution failed: merging job spec templates: yaml: line 10: did not find expected '-' indicator, context: null

In the testkube-api-server logs I see the following stacktrace:

github.com/kubeshop/testkube/pkg/server.(*HTTPServer).Error
    /build/pkg/server/httpserver.go:76
github.com/kubeshop/testkube/internal/app/api/v1.(*TestkubeAPI).ExecuteTestsHandler.func1
    /build/internal/app/api/v1/executions.go:96
github.com/gofiber/fiber/v2.(*App).next
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/router.go:132
github.com/gofiber/fiber/v2.(*Ctx).Next
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/ctx.go:945
github.com/kubeshop/testkube/internal/app/api/v1.(*TestkubeAPI).AuthHandler.func1
    /build/internal/app/api/v1/handlers.go:47
github.com/gofiber/fiber/v2.(*Ctx).Next
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/ctx.go:942
github.com/gofiber/fiber/v2/middleware/cors.New.func1
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/middleware/cors/cors.go:141
github.com/gofiber/fiber/v2.(*App).next
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/router.go:132
github.com/gofiber/fiber/v2.(*Ctx).Next
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/ctx.go:945
github.com/gofiber/fiber/v2/middleware/pprof.New.func1
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/middleware/pprof/pprof.go:41
github.com/gofiber/fiber/v2.(*Ctx).Next
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/ctx.go:942
github.com/kubeshop/testkube/pkg/server.(*HTTPServer).Init.func1
    /build/pkg/server/httpserver.go:46
github.com/gofiber/fiber/v2.(*App).next
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/router.go:132
github.com/gofiber/fiber/v2.(*Ctx).Next
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/ctx.go:945
github.com/gofiber/fiber/v2/middleware/pprof.New.func1
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/middleware/pprof/pprof.go:41
github.com/gofiber/fiber/v2.(*Ctx).Next
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/ctx.go:942
github.com/kubeshop/testkube/pkg/server.(*HTTPServer).Init.func1
    /build/pkg/server/httpserver.go:46
github.com/gofiber/fiber/v2.(*App).next
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/router.go:132
github.com/gofiber/fiber/v2.(*App).handler
    /go/pkg/mod/github.com/gofiber/fiber/v2@v2.39.0/router.go:159
github.com/valyala/fasthttp.(*Server).serveConn
    /go/pkg/mod/github.com/valyala/fasthttp@v1.44.0/server.go:2372
github.com/valyala/fasthttp.(*workerPool).workerFunc
    /go/pkg/mod/github.com/valyala/fasthttp@v1.44.0/workerpool.go:224
github.com/valyala/fasthttp.(*workerPool).getCh.func1
    /go/pkg/mod/github.com/valyala/fasthttp@v1.44.0/workerpool.go:196

To Reproduce

Steps to reproduce the behavior:

  1. Insert the above Test CRD into a cluster
  2. Run the test
  3. See error
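The steps above, sketched as CLI commands (the file name test-crd.yaml is an assumption, and the testkube kubectl plugin must be installed):

```shell
# Apply the Test CRD from the report (file name is an assumption)
kubectl apply -f test-crd.yaml -n testkube

# Trigger an execution; equivalent to running the test from the dashboard
kubectl testkube run test template-test
```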

Expected behavior

Is there a more correct syntax for specifying a custom jobTemplate in a Test CRD, or is there a bug at play here?

Version / Cluster

I'm running version 1.9.231 of the testkube Helm chart.

AndrewUnderwoodAtFanatics commented 1 year ago

Awesome! This is working for me now. For posterity's sake, here's what a Test CRD with a custom jobTemplate looks like:

apiVersion: tests.testkube.io/v3
kind: Test
metadata:
  name: template-test
  namespace: testkube
spec:
  type: cypress/project
  content:
    type: git
    repository:
      type: git
      uri: https://github.com/kubeshop/testkube-example-cypress-project.git
      branch: main
      path: cypress
  executionRequest:
    jobTemplate: "apiVersion: batch/v1\nkind: Job\nspec:\n  template:\n    spec:\n      containers:\n        - name: {{ .Name }}\n          image: {{ .Image }}\n          imagePullPolicy: Always\n          command:\n            - \"/bin/runner\"\n            - '{{ .Jsn }}'\n          resources:\n            limits:\n              memory: 128Mi\n"
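The operative change from the failing template is the quoting around {{ .Jsn }}: the template expands it to a JSON document, which is full of double quotes. Inside a YAML double-quoted scalar those inner quotes terminate the string early; inside a single-quoted scalar they are literal. A stdlib-only sketch (the payload shape here is hypothetical, not testkube's real execution JSON):

```python
import json

# Hypothetical execution payload; the real {{ .Jsn }} content is
# produced internally by testkube.
payload = json.dumps({"id": "abc123", "testName": "template-test"})

# The serialized JSON uses double quotes throughout.
assert '"' in payload and "'" not in payload

# After template substitution into the jobTemplate:
broken = f'- "{payload}"'  # inner double quotes end the YAML scalar early
fixed = f"- '{payload}'"   # double quotes are inert inside single quotes

print(fixed)
```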

If you're following a GitOps process and store CRDs in Git, you can store the CRD with a multiline block scalar for the jobTemplate to improve readability:

apiVersion: tests.testkube.io/v3
kind: Test
metadata:
  name: template-test
  namespace: testkube
spec:
  type: cypress/project
  content:
    type: git
    repository:
      type: git
      uri: https://github.com/kubeshop/testkube-example-cypress-project.git
      branch: main
      path: cypress
  executionRequest:
    jobTemplate: |
      apiVersion: batch/v1
      kind: Job
      spec:
        template:
          spec:
            containers:
              - name: {{ .Name }}
                image: {{ .Image }}
                imagePullPolicy: Always
                command:
                  - "/bin/runner"
                  - '{{ .Jsn }}'
                resources:
                  limits:
                    memory: 128Mi

However, the K8s API server won't return the CRD in this multiline format once it has been inserted into the cluster; it condenses the jobTemplate field into a single-line string like the first example.
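That condensed, single-line form is only cosmetic: the flow-style string with \n escapes and the | block scalar decode to the same jobTemplate text, so nothing changes semantically. A minimal stdlib sketch of the equivalence (using a shortened template for brevity):

```python
import textwrap

# Single-line form, as the K8s API server echoes the field back
flow_style = "apiVersion: batch/v1\nkind: Job\nspec:\n  template: {}\n"

# Block-scalar form, as stored in Git for readability
block_style = textwrap.dedent("""\
    apiVersion: batch/v1
    kind: Job
    spec:
      template: {}
""")

# Once the YAML layer is decoded, both are the same string
assert flow_style == block_style
print(flow_style == block_style)  # → True
```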

vsukhin commented 1 year ago

You can create your test using the CLI as in the example and provide the --crd-only flag. It will print the CRD version of the test.
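A sketch of that workflow, with the repository details taken from the example above (the job-template.yaml and test-crd.yaml file names are assumptions):

```shell
# Print the Test CRD instead of creating the resource in the cluster
kubectl testkube create test \
  --name template-test \
  --type cypress/project \
  --git-uri https://github.com/kubeshop/testkube-example-cypress-project.git \
  --git-branch main \
  --git-path cypress \
  --job-template job-template.yaml \
  --crd-only > test-crd.yaml
```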