tektoncd / pipeline

A cloud-native Pipeline resource.
https://tekton.dev
Apache License 2.0

Error: invalid memory address or nil pointer dereference #7249

Closed. elalo-deloitte closed this issue 1 year ago.

elalo-deloitte commented 1 year ago

Expected Behavior

I have a pipeline Task that uses a container image with podman and oscap-podman installed. The task is supposed to load a container image from a .tar file and then run an OSCAP scan of it.

Here's the actual script that gets run.

        podman image load -i $OUTPUT_DIR/oci-$IMAGE_OS_NAME.$IMAGE_OS_VERSION.$IMAGE_APP_NAME.$TAG.tar
        oscap-podman $IMAGE_OS_NAME/$IMAGE_OS_VERSION/$IMAGE_APP_NAME:$TAG xccdf eval \
        --report $OUTPUT_DIR/$TAG.beforeScan.html \
        --results $OUTPUT_DIR/$TAG.beforeScan.xml \
        --profile $OSCAP_PROFILE \
        --fetch-remote-resources /usr/share/xml/scap/ssg/content/$OSCAP_SSG

Actual Behavior

The task successfully loads the image but then errors out with

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x556ec8948936]

Steps to Reproduce the Problem

  1. Run a task using a container with podman and oscap-podman
  2. Load an image from a tar file to scan.

Additional Info

Task Log

oci-redhat.ubi8.php.1.0.0.tar
Getting image source signatures
Copying blob sha256:d7a8bea30a8bc436eed3b41aab29bd189b72c56838e6296b56cd34b37d85c343
Copying blob sha256:8268b815a24672102db073391fffee83298e7b02d3078eaaaa97b810431c5616
Copying blob sha256:45221e4f431f97bf6cf8786366c0eeebc7f756604139c3e48896b656272b22ca
Copying blob sha256:bd9ddc54bea929a22b334e73e026d4136e5b73f5cc29942896c72e4ece69b13d
Copying blob sha256:bd9ddc54bea929a22b334e73e026d4136e5b73f5cc29942896c72e4ece69b13d
Copying blob sha256:bd9ddc54bea929a22b334e73e026d4136e5b73f5cc29942896c72e4ece69b13d
Copying blob sha256:0fdaad9d60349ab6fd65bf887dc2d8f5f17a690f6f34bb6d648ea81d7a99fd60
Copying blob sha256:794f40bd99fe4d5459aa0706c4e87ed28abecbca2f70bf89024c14c4d7ac4bfb
Copying blob sha256:a275c0aab72116ac7d5fa91392841b4357b81c3722a22641dad77780826e3f10
Copying blob sha256:30f9c3bfcea450c499104c4a6a0328c8373a23648c1950f0dbced4391ea5849d
Copying config sha256:c3818ac53f07f08da3bdb3de6d228cc9446e477089c262a59fc416b0f4924e4c
Writing manifest to image destination
Storing signatures
Loaded image: localhost/redhat/ubi8/php:1.0.0
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x556ec8948936]

goroutine 1 [running]:
panic({0x556ec91121e0, 0x556ec9f16370})
    /usr/lib/golang/src/runtime/panic.go:987 +0x3ba fp=0xc0004eb5f0 sp=0xc0004eb530 pc=0x556ec7a818ba
runtime.panicmem(...)
    /usr/lib/golang/src/runtime/panic.go:260
runtime.sigpanic()
    /usr/lib/golang/src/runtime/signal_unix.go:839 +0x2f6 fp=0xc0004eb640 sp=0xc0004eb5f0 pc=0x556ec7a98e36
github.com/containers/podman/libpod.(*storageService).MountContainerImage(0x0, {0xc000457a00?, 0x556ec8703290?})
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/libpod/storage.go:219 +0x36 fp=0xc0004eb6f8 sp=0xc0004eb640 pc=0x556ec8948936
github.com/containers/podman/libpod.(*Container).mount(0xc0007af000)
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/libpod/container_internal.go:2197 +0xe5 fp=0xc0004eb7d0 sp=0xc0004eb6f8 pc=0x556ec88a3be5
github.com/containers/podman/libpod.(*Container).Mount(0xc0007af000?)
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/libpod/container_api.go:439 +0xe5 fp=0xc0004eb848 sp=0xc0004eb7d0 pc=0x556ec8876ac5
github.com/containers/podman/pkg/domain/infra/abi.(*ContainerEngine).ContainerMount(0xc00012a068, {0x0?, 0x0?}, {0xc0003c63b0, 0x1, 0x0?}, {0x0, {0x0, 0x0}, 0x0, ...})
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/pkg/domain/infra/abi/containers.go:1347 +0x945 fp=0xc0004ebb68 sp=0xc0004eb848 pc=0x556ec89bcfa5
github.com/containers/podman/cmd/podman/containers.mount(0x556ec9f31fa0?, {0xc0003648d0, 0x1, 0x1?})
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/cmd/podman/containers/mount.go:90 +0x193 fp=0xc0004ebcd0 sp=0xc0004ebb68 pc=0x556ec8afaf13
github.com/containers/podman/vendor/github.com/spf13/cobra.(*Command).execute(0x556ec9f31fa0, {0xc00011a170, 0x1, 0x1})
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/vendor/github.com/spf13/cobra/command.go:916 +0x862 fp=0xc0004ebe08 sp=0xc0004ebcd0 pc=0x556ec7ffc4c2
github.com/containers/podman/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x556ec9f53660)
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/vendor/github.com/spf13/cobra/command.go:1044 +0x3bd fp=0xc0004ebec0 sp=0xc0004ebe08 pc=0x556ec7ffcd3d
github.com/containers/podman/vendor/github.com/spf13/cobra.(*Command).Execute(...)
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/vendor/github.com/spf13/cobra/command.go:968
github.com/containers/podman/vendor/github.com/spf13/cobra.(*Command).ExecuteContext(...)
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/vendor/github.com/spf13/cobra/command.go:961
main.Execute()
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/cmd/podman/root.go:107 +0xcc fp=0xc0004ebf50 sp=0xc0004ebec0 pc=0x556ec8c16ccc
main.main()
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/cmd/podman/main.go:40 +0x7c fp=0xc0004ebf80 sp=0xc0004ebf50 pc=0x556ec8c1611c
runtime.main()
    /usr/lib/golang/src/runtime/proc.go:250 +0x213 fp=0xc0004ebfe0 sp=0xc0004ebf80 pc=0x556ec7a84653
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0004ebfe8 sp=0xc0004ebfe0 pc=0x556ec7ab7021

goroutine 2 [force gc (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
    /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000064fb0 sp=0xc000064f90 pc=0x556ec7a84a16
runtime.goparkunlock(...)
    /usr/lib/golang/src/runtime/proc.go:369
runtime.forcegchelper()
    /usr/lib/golang/src/runtime/proc.go:302 +0xad fp=0xc000064fe0 sp=0xc000064fb0 pc=0x556ec7a848ad
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000064fe8 sp=0xc000064fe0 pc=0x556ec7ab7021
created by runtime.init.7
    /usr/lib/golang/src/runtime/proc.go:290 +0x25

goroutine 3 [GC sweep wait]:
runtime.gopark(0x1?, 0x0?, 0x0?, 0x0?, 0x0?)
    /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000065790 sp=0xc000065770 pc=0x556ec7a84a16
runtime.goparkunlock(...)
    /usr/lib/golang/src/runtime/proc.go:369
runtime.bgsweep(0x0?)
    /usr/lib/golang/src/runtime/mgcsweep.go:297 +0xd7 fp=0xc0000657c8 sp=0xc000065790 pc=0x556ec7a6f217
runtime.gcenable.func1()
    /usr/lib/golang/src/runtime/mgc.go:178 +0x26 fp=0xc0000657e0 sp=0xc0000657c8 pc=0x556ec7a63e66
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000657e8 sp=0xc0000657e0 pc=0x556ec7ab7021
created by runtime.gcenable
    /usr/lib/golang/src/runtime/mgc.go:178 +0x6b

goroutine 4 [GC scavenge wait]:
runtime.gopark(0xc00007c000?, 0x556ec8f323f0?, 0x0?, 0x0?, 0x0?)
    /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000065f70 sp=0xc000065f50 pc=0x556ec7a84a16
runtime.goparkunlock(...)
    /usr/lib/golang/src/runtime/proc.go:369
runtime.(*scavengerState).park(0x556eca035620)
    /usr/lib/golang/src/runtime/mgcscavenge.go:389 +0x53 fp=0xc000065fa0 sp=0xc000065f70 pc=0x556ec7a6d273
runtime.bgscavenge(0x0?)
    /usr/lib/golang/src/runtime/mgcscavenge.go:622 +0x65 fp=0xc000065fc8 sp=0xc000065fa0 pc=0x556ec7a6d865
runtime.gcenable.func2()
    /usr/lib/golang/src/runtime/mgc.go:179 +0x26 fp=0xc000065fe0 sp=0xc000065fc8 pc=0x556ec7a63e06
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000065fe8 sp=0xc000065fe0 pc=0x556ec7ab7021
created by runtime.gcenable
    /usr/lib/golang/src/runtime/mgc.go:179 +0xaa

goroutine 18 [finalizer wait]:
runtime.gopark(0x556eca037640?, 0xc0001024e0?, 0x0?, 0x0?, 0xc000064770?)
    /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000064628 sp=0xc000064608 pc=0x556ec7a84a16
runtime.goparkunlock(...)
    /usr/lib/golang/src/runtime/proc.go:369
runtime.runfinq()
    /usr/lib/golang/src/runtime/mfinal.go:180 +0x10f fp=0xc0000647e0 sp=0xc000064628 pc=0x556ec7a62eef
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000647e8 sp=0xc0000647e0 pc=0x556ec7ab7021
created by runtime.createfing
    /usr/lib/golang/src/runtime/mfinal.go:157 +0x45

goroutine 19 [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
    /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000060750 sp=0xc000060730 pc=0x556ec7a84a16
runtime.gcBgMarkWorker()
    /usr/lib/golang/src/runtime/mgc.go:1235 +0xf1 fp=0xc0000607e0 sp=0xc000060750 pc=0x556ec7a65fd1
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000607e8 sp=0xc0000607e0 pc=0x556ec7ab7021
created by runtime.gcBgMarkStartWorkers
    /usr/lib/golang/src/runtime/mgc.go:1159 +0x25

goroutine 5 [GC worker (idle)]:
runtime.gopark(0x0?, 0x0?, 0x0?, 0x0?, 0x0?)
    /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000066750 sp=0xc000066730 pc=0x556ec7a84a16
runtime.gcBgMarkWorker()
    /usr/lib/golang/src/runtime/mgc.go:1235 +0xf1 fp=0xc0000667e0 sp=0xc000066750 pc=0x556ec7a65fd1
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000667e8 sp=0xc0000667e0 pc=0x556ec7ab7021
created by runtime.gcBgMarkStartWorkers
    /usr/lib/golang/src/runtime/mgc.go:1159 +0x25

goroutine 6 [GC worker (idle)]:
runtime.gopark(0x458168b9f79?, 0x0?, 0x0?, 0x0?, 0x0?)
    /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000066f50 sp=0xc000066f30 pc=0x556ec7a84a16
runtime.gcBgMarkWorker()
    /usr/lib/golang/src/runtime/mgc.go:1235 +0xf1 fp=0xc000066fe0 sp=0xc000066f50 pc=0x556ec7a65fd1
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000066fe8 sp=0xc000066fe0 pc=0x556ec7ab7021
created by runtime.gcBgMarkStartWorkers
    /usr/lib/golang/src/runtime/mgc.go:1159 +0x25

goroutine 7 [GC worker (idle)]:
runtime.gopark(0x458168fe70a?, 0x0?, 0x0?, 0x0?, 0x0?)
    /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000067750 sp=0xc000067730 pc=0x556ec7a84a16
runtime.gcBgMarkWorker()
    /usr/lib/golang/src/runtime/mgc.go:1235 +0xf1 fp=0xc0000677e0 sp=0xc000067750 pc=0x556ec7a65fd1
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc0000677e8 sp=0xc0000677e0 pc=0x556ec7ab7021
created by runtime.gcBgMarkStartWorkers
    /usr/lib/golang/src/runtime/mgc.go:1159 +0x25

goroutine 8 [select, locked to thread]:
runtime.gopark(0xc000067fa8?, 0x2?, 0x0?, 0x0?, 0xc000067fa4?)
    /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000067e18 sp=0xc000067df8 pc=0x556ec7a84a16
runtime.selectgo(0xc000067fa8, 0xc000067fa0, 0x0?, 0x0, 0x0?, 0x1)
    /usr/lib/golang/src/runtime/select.go:328 +0x7bc fp=0xc000067f58 sp=0xc000067e18 pc=0x556ec7a94efc
runtime.ensureSigM.func1()
    /usr/lib/golang/src/runtime/signal_unix.go:995 +0x1b4 fp=0xc000067fe0 sp=0xc000067f58 pc=0x556ec7a99374
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000067fe8 sp=0xc000067fe0 pc=0x556ec7ab7021
created by runtime.ensureSigM
    /usr/lib/golang/src/runtime/signal_unix.go:978 +0xbd

goroutine 34 [syscall]:
runtime.notetsleepg(0x746e6f632061206d?, 0x65722072656e6961?)
    /usr/lib/golang/src/runtime/lock_futex.go:236 +0x34 fp=0xc00044e7a0 sp=0xc00044e768 pc=0x556ec7a561d4
os/signal.signal_recv()
    /usr/lib/golang/src/runtime/sigqueue.go:152 +0x2f fp=0xc00044e7c0 sp=0xc00044e7a0 pc=0x556ec7ab34cf
os/signal.loop()
    /usr/lib/golang/src/os/signal/signal_unix.go:23 +0x19 fp=0xc00044e7e0 sp=0xc00044e7c0 pc=0x556ec7e170b9
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00044e7e8 sp=0xc00044e7e0 pc=0x556ec7ab7021
created by os/signal.Notify.func1.1
    /usr/lib/golang/src/os/signal/signal.go:151 +0x2a

goroutine 35 [select]:
runtime.gopark(0xc00044efb0?, 0x2?, 0x69?, 0x6e?, 0xc00044eeac?)
    /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc000072d18 sp=0xc000072cf8 pc=0x556ec7a84a16
runtime.selectgo(0xc000072fb0, 0xc00044eea8, 0x746e6f632d6e6920?, 0x0, 0x6e61203020666f20?, 0x1)
    /usr/lib/golang/src/runtime/select.go:328 +0x7bc fp=0xc000072e58 sp=0xc000072d18 pc=0x556ec7a94efc
github.com/containers/podman/libpod/shutdown.Start.func1()
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/libpod/shutdown/handler.go:47 +0x93 fp=0xc000072fe0 sp=0xc000072e58 pc=0x556ec87d9eb3
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc000072fe8 sp=0xc000072fe0 pc=0x556ec7ab7021
created by github.com/containers/podman/libpod/shutdown.Start
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/libpod/shutdown/handler.go:46 +0xe7

goroutine 36 [chan receive]:
runtime.gopark(0x2265746176697270?, 0x6363612065726120?, 0x65?, 0x70?, 0x70206c6174634f20?)
    /usr/lib/golang/src/runtime/proc.go:363 +0xd6 fp=0xc00044f6f0 sp=0xc00044f6d0 pc=0x556ec7a84a16
runtime.chanrecv(0xc000458540, 0xc00044f7c8, 0x1)
    /usr/lib/golang/src/runtime/chan.go:583 +0x49b fp=0xc00044f780 sp=0xc00044f6f0 pc=0x556ec7a50bfb
runtime.chanrecv2(0x2020230a2e656761?, 0x6574617669727022?)
    /usr/lib/golang/src/runtime/chan.go:447 +0x18 fp=0xc00044f7a8 sp=0xc00044f780 pc=0x556ec7a50738
github.com/containers/podman/libpod.(*Runtime).startWorker.func1()
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/libpod/runtime_worker.go:6 +0x74 fp=0xc00044f7e0 sp=0xc00044f7a8 pc=0x556ec89444f4
runtime.goexit()
    /usr/lib/golang/src/runtime/asm_amd64.s:1594 +0x1 fp=0xc00044f7e8 sp=0xc00044f7e0 pc=0x556ec7ab7021
created by github.com/containers/podman/libpod.(*Runtime).startWorker
    /builddir/build/BUILD/containers-podman-dd490ff/_build/src/github.com/containers/podman/libpod/runtime_worker.go:5 +0x96
Failed to mount.
  1. This process runs in GitLab CI without issue.
  2. I'm running this locally with Kubernetes in Docker Desktop; however, the same issue occurs with minikube.

Eventlistener

---
apiVersion: triggers.tekton.dev/v1beta1
kind: EventListener
metadata:
  name: listener
spec:
  serviceAccountName: tekton-triggers-example-sa
  triggers:
    - name: pipeline-trigger
      bindings:
        - ref: vars-binding
      template:
        ref: pipeline-template

---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerBinding
metadata:
  name: vars-binding
spec:
  params:
    - name: repo-url
      value: $(body.repo-url)

---
apiVersion: triggers.tekton.dev/v1beta1
kind: TriggerTemplate
metadata:
  name: pipeline-template
spec:
  params:
    - name: repo-url
      description: The clone url for the source code.
  resourcetemplates:
    - apiVersion: tekton.dev/v1beta1
      kind: PipelineRun
      metadata:
        generateName: container-hardening-run-
      spec:
        pipelineRef:
          name: container-hardening-pipeline
        podTemplate:
          securityContext:
            fsGroup: 1000
            runAsUser: 1000
            runAsGroup: 1000
        params:
          - name: repo-url
            value: $(tt.params.repo-url)
        workspaces:
          - name: shared-data
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteMany
                resources:
                  requests:
                    storage: 1Gi
          - name: git-credentials
            secret:
              secretName: git-credentials

Pipeline

apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: container-hardening-pipeline
spec:
  description: |
    This pipeline clones a git repo, then echoes the README file to stdout.
  params:
    - name: repo-url
      type: string
      description: The git repo URL to clone from.
  workspaces:
    - name: shared-data
      description: |
        This workspace contains the cloned repo files, so they can be read by the
        next task.
    - name: git-credentials
      description: My ssh credentials
  tasks:
    #
    # Task One
    #
    - name: fetch-source
      taskRef:
        name: git-clone
      workspaces:
        # The following workspace names are required and cannot change if using the Tekton-provided template.
        - name: output
          workspace: shared-data
        - name: ssh-directory
          workspace: git-credentials
      params:
        - name: url
          value: $(params.repo-url)

    #
    # Task Two
    #
    - name: build-image
      runAfter: ["fetch-source"]
      taskRef:
        name: build-container
      workspaces:
        - name: output
          workspace: shared-data
    #
    # Task Three
    #
    - name: oscap-scan
      runAfter: ["build-image"]
      taskRef:
        name: oscap-scan
      workspaces:
        - name: output
          workspace: shared-data

Task

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: oscap-scan
spec:
  description: Scans the image with oscap.
  workspaces:
    - name: output
  params:
    - name: outputDir
      description: Directory to store output files.
      type: string
      default: "outputs"
    - name: imageOSName
      description: Image OS.
      type: string
      default: "redhat"
    - name: imageOSVersion
      description: Image OS version.
      type: string
      default: "ubi8"
    - name: imageAppName
      description: Image application name.
      type: string
      default: "php"
    - name: tag
      description: Image tag.
      type: string
      default: "1.0.0"
    - name: image
      description: The image providing the python binary that this Task runs.
      type: string
      default: "our internal nexus repo oscap image"
  results:
    - name: out
      description: Test

  steps:
    - name: scan
      image: "$(params.image)"
      env:
        - name: OUTPUT_DIR
          value: "$(workspaces.output.path)/$(params.outputDir)"
        - name: IMAGE_OS_NAME
          value: "$(params.imageOSName)"
        - name: IMAGE_OS_VERSION
          value: "$(params.imageOSVersion)"
        - name: IMAGE_APP_NAME
          value: "$(params.imageAppName)"
        - name: TAG
          value: "$(params.tag)"
        - name: OSCAP_PROFILE
          value: "stig"
        - name: OSCAP_SSG
          value: "ssg-rhel8-ds-1.2.xml"
      securityContext:
        runAsUser: 0
      script: |
        #!/usr/bin/env sh
        ls $OUTPUT_DIR
        podman image load -i $OUTPUT_DIR/oci-$IMAGE_OS_NAME.$IMAGE_OS_VERSION.$IMAGE_APP_NAME.$TAG.tar
        oscap-podman $IMAGE_OS_NAME/$IMAGE_OS_VERSION/$IMAGE_APP_NAME:$TAG xccdf eval \
        --report $OUTPUT_DIR/$TAG.beforeScan.html \
        --results $OUTPUT_DIR/$TAG.beforeScan.xml \
        --profile $OSCAP_PROFILE \
        --fetch-remote-resources /usr/share/xml/scap/ssg/content/$OSCAP_SSG

Thanks!

vdemeester commented 1 year ago

@elalo-deloitte Thanks for the issue. The "panic" happened in the task, making it fail, right? If that's the case, this is most likely an oscap-podman issue, and it would need to be reported in the corresponding issue tracker (which, I think, is https://github.com/OpenSCAP/openscap/issues).
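For context on why the trace points at podman rather than Tekton: the first non-runtime frame shows `MountContainerImage(0x0, ...)`, i.e. a method called on a nil `*storageService` receiver. A minimal Go sketch (hypothetical types, not podman's actual code) reproduces the same class of panic:

```go
package main

import "fmt"

// storageService stands in for podman's libpod storageService. Calling a
// method on a nil pointer is legal in Go; the panic happens only when the
// method dereferences the receiver, as podman's storage.go:219 does.
type storageService struct{ root string }

func (s *storageService) MountContainerImage(id string) string {
	return s.root + "/" + id // s is nil here -> SIGSEGV / nil pointer panic
}

func main() {
	defer func() {
		if r := recover(); r != nil {
			fmt.Println("recovered:", r)
		}
	}()
	var svc *storageService // never initialized, mirroring a failed storage setup
	fmt.Println(svc.MountContainerImage("c3818ac5"))
	// → recovered: runtime error: invalid memory address or nil pointer dereference
}
```

This is why the fix belongs in podman/oscap-podman: the storage service failed to initialize (here, because of missing privileges) and the nil result was used without a check.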

elalo-deloitte commented 1 year ago

@vdemeester thanks for the reply. I wondered that too, but this same container and overall process runs fine in GitLab CI and in my local Docker environment.

That makes me think it's related to Tekton.
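One way to confirm an environment difference (a hypothetical diagnostic, not something run in this thread) is to print the process capability sets in both environments and compare them:

```shell
# Print the capability bitmasks of the current process. Run this both as a
# step in the Tekton Task and as a job in GitLab CI, then compare the CapEff
# (effective capabilities) lines: any bits present under GitLab but missing
# in the Tekton pod are capabilities the pod's security context drops.
grep Cap /proc/self/status

# If the capsh utility is installed, it can decode a mask into names, e.g.:
#   capsh --decode=00000000a80425fb
```

If the effective capability sets differ, that points at the pod's security context rather than the image itself.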

vdemeester commented 1 year ago

@elalo-deloitte it's probably related to the permissions the TaskRun runs with, but it isn't directly related to Tekton and tektoncd/pipeline.

Depending on your platform, by default, the Pod (and thus the TaskRun) might run as a random UID, or as root but with some privileges removed, … And depending on that, oscap-podman might fail. Depending on where gitlab-ci runs (a VM, a privileged container, …), it probably has more privileges and thus succeeds there. For example, to get podman to run in a container in Kubernetes, it needs a set of privileges (SETFCAP, …, allowPrivilegeEscalation) to run successfully, and those privileges are usually disabled by default. It is probably expected that it fails, but oscap-podman should not panic; it should give you a human-readable message instead (explaining what's missing, so you can figure out what to do to make it run successfully). Hence my recommendation to file a bug on https://github.com/OpenSCAP/openscap/issues 👼🏼.
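As a sketch of what widening those privileges could look like (assuming the cluster's admission policy allows it; the exact capability list is an assumption that depends on the platform and podman version, not something confirmed in this thread), the scan step's securityContext could be extended:

```yaml
# Hypothetical adjustment to the oscap-scan Task's step above.
steps:
  - name: scan
    image: "$(params.image)"
    securityContext:
      runAsUser: 0
      allowPrivilegeEscalation: true
      capabilities:
        add: ["SETFCAP", "SETUID", "SETGID"]
      # Broadest (and most often disallowed) alternative:
      # privileged: true
```

Even with this, oscap-podman failing with a clear error rather than a panic is the behavior to ask for upstream.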

elalo-deloitte commented 1 year ago

Understood, thanks for info!