Question (closed by maorkuriel 3 years ago)

I am running the ./update-and-run.sh script and it fails with the following error:

kubernetes/tekton-resources/demo on main [!?] took 2s ❯ ./update-and-run.sh
Error: uninstall: Release not loaded: gatekeeper-template: release: not found
Error: uninstall: Release not loaded: demo: release: not found
Release "gatekeeper-template" does not exist. Installing it now.
NAME: gatekeeper-template
LAST DEPLOYED: Mon Nov 1 12:54:09 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
Wait 5 seconds for gatekeeper-constraint-templates to become available
Release "demo" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Task "kaniko" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "demo"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
@maorkuriel, are you seeing this issue on the main branch? It should already be resolved there.
Yes, @joshualucas84, this is from the main branch. I cloned it two hours ago and ran the scripts.
Roger that. @maorkuriel, could you run the following in the same directory and post the results?

helm upgrade -i demo image-verification --values image-verification/values.yaml --debug
Here you go, @joshualucas84:
kubernetes/tekton-resources/demo on main [!?] ❯ helm upgrade -i demo image-verification --values image-verification/values.yaml --debug
history.go:56: [debug] getting history for release demo
Release "demo" does not exist. Installing it now.
install.go:178: [debug] Original chart version: ""
install.go:199: [debug] CHART PATH: /Users/maorkuriel/Documents/GitHub/ssf/kubernetes/tekton-resources/demo/image-verification

Error: rendered manifests contain a resource that already exists. Unable to continue with install: Task "kaniko" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "demo"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
helm.go:88: [debug] Task "kaniko" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "demo"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
        helm.sh/helm/v3/pkg/action/install.go:295
main.runInstall
        helm.sh/helm/v3/cmd/helm/install.go:265
main.newUpgradeCmd.func2
        helm.sh/helm/v3/cmd/helm/upgrade.go:124
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.2.1/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.2.1/command.go:974
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.2.1/command.go:902
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:87
runtime.main
        runtime/proc.go:255
runtime.goexit
        runtime/asm_arm64.s:1133

kubernetes/tekton-resources/demo on main [!?] took 2s ❯
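For context, this error means a Task named kaniko already exists in the cluster but does not carry Helm's ownership metadata, so Helm refuses to import it into the release. A quick way to confirm this is to inspect the labels and annotations on the live object (a sketch, assuming the Task lives in the default namespace and the Tekton CRDs are installed):

kubectl get task.tekton.dev kaniko -n default -o jsonpath='{.metadata.labels}{"\n"}{.metadata.annotations}{"\n"}'

If the app.kubernetes.io/managed-by label and the meta.helm.sh/release-name and meta.helm.sh/release-namespace annotations are absent, the install fails exactly as shown above.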
@maorkuriel, could you try the changes in this PR to see if they address your issue: https://github.com/thesecuresoftwarefactory/ssf/pull/28
Hi @joshualucas84, it didn't work. Here is the output:
kubernetes/tekton-resources/demo on main [!?] took 11s ❯ ./update-and-run.sh
release "gatekeeper-template" uninstalled
Error: uninstall: Release not loaded: demo: release: not found
Release "gatekeeper-template" does not exist. Installing it now.
NAME: gatekeeper-template
LAST DEPLOYED: Mon Nov 1 15:19:29 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
Wait 5 seconds for gatekeeper-constraint-templates to become available
Release "demo" does not exist. Installing it now.
Error: rendered manifests contain a resource that already exists. Unable to continue with install: Task "kaniko" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "demo"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"

kubernetes/tekton-resources/demo on main [!?] took 11s ❯ helm upgrade -i demo image-verification --values image-verification/values.yaml --debug
history.go:56: [debug] getting history for release demo
Release "demo" does not exist. Installing it now.
install.go:178: [debug] Original chart version: ""
install.go:199: [debug] CHART PATH: /Users/maorkuriel/Documents/GitHub/ssf/kubernetes/tekton-resources/demo/image-verification

Error: rendered manifests contain a resource that already exists. Unable to continue with install: Task "kaniko" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "demo"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
helm.go:88: [debug] Task "kaniko" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "demo"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
rendered manifests contain a resource that already exists. Unable to continue with install
helm.sh/helm/v3/pkg/action.(*Install).RunWithContext
        helm.sh/helm/v3/pkg/action/install.go:295
main.runInstall
        helm.sh/helm/v3/cmd/helm/install.go:265
main.newUpgradeCmd.func2
        helm.sh/helm/v3/cmd/helm/upgrade.go:124
github.com/spf13/cobra.(*Command).execute
        github.com/spf13/cobra@v1.2.1/command.go:856
github.com/spf13/cobra.(*Command).ExecuteC
        github.com/spf13/cobra@v1.2.1/command.go:974
github.com/spf13/cobra.(*Command).Execute
        github.com/spf13/cobra@v1.2.1/command.go:902
main.main
        helm.sh/helm/v3/cmd/helm/helm.go:87
runtime.main
        runtime/proc.go:255
runtime.goexit
        runtime/asm_arm64.s:1133

kubernetes/tekton-resources/demo on main [!?] took 2s ❯
@maorkuriel, sorry for all the trouble. I was running into this issue with the pipelinerun.yaml and pipeline.yaml; I've never seen it with the kaniko task YAML. Could you run the following commands for me and upload the results?

helm version
helm template demo image-verification --values image-verification/values.yaml
Hi @joshualucas84, no problem. Here is the output:
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build-bot
  labels:
    meta.helm.sh/release-name: demo
    meta.helm.sh/release-namespace: default
    app.kubernetes.io/instance: demo
    app.kubernetes.io/managed-by: Helm
imagePullSecrets:
secrets:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: kaniko-source-pvc
spec:
  accessModes:
---
apiVersion: v1
kind: Service
metadata:
  name: gatekeeper-signing-checker-service
  namespace: gatekeeper
spec:
  selector:
    app: gatekeeper-signing-checker
  ports:
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: gatekeeper-signing-checker
  name: gatekeeper-signing-checker
  namespace: gatekeeper
spec:
  internalTrafficPolicy: Cluster
  ipFamilies:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gatekeeper-signing-checker
  namespace: gatekeeper
  labels:
    name: gatekeeper-signing-checker
spec:
  selector:
    matchLabels:
      run: gatekeeper-signing-checker
  replicas: 1
  template:
    metadata:
      labels:
        run: gatekeeper-signing-checker
    spec:
      imagePullSecrets:
---
apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: AssignMetadata
metadata:
  name: annotation-salsa
spec:
  match:
    scope: Namespaced
    kinds:
---
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: verify-image
  annotations:
    policies.kyverno.io/title: Verify Image
    policies.kyverno.io/category: Sample
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/minversion: 1.4.2
    policies.kyverno.io/description: >-
      Using the Cosign project, OCI images may be signed to ensure supply chain
      security is maintained. Those signatures can be verified before pulling into
      a cluster. This policy checks the signature of an image repo called
      ghcr.io/kyverno/test-verify-image to ensure it has been signed by verifying
      its signature against the provided public key. This policy serves as an
      illustration for how to configure a similar rule and will require replacing
      with your image(s) and keys.
spec:
  validationFailureAction: enforce
  background: false
  rules:
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredAttestations
metadata:
  name: image-must-have-valid-slsa-attestations
  labels:
    meta.helm.sh/release-name: demo
    meta.helm.sh/release-namespace: default
    app.kubernetes.io/instance: demo
    app.kubernetes.io/managed-by: Helm
spec:
  enforcementAction: deny
  match:
    kinds:
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sRequiredSignatures
metadata:
  name: image-must-have-signature
  labels:
    meta.helm.sh/release-name: demo
    meta.helm.sh/release-namespace: default
    app.kubernetes.io/instance: demo
    app.kubernetes.io/managed-by: Helm
spec:
  enforcementAction: deny
  match:
    kinds:
---
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: kaniko-cargo-pipeline
spec:
  workspaces:
---
apiVersion: tekton.dev/v1beta1
kind: PipelineRun
metadata:
  name: kaniko-cargo-pipeline-run-1
  labels:
    meta.helm.sh/release-name: demo
    meta.helm.sh/release-namespace: default
    app.kubernetes.io/instance: demo
    app.kubernetes.io/managed-by: Helm
spec:
  serviceAccountName: build-bot
  pipelineRef:
    name: kaniko-cargo-pipeline
  params:
      value: ttl.sh/foo-kaniko-chains-demo:1h
  workspaces:
---
apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: kaniko
  labels:
    app.kubernetes.io/version: "0.5"
    meta.helm.sh/release-name: demo
    meta.helm.sh/release-namespace: default
    app.kubernetes.io/instance: demo
    app.kubernetes.io/managed-by: Helm
  annotations:
    tekton.dev/pipelines.minVersion: "0.17.0"
    tekton.dev/categories: Image Build
    tekton.dev/displayName: "Build and upload container image using Kaniko"
    tekton.dev/platforms: "linux/amd64"
spec:
  description: >-
    This Task builds source into a container image using Google's kaniko tool.

    Kaniko doesn't depend on a Docker daemon and executes each command within
    a Dockerfile completely in userspace. This enables building container
    images in environments that can't easily or securely run a Docker daemon,
    such as a standard Kubernetes cluster.
  params:
    config.json
    optional: true
    mountPath: /kaniko/.docker
  results:
    - name: IMAGE_URL
      description: URL of the image just built.
  steps:
    - image: gcr.io/projectsigstore/cosign:v1.2.1@sha256:68801416e6ae0a48820baa3f071146d18846d8cd26ca8ec3a1e87fca8a735498
      args:
        set -e
        IMAGE_DIGEST=`cat $(results.IMAGE_DIGEST.path) | sed s/:/-/`
        echo "Uploading: cosign upload blob -f $(workspaces.sboms.path)/bom-sources.json $(params.IMAGE):$IMAGE_DIGEST.sbom.sources"
        cosign upload blob -f $(workspaces.sboms.path)/bom-sources.json $(params.IMAGE):$IMAGE_DIGEST.sbom.sources
        echo "Signing: cosign sign -key $(workspaces.sbom-key.path)/cosign.key $(params.IMAGE):$IMAGE_DIGEST.sbom.sources"
        cosign sign -key $(workspaces.sbom-key.path)/cosign.key $(params.IMAGE):$IMAGE_DIGEST.sbom.sources
kubernetes/tekton-resources/demo on main [!?] ❯ helm version
version.BuildInfo{Version:"v3.7.1", GitCommit:"1d11fcb5d3f3bf00dbe6fe31b8412839a96b3dc4", GitTreeState:"clean", GoVersion:"go1.17.2"}
kubernetes/tekton-resources/demo on main [!?] ❯
@maorkuriel, run the following command:

kubectl delete -f https://raw.githubusercontent.com/tektoncd/catalog/v1beta1/kaniko/kaniko.yaml

and then rerun ./update-and-run.sh. That should fix your install issues. https://github.com/thesecuresoftwarefactory/ssf/pull/19/files also addresses this issue.
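An alternative to deleting the Task, for anyone who wants to keep it, is to let Helm adopt it. Since Helm 3.2, an existing resource can be imported into a release by adding the ownership metadata the error message asks for (a sketch, assuming release demo in namespace default):

kubectl label task.tekton.dev kaniko -n default app.kubernetes.io/managed-by=Helm
kubectl annotate task.tekton.dev kaniko -n default meta.helm.sh/release-name=demo meta.helm.sh/release-namespace=default

After that, rerunning helm upgrade -i demo image-verification --values image-verification/values.yaml should take ownership of the Task instead of erroring out.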
Thanks @joshualucas84, your last comment fixed the issue.
I am now facing a new one: the pipeline fails to run and reports the following:

CouldntCreateAffinityAssistantStatefulSet: Failed to create StatefulSet for PipelineRun default/kaniko-cargo-pipeline-run-1 correctly: failed to create StatefulSet affinity-assistant-13a1e351e2: Internal error occurred: failed calling webhook "mutate.kyverno.svc-fail": failed to call webhook: Post "https://kyverno-svc.kyverno.svc:443/mutate?timeout=10s": EOF
If I follow the original demo flow this should not happen. Can you help me with this issue?
Thanks for all your help, I appreciate your time.
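For context, an EOF from the kyverno admission webhook like this usually means the kyverno pod itself is crashing or not ready, so the API server cannot reach the webhook endpoint. A quick health check (a sketch, assuming kyverno was installed into the kyverno namespace with the default deployment name kyverno, as in the setup script):

kubectl get pods -n kyverno
kubectl logs -n kyverno deploy/kyverno --tail=50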
@maorkuriel, there was an issue in kyverno that was recently fixed. Try replacing the following lines in 30-kyverno-setup.sh:
# Assumes helm already installed by previous scripts
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
helm upgrade --install kyverno kyverno/kyverno -n kyverno --create-namespace --set extraArgs="{--webhooktimeout=15,--imagePullSecrets=regcred}"
with
kubectl apply -f https://raw.githubusercontent.com/kyverno/kyverno/main/config/release/install.yaml
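After applying that manifest, it is worth confirming the webhook is actually serving before rerunning the pipeline (a sketch; the deployment and namespace names assume the upstream install.yaml defaults):

kubectl -n kyverno rollout status deploy/kyverno --timeout=120s
kubectl get validatingwebhookconfigurations,mutatingwebhookconfigurations | grep kyverno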
Thank you @bradbeck, it looks like this solved the issue. I will keep working with the demo env. One last question: when I run the script now, the kaniko-cargo-pipeline-run-1 pipeline is pending on the 2nd step, right after it completed the fetch-reposetory step.
Looks like another issue: when I look at the TaskRun tab I see pod status "Initialized": "False"; message: "containers with incomplete status: [place-tools place-scripts working-dir-initializer]".
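For what it's worth, place-tools, place-scripts, and working-dir-initializer are init containers injected by Tekton, so a pod stuck at "Initialized": "False" usually means an init container cannot pull its image or is being blocked by an admission webhook. Describing the pod and checking recent events normally points at the cause (a sketch; the pod name placeholder below is hypothetical):

kubectl get pods -n default | grep kaniko-cargo-pipeline-run-1
kubectl describe pod <taskrun-pod-name> -n default
kubectl get events -n default --sort-by=.lastTimestamp | tail -20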
Closing this ticket, as a new issue was opened which addresses this: https://github.com/thesecuresoftwarefactory/ssf/issues/32