confidential-containers / cloud-api-adaptor

Ability to create Kata pods using cloud provider APIs aka the peer-pods approach
Apache License 2.0

versions: Bump components to match kata 3.9.0 #2036

Closed - stevenhorsman closed this 1 month ago

stevenhorsman commented 2 months ago

FYI - I saw the nginx deploy test pass, so I removed the timeout-increase commit, and now it's back to failing. Locally it's fine too:

=== RUN   TestLibvirtCreateNginxDeployment/Nginx_image_deployment_test/Access_for_nginx_deployment_test
=== NAME  TestLibvirtCreateNginxDeployment/Nginx_image_deployment_test
    nginx_deployment.go:136: Deleting webserver deployment...
    nginx_deployment.go:141: Deleting deployment nginx-deployment...
    nginx_deployment.go:148: Deployment nginx-deployment has been successfully deleted within 120s
--- PASS: TestLibvirtCreateNginxDeployment (147.60s)
    --- PASS: TestLibvirtCreateNginxDeployment/Nginx_image_deployment_test (147.60s)

The test is becoming a problem that is slowing PRs down a lot :(

stevenhorsman commented 2 months ago

I'm putting a hold on this as the KBS test is failing, which is very relevant to this PR's changes. I'll try to debug it locally to see if I can uncover anything 🤞

stevenhorsman commented 2 months ago

Locally all the tests pass (including the CI skipped ones) except the KBS release test:

--- FAIL: TestLibvirtKbsKeyRelease (220.56s)
    --- PASS: TestLibvirtKbsKeyRelease/DoTestKbsKeyReleaseForFailure_test (110.16s)
        --- PASS: TestLibvirtKbsKeyRelease/DoTestKbsKeyReleaseForFailure_test/Kbs_key_release_is_failed (5.12s)
    --- FAIL: TestLibvirtKbsKeyRelease/KbsKeyReleasePod_test (110.18s)
        --- FAIL: TestLibvirtKbsKeyRelease/KbsKeyReleasePod_test/Kbs_key_release_is_successful (5.12s)
FAIL
FAIL    github.com/confidential-containers/cloud-api-adaptor/src/cloud-api-adaptor/test/e2e 2186.679s
FAIL

Looking at the KBS log, it seems like the GET request was never made to it?

[2024-09-24T11:11:40Z INFO  kbs] Using config file /etc/kbs/kbs-config.toml
[2024-09-24T11:11:40Z WARN  attestation_service::rvps] No RVPS address provided and will launch a built-in rvps
[2024-09-24T11:11:40Z INFO  attestation_service::token::simple] No Token Signer key in config file, create an ephemeral key and without CA pubkey cert
[2024-09-24T11:11:40Z INFO  kbs] Starting HTTP server at [0.0.0.0:8080]
[2024-09-24T11:11:40Z INFO  actix_server::builder] starting 4 workers
[2024-09-24T11:11:40Z INFO  actix_server::server] Tokio runtime found; starting in existing Tokio runtime
[2024-09-24T13:07:50Z ERROR kbs::http::error] Set secret failed: write local fs
[2024-09-24T13:07:50Z INFO  actix_web::middleware::logger] 10.244.1.1 "POST /kbs/v0/resource/reponame/workload_key/key.bin HTTP/1.1" 401 125 "-" "kbs-client/0.1.0" 0.002763
[2024-09-24T13:07:50Z INFO  actix_web::middleware::logger] 10.244.1.1 "POST /kbs/v0/resource-policy HTTP/1.1" 200 0 "-" "kbs-client/0.1.0" 0.003217
[2024-09-24T13:07:50Z INFO  actix_web::middleware::logger] 10.244.1.1 "POST /kbs/v0/attestation-policy HTTP/1.1" 200 0 "-" "kbs-client/0.1.0" 0.000831
[2024-09-24T13:13:20Z INFO  actix_web::middleware::logger] 10.244.1.1 "POST /kbs/v0/attestation-policy HTTP/1.1" 200 0 "-" "kbs-client/0.1.0" 0.001487

The set-secret call did return an error though, so I'm not sure whether that is also related:

stevenhorsman commented 2 months ago

When I use the kbs-client manually, I also see the error:

./kbs-client --url http://192.168.122.146:30488 config --auth-private-key ../../kbs/config/kubernetes/base/kbs.key set-resource --path reponame/workload_key/key.bin --resource-file ../../kbs/config/kubernetes/overlays/x86_64/key.bin
Error: Request Failed, Response: "{\"type\":\"https://github.com/confidential-containers/kbs/errors/SetSecretFailed\",\"detail\":\"Set secret failed: write local fs\"}"
stevenhorsman commented 2 months ago

> When I use the kbs-client manually I also see the error:
>
> ./kbs-client --url http://192.168.122.146:30488 config --auth-private-key ../../kbs/config/kubernetes/base/kbs.key set-resource --path reponame/workload_key/key.bin --resource-file ../../kbs/config/kubernetes/overlays/x86_64/key.bin
> Error: Request Failed, Response: "{\"type\":\"https://github.com/confidential-containers/kbs/errors/SetSecretFailed\",\"detail\":\"Set secret failed: write local fs\"}"

After over an hour of debugging, it looks like the set-secret error is a red herring and not the cause of the issue 😖. It might be something we should consider changing anyway.

stevenhorsman commented 2 months ago

That's maybe 3 hours down the drain, but I've ended up with https://github.com/confidential-containers/cloud-api-adaptor/pull/2055 to refactor the tests and avoid the error - yet even with this code it's still failing for me!

stevenhorsman commented 2 months ago

Ok, through manual testing I've managed to track down the problem with the failing test. With the 0.10.0 guest-components we can't talk to the cdh from the workload:

# kubectl exec --stdin --tty kbs-key-release -n coco-pp-e2e-test-9a681be7 -- wget -q -O- http://127.0.0.1:8006/cdh/resource/reponame/workload_key/key.bin
wget: can't connect to remote host (127.0.0.1): Connection refused
command terminated with exit code 1

vs. with our current version of guest-components (which is 2 months old):

# kubectl exec --stdin --tty kbs-key-release -n coco-pp-e2e-test-af08b3f2 -- wget -q -O- http://127.0.0.1:8006/cdh/resource/reponame/workload_key/key.bin
This is my cluster name:

I'm not sure of the reason for the change, though.
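For what it's worth, a quick way to narrow down whether it's the REST proxy (api-server-rest) or the cdh itself that is unreachable would be a sketch like the one below, run inside the podvm. These are hypothetical diagnostic commands: the 8006 port comes from the wget above, and the unit names are assumptions.

```shell
# Check the relevant services, if systemd is available
if command -v systemctl >/dev/null; then
    systemctl is-active api-server-rest.service confidential-data-hub.service || true
fi

# Check whether anything is actually listening on the port wget targets
if ss -ltn 2>/dev/null | grep -q ':8006 '; then
    echo "listener found on port 8006"
else
    echo "no listener on port 8006 - api-server-rest/cdh likely failed to start"
fi
```

A "connection refused" from wget (rather than a timeout) usually means no process is bound to the port at all, which would point at the services not starting rather than a network or policy problem.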

mkulke commented 2 months ago

> Ok, through manual testing I've managed to track down the problem with the failing test. With the 0.10.0 guest-components we can't talk to the cdh from the workload:
>
> # kubectl exec --stdin --tty kbs-key-release -n coco-pp-e2e-test-9a681be7 -- wget -q -O- http://127.0.0.1:8006/cdh/resource/reponame/workload_key/key.bin
> wget: can't connect to remote host (127.0.0.1): Connection refused
> command terminated with exit code 1
>
> vs. with our current version of guest-components (which is 2 months old):
>
> # kubectl exec --stdin --tty kbs-key-release -n coco-pp-e2e-test-af08b3f2 -- wget -q -O- http://127.0.0.1:8006/cdh/resource/reponame/workload_key/key.bin
> This is my cluster name:
>
> I'm not sure of the reason for the change, though.

there should be a hint of what's going wrong in the log journals of those services

mkulke commented 2 months ago

note: there is also a protocol change so GC 0.9 will not be able to talk to KBS 0.10 afaik

stevenhorsman commented 2 months ago

> there should be a hint of what's going wrong in the log journals of those services

Sorry, it took some time for me to get the ssh-able podvm image working. There is nothing in the cdh journal, as the service didn't start properly:

confidential-data-hub.service - Confidential Data Hub TTRPC API Server
     Loaded: loaded (/etc/systemd/system/confidential-data-hub.service; disabled; vendor preset: enabled)
     Active: failed (Result: exit-code) since Wed 2024-09-25 12:59:36 UTC; 4min 11s ago
TriggeredBy: ● confidential-data-hub.path
    Process: 1012 ExecStart=/bin/bash -c if [ -f /run/peerpod/cdh.toml ];      then /usr/local/bin/confidential-data-hub -c /run/peerpod/cdh.toml;      else /usr/local/bin/confidential-data-hub;      fi (code=exited, s>
   Main PID: 1012 (code=exited, status=1/FAILURE)

If I try to launch it manually:

/usr/local/bin/confidential-data-hub -c /run/peerpod/cdh.toml
[2024-09-25T13:05:51Z INFO  confidential_data_hub::config] Use configuration file /run/peerpod/cdh.toml
Error: init Hub failed: kbs client creation failed: Kbs client error: create AA token provider failed: Attestation Agent token provider error: ttrpc connect failed Nix error: EACCES: Permission denied

The attestation-agent is running, but the api-server-rest (asr) is not - I guess because it needs the cdh:

systemctl status attestation-agent.service
● attestation-agent.service - Attestation Agent TTRPC API Server
     Loaded: loaded (/etc/systemd/system/attestation-agent.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2024-09-25 12:59:30 UTC; 7min ago
    Process: 861 ExecStartPre=/usr/bin/mkdir -p /run/confidential-containers/attestation-agent (code=exited, status=0/SUCCESS)
   Main PID: 869 (bash)
      Tasks: 4 (limit: 9508)
     Memory: 4.7M
     CGroup: /system.slice/attestation-agent.service
             ├─869 /bin/bash -c if [ -f /run/peerpod/aa.toml ];      then /usr/local/bin/attestation-agent -c /run/peerpod/aa.toml;      else /usr/local/bin/attestation-agent;      fi
             └─887 /usr/local/bin/attestation-agent -c /run/peerpod/aa.toml

Warning: some journal files were not opened due to insufficient permissions.
peerpod@podvm-kbs-key-release-76d2abd6:~$ systemctl status api-server-rest.service
● api-server-rest.service - CDH API Server Rest Service
     Loaded: loaded (/etc/systemd/system/api-server-rest.service; disabled; vendor preset: enabled)
     Active: inactive (dead)
TriggeredBy: ● api-server-rest.path
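The EACCES from launching the cdh manually points at the AA ttrpc socket. A permission check inside the podvm might look like the sketch below; the socket filename is an assumption - only the /run/confidential-containers/attestation-agent directory is confirmed by the unit's ExecStartPre above.

```shell
# Hypothetical diagnostic for the EACCES above; adjust the path as needed
SOCK=/run/confidential-containers/attestation-agent/attestation-agent.sock
if [ -e "$SOCK" ]; then
    ls -l "$SOCK"   # check owner and mode: EACCES means the cdh process cannot open it
    id              # confirm which user/groups the caller runs with
else
    echo "socket missing: $SOCK"
fi
```

If the socket exists but is owned by root with a restrictive mode, and the cdh unit runs as an unprivileged user, that would explain the "Nix error: EACCES: Permission denied" without involving the KBS at all.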
stevenhorsman commented 2 months ago

Bisecting and picking guest-components 8e90d7430476cf7759fbbb75d1cd6364095dd63a (with the old KBS), the test fails, but the cdh service starts. Its log shows the error:

Sep 25 13:20:08 podvm-kbs-key-release-bb7fab84 bash[873]: [2024-09-25T13:20:08Z INFO  confidential_data_hub::hub] get resource called: kbs:///reponame/workload_key/key.bin
Sep 25 13:20:12 podvm-kbs-key-release-bb7fab84 bash[873]: [2024-09-25T13:20:12Z ERROR ttrpc_cdh::ttrpc_server] [ttRPC CDH] GetResource :
Sep 25 13:20:12 podvm-kbs-key-release-bb7fab84 bash[873]:     get resource failed
Sep 25 13:20:12 podvm-kbs-key-release-bb7fab84 bash[873]:     Caused by: Kbs client error: get resource failed: get token failed: Attestation Agent token provider error: cal ttrpc failed: rpc status: Status { code: INTERNAL, message: "[ERROR:attestation-agent] AA-KBC get token failed", details: [], special_fields: SpecialFields { unknown_fields: UnknownFields { fields: None }, cached_size: CachedSize { size: 0 } } }

This is due to a protocol version error:

Sep 25 13:20:12 podvm-kbs-key-release-bb7fab84 bash[861]: [2024-09-25T13:20:12Z ERROR ttrpc_aa::server] AA (ttrpc): get token failed
Sep 25 13:20:12 podvm-kbs-key-release-bb7fab84 bash[861]:      RCAR handshake failed: Unable to get token. RCAR handshake retried 5 times. Final attempt failed with: RCAR handshake failed: KBS request unauthorized, ErrorInformation: ErrorInformation { error_type: "https://github.com/confidential-containers/kbs/errors/FailedAuthentication", detail: "Authentication failed: Session: Invalid Request version 0.1.1" }

So I'll try switching back to the latest KBS.

mkulke commented 2 months ago

> If I try to launch it manually:
>
> /usr/local/bin/confidential-data-hub -c /run/peerpod/cdh.toml
> [2024-09-25T13:05:51Z INFO  confidential_data_hub::config] Use configuration file /run/peerpod/cdh.toml
> Error: init Hub failed: kbs client creation failed: Kbs client error: create AA token provider failed: Attestation Agent

that's concerning. this doesn't have anything to do with kbs or protocol incompatibilities. that looks like CDH fails to open the AA socket.

stevenhorsman commented 2 months ago

Sorry, I found that there was CDH logging I had missed:

$ sudo journalctl -u confidential-data-hub.service
-- Logs begin at Wed 2024-09-25 13:59:31 UTC, end at Wed 2024-09-25 14:02:10 UTC. --
Sep 25 14:00:43 podvm-kbs-key-release-dafc1ee4 systemd[1]: Started Confidential Data Hub TTRPC API Server.
Sep 25 14:00:43 podvm-kbs-key-release-dafc1ee4 bash[877]: [2024-09-25T14:00:43Z INFO  confidential_data_hub::config] Use configuration file /run/peerpod/cdh.toml
Sep 25 14:00:43 podvm-kbs-key-release-dafc1ee4 bash[877]: [2024-09-25T14:00:43Z INFO  kms::plugins::kbs::cc_kbc] Use KBS public key cert
Sep 25 14:00:43 podvm-kbs-key-release-dafc1ee4 bash[877]: Error: init Hub failed: kbs client creation failed: Kbs client error: create kbs client failed: builder error
Sep 25 14:00:43 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Main process exited, code=exited, status=1/FAILURE
Sep 25 14:00:43 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Failed with result 'exit-code'.
Sep 25 14:00:44 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Scheduled restart job, restart counter is at 1.
Sep 25 14:00:44 podvm-kbs-key-release-dafc1ee4 systemd[1]: Stopped Confidential Data Hub TTRPC API Server.
Sep 25 14:00:44 podvm-kbs-key-release-dafc1ee4 systemd[1]: Started Confidential Data Hub TTRPC API Server.
Sep 25 14:00:44 podvm-kbs-key-release-dafc1ee4 bash[969]: [2024-09-25T14:00:44Z INFO  confidential_data_hub::config] Use configuration file /run/peerpod/cdh.toml
Sep 25 14:00:44 podvm-kbs-key-release-dafc1ee4 bash[969]: [2024-09-25T14:00:44Z INFO  kms::plugins::kbs::cc_kbc] Use KBS public key cert
Sep 25 14:00:44 podvm-kbs-key-release-dafc1ee4 bash[969]: Error: init Hub failed: kbs client creation failed: Kbs client error: create kbs client failed: builder error
Sep 25 14:00:44 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Main process exited, code=exited, status=1/FAILURE
Sep 25 14:00:44 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Failed with result 'exit-code'.
Sep 25 14:00:46 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Scheduled restart job, restart counter is at 2.
Sep 25 14:00:46 podvm-kbs-key-release-dafc1ee4 systemd[1]: Stopped Confidential Data Hub TTRPC API Server.
Sep 25 14:00:46 podvm-kbs-key-release-dafc1ee4 systemd[1]: Started Confidential Data Hub TTRPC API Server.
Sep 25 14:00:46 podvm-kbs-key-release-dafc1ee4 bash[983]: [2024-09-25T14:00:46Z INFO  confidential_data_hub::config] Use configuration file /run/peerpod/cdh.toml
Sep 25 14:00:46 podvm-kbs-key-release-dafc1ee4 bash[983]: [2024-09-25T14:00:46Z INFO  kms::plugins::kbs::cc_kbc] Use KBS public key cert
Sep 25 14:00:46 podvm-kbs-key-release-dafc1ee4 bash[983]: Error: init Hub failed: kbs client creation failed: Kbs client error: create kbs client failed: builder error
Sep 25 14:00:46 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Main process exited, code=exited, status=1/FAILURE
Sep 25 14:00:46 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Failed with result 'exit-code'.
Sep 25 14:00:47 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Scheduled restart job, restart counter is at 3.
Sep 25 14:00:47 podvm-kbs-key-release-dafc1ee4 systemd[1]: Stopped Confidential Data Hub TTRPC API Server.
Sep 25 14:00:47 podvm-kbs-key-release-dafc1ee4 systemd[1]: Started Confidential Data Hub TTRPC API Server.
Sep 25 14:00:47 podvm-kbs-key-release-dafc1ee4 bash[988]: [2024-09-25T14:00:47Z INFO  confidential_data_hub::config] Use configuration file /run/peerpod/cdh.toml
Sep 25 14:00:47 podvm-kbs-key-release-dafc1ee4 bash[988]: [2024-09-25T14:00:47Z INFO  kms::plugins::kbs::cc_kbc] Use KBS public key cert
Sep 25 14:00:47 podvm-kbs-key-release-dafc1ee4 bash[988]: Error: init Hub failed: kbs client creation failed: Kbs client error: create kbs client failed: builder error
Sep 25 14:00:47 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Main process exited, code=exited, status=1/FAILURE
Sep 25 14:00:47 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Failed with result 'exit-code'.
Sep 25 14:00:48 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Scheduled restart job, restart counter is at 4.
Sep 25 14:00:48 podvm-kbs-key-release-dafc1ee4 systemd[1]: Stopped Confidential Data Hub TTRPC API Server.
Sep 25 14:00:48 podvm-kbs-key-release-dafc1ee4 systemd[1]: Started Confidential Data Hub TTRPC API Server.
Sep 25 14:00:48 podvm-kbs-key-release-dafc1ee4 bash[993]: [2024-09-25T14:00:48Z INFO  confidential_data_hub::config] Use configuration file /run/peerpod/cdh.toml
Sep 25 14:00:48 podvm-kbs-key-release-dafc1ee4 bash[993]: [2024-09-25T14:00:48Z INFO  kms::plugins::kbs::cc_kbc] Use KBS public key cert
Sep 25 14:00:48 podvm-kbs-key-release-dafc1ee4 bash[993]: Error: init Hub failed: kbs client creation failed: Kbs client error: create kbs client failed: builder error
Sep 25 14:00:48 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Main process exited, code=exited, status=1/FAILURE
Sep 25 14:00:48 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Failed with result 'exit-code'.
Sep 25 14:00:49 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Scheduled restart job, restart counter is at 5.
Sep 25 14:00:49 podvm-kbs-key-release-dafc1ee4 systemd[1]: Stopped Confidential Data Hub TTRPC API Server.
Sep 25 14:00:49 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Start request repeated too quickly.
Sep 25 14:00:49 podvm-kbs-key-release-dafc1ee4 systemd[1]: confidential-data-hub.service: Failed with result 'exit-code'.
Sep 25 14:00:49 podvm-kbs-key-release-dafc1ee4 systemd[1]: Failed to start Confidential Data Hub TTRPC API Server.
stevenhorsman commented 2 months ago

I've tried about 10 different combinations of settings in /run/peerpod/cdh.toml (including some from the cdh config unit tests) and couldn't get any of them to work, so I've messaged Ding for advice, which hopefully will get us back on track.
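For reference, this is roughly the shape of config being iterated on. The keys below are purely illustrative assumptions based on the log lines above (the config file path and the cc_kbc/KBS-cert mentions), not a known-good schema - the exact field names vary between guest-components versions, which is exactly the area that was failing:

```toml
# Hypothetical sketch of a /run/peerpod/cdh.toml - field names are assumptions
socket = "unix:///run/confidential-containers/cdh.sock"

[kbc]
name = "cc_kbc"
url = "http://<kbs-host>:<kbs-port>"  # placeholder, not a real endpoint
```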

stevenhorsman commented 2 months ago

All the libvirt tests worked manually for me:

time="2024-09-26T17:49:16Z" level=info msg="Deploying kbs"
time="2024-09-26T17:49:16Z" level=info msg="creating key.bin"
time="2024-09-26T17:49:16Z" level=info msg="Creating kbs install overlay"
time="2024-09-26T17:49:16Z" level=info msg="Customize the overlay yaml file"
time="2024-09-26T17:49:16Z" level=info msg="Updating kbs image with \"ghcr.io/confidential-containers/key-broker-service\""
time="2024-09-26T17:49:16Z" level=info msg="Updating kbs image tag with \"built-in-as-v0.10.1\""
time="2024-09-26T17:49:16Z" level=info msg="Creating kbs install overlay"
time="2024-09-26T17:49:16Z" level=info msg="Install Kbs"
Wait for the kbs deployment be available
time="2024-09-26T17:49:36Z" level=info msg="kbsEndpoint: http://192.168.122.122:32466"
time="2024-09-26T17:49:36Z" level=info msg="Podvm uploading"
time="2024-09-26T17:49:36Z" level=trace msg="UploadPodvm()"
time="2024-09-26T17:49:38Z" level=info msg="Install Cloud API Adaptor"
time="2024-09-26T17:49:38Z" level=info msg="Deploy the Cloud API Adaptor"
time="2024-09-26T17:49:38Z" level=info msg="Install the controller manager"
time="2024-09-26T17:49:40Z" level=trace msg="/usr/local/bin/kubectl apply -k https://github.com/confidential-containers/operator/config/default?ref=main, output: namespace/confidential-containers-system created\ncustomresourcedefinition.apiextensions.k8s.io/ccruntimes.confidentialcontainers.org created\nserviceaccount/cc-operator-controller-manager created\nrole.rbac.authorization.k8s.io/cc-operator-leader-election-role created\nclusterrole.rbac.authorization.k8s.io/cc-operator-manager-role created\nclusterrole.rbac.authorization.k8s.io/cc-operator-metrics-reader created\nclusterrole.rbac.authorization.k8s.io/cc-operator-proxy-role created\nrolebinding.rbac.authorization.k8s.io/cc-operator-leader-election-rolebinding created\nclusterrolebinding.rbac.authorization.k8s.io/cc-operator-manager-rolebinding created\nclusterrolebinding.rbac.authorization.k8s.io/cc-operator-proxy-rolebinding created\nconfigmap/cc-operator-manager-config created\nservice/cc-operator-controller-manager-metrics-service created\ndeployment.apps/cc-operator-controller-manager created\n"
Wait for the cc-operator-controller-manager deployment be available
time="2024-09-26T17:50:45Z" level=info msg="Customize the overlay yaml file"
time="2024-09-26T17:50:46Z" level=trace msg="/usr/local/bin/kubectl apply -k https://github.com/confidential-containers/operator/config/samples/ccruntime/peer-pods?ref=main, output: ccruntime.confidentialcontainers.org/ccruntime-peer-pods created\n"
time="2024-09-26T17:50:46Z" level=info msg="Install the cloud-api-adaptor"
Wait for the cloud-api-adaptor-daemonset DaemonSet be available
Wait for the pod cloud-api-adaptor-daemonset-h7z7h be ready
Wait for the cc-operator-daemon-install DaemonSet be available
Wait for the pod cc-operator-daemon-install-m5sbh be ready
Wait for the kata-remote runtimeclass be created
time="2024-09-26T17:53:08Z" level=info msg="Installing peerpod-ctrl"
time="2024-09-26T17:53:11Z" level=trace msg="/usr/bin/make -C ../peerpod-ctrl deploy, output: make[1]: Entering directory '/root/go/src/github.com/confidential-containers/cloud-api-adaptor/src/peerpod-ctrl'\ntest -s /root/go/src/github.com/confidential-containers/cloud-api-adaptor/src/peerpod-ctrl/bin/controller-gen || GOBIN=/root/go/src/github.com/confidential-containers/cloud-api-adaptor/src/peerpod-ctrl/bin go install sigs.k8s.io/controller-tools/cmd/controller-gen@v0.14.0\n/root/go/src/github.com/confidential-containers/cloud-api-adaptor/src/peerpod-ctrl/bin/controller-gen rbac:roleName=manager-role crd webhook paths=\"./...\" output:crd:artifacts:config=config/crd/bases\ncd config/manager && /root/go/src/github.com/confidential-containers/cloud-api-adaptor/src/peerpod-ctrl/bin/kustomize edit set image controller=quay.io/confidential-containers/peerpod-ctrl:latest\n/root/go/src/github.com/confidential-containers/cloud-api-adaptor/src/peerpod-ctrl/bin/kustomize build config/default | kubectl apply -f -\n# Warning: 'patchesStrategicMerge' is deprecated. Please use 'patches' instead. 
Run 'kustomize edit fix' to update your Kustomization automatically.\nnamespace/confidential-containers-system configured\ncustomresourcedefinition.apiextensions.k8s.io/peerpods.confidentialcontainers.org created\nserviceaccount/peerpod-ctrl-controller-manager created\nrole.rbac.authorization.k8s.io/peerpod-ctrl-leader-election-role created\nclusterrole.rbac.authorization.k8s.io/peerpod-ctrl-manager-role created\nclusterrole.rbac.authorization.k8s.io/peerpod-ctrl-metrics-reader created\nclusterrole.rbac.authorization.k8s.io/peerpod-ctrl-proxy-role created\nrolebinding.rbac.authorization.k8s.io/peerpod-ctrl-leader-election-rolebinding created\nclusterrolebinding.rbac.authorization.k8s.io/peerpod-ctrl-manager-rolebinding created\nclusterrolebinding.rbac.authorization.k8s.io/peerpod-ctrl-proxy-rolebinding created\nservice/peerpod-ctrl-controller-manager-metrics-service created\ndeployment.apps/peerpod-ctrl-controller-manager created\nmake[1]: Leaving directory '/root/go/src/github.com/confidential-containers/cloud-api-adaptor/src/peerpod-ctrl'\n"
time="2024-09-26T17:53:11Z" level=info msg="Wait for the peerpod-ctrl deployment to be available"
time="2024-09-26T17:53:46Z" level=info msg="Creating namespace 'coco-pp-e2e-test-f525ce84'..."
time="2024-09-26T17:53:46Z" level=info msg="Wait for namespace 'coco-pp-e2e-test-f525ce84' be ready..."
time="2024-09-26T17:53:51Z" level=info msg="Wait for default serviceaccount in namespace 'coco-pp-e2e-test-f525ce84'..."
time="2024-09-26T17:53:51Z" level=info msg="default serviceAccount exists, namespace 'coco-pp-e2e-test-f525ce84' is ready for use"
=== RUN   TestLibvirtCreateSimplePod
=== RUN   TestLibvirtCreateSimplePod/SimplePeerPod_test
    assessment_runner.go:264: Waiting for containers in pod: simple-test are ready
=== RUN   TestLibvirtCreateSimplePod/SimplePeerPod_test/PodVM_is_created
time="2024-09-26T17:55:31Z" level=trace msg="Test pod running on node peer-pods-worker-0"
=== NAME  TestLibvirtCreateSimplePod/SimplePeerPod_test
    assessment_runner.go:617: Deleting pod simple-test...
    assessment_runner.go:624: Pod simple-test has been successfully deleted within 60s
--- PASS: TestLibvirtCreateSimplePod (105.06s)
    --- PASS: TestLibvirtCreateSimplePod/SimplePeerPod_test (105.06s)
        --- PASS: TestLibvirtCreateSimplePod/SimplePeerPod_test/PodVM_is_created (0.03s)
=== RUN   TestLibvirtCreatePodWithConfigMap
=== RUN   TestLibvirtCreatePodWithConfigMap/ConfigMapPeerPod_test
    assessment_runner.go:264: Waiting for containers in pod: busybox-configmap-pod are ready
=== RUN   TestLibvirtCreatePodWithConfigMap/ConfigMapPeerPod_test/Configmap_is_created_and_contains_data
    assessment_runner.go:435: Output when execute test commands:
=== NAME  TestLibvirtCreatePodWithConfigMap/ConfigMapPeerPod_test
    assessment_runner.go:560: Deleting Configmap... busybox-configmap
    assessment_runner.go:617: Deleting pod busybox-configmap-pod...
    assessment_runner.go:624: Pod busybox-configmap-pod has been successfully deleted within 60s
--- PASS: TestLibvirtCreatePodWithConfigMap (110.14s)
    --- PASS: TestLibvirtCreatePodWithConfigMap/ConfigMapPeerPod_test (110.14s)
        --- PASS: TestLibvirtCreatePodWithConfigMap/ConfigMapPeerPod_test/Configmap_is_created_and_contains_data (5.09s)
=== RUN   TestLibvirtCreatePodWithSecret
=== RUN   TestLibvirtCreatePodWithSecret/SecretPeerPod_test
    assessment_runner.go:264: Waiting for containers in pod: busybox-secret-pod are ready
=== RUN   TestLibvirtCreatePodWithSecret/SecretPeerPod_test/Secret_has_been_created_and_contains_data
    assessment_runner.go:435: Output when execute test commands:
=== NAME  TestLibvirtCreatePodWithSecret/SecretPeerPod_test
    assessment_runner.go:567: Deleting Secret... busybox-secret
    assessment_runner.go:617: Deleting pod busybox-secret-pod...
    assessment_runner.go:624: Pod busybox-secret-pod has been successfully deleted within 60s
--- PASS: TestLibvirtCreatePodWithSecret (110.15s)
    --- PASS: TestLibvirtCreatePodWithSecret/SecretPeerPod_test (110.15s)
        --- PASS: TestLibvirtCreatePodWithSecret/SecretPeerPod_test/Secret_has_been_created_and_contains_data (5.11s)
=== RUN   TestLibvirtCreatePeerPodContainerWithExternalIPAccess
=== RUN   TestLibvirtCreatePeerPodContainerWithExternalIPAccess/IPAccessPeerPod_test
    assessment_runner.go:264: Waiting for containers in pod: busybox-priv are ready
=== RUN   TestLibvirtCreatePeerPodContainerWithExternalIPAccess/IPAccessPeerPod_test/Peer_Pod_Container_Connected_to_External_IP
    assessment_runner.go:435: Output when execute test commands:
=== NAME  TestLibvirtCreatePeerPodContainerWithExternalIPAccess/IPAccessPeerPod_test
    assessment_runner.go:617: Deleting pod busybox-priv...
    assessment_runner.go:624: Pod busybox-priv has been successfully deleted within 60s
--- PASS: TestLibvirtCreatePeerPodContainerWithExternalIPAccess (110.14s)
    --- PASS: TestLibvirtCreatePeerPodContainerWithExternalIPAccess/IPAccessPeerPod_test (110.14s)
        --- PASS: TestLibvirtCreatePeerPodContainerWithExternalIPAccess/IPAccessPeerPod_test/Peer_Pod_Container_Connected_to_External_IP (5.10s)
=== RUN   TestLibvirtCreatePeerPodWithJob
=== RUN   TestLibvirtCreatePeerPodWithJob/JobPeerPod_test
=== RUN   TestLibvirtCreatePeerPodWithJob/JobPeerPod_test/Job_has_been_created
    assessment_helpers.go:292: SUCCESS: job-pi-jzwff - Completed - LOG: 3.14156
    assessment_runner.go:336: Output Log from Pod: 3.14156
=== NAME  TestLibvirtCreatePeerPodWithJob/JobPeerPod_test
    assessment_runner.go:600: Deleting Job... job-pi
    assessment_runner.go:607: Deleting pods created by job... job-pi-jzwff
--- PASS: TestLibvirtCreatePeerPodWithJob (95.05s)
    --- PASS: TestLibvirtCreatePeerPodWithJob/JobPeerPod_test (95.05s)
        --- PASS: TestLibvirtCreatePeerPodWithJob/JobPeerPod_test/Job_has_been_created (0.02s)
=== RUN   TestLibvirtCreatePeerPodAndCheckUserLogs
    common_suite.go:165: Skipping Test until issue kata-containers/kata-containers#5732 is Fixed
--- SKIP: TestLibvirtCreatePeerPodAndCheckUserLogs (0.00s)
=== RUN   TestLibvirtCreatePeerPodAndCheckWorkDirLogs
=== RUN   TestLibvirtCreatePeerPodAndCheckWorkDirLogs/WorkDirPeerPod_test
=== RUN   TestLibvirtCreatePeerPodAndCheckWorkDirLogs/WorkDirPeerPod_test/Peer_pod_with_work_directory_has_been_created
    assessment_runner.go:366: Log output of peer pod:/other
=== NAME  TestLibvirtCreatePeerPodAndCheckWorkDirLogs/WorkDirPeerPod_test
    assessment_runner.go:617: Deleting pod workdirpod...
    assessment_runner.go:624: Pod workdirpod has been successfully deleted within 60s
--- PASS: TestLibvirtCreatePeerPodAndCheckWorkDirLogs (105.06s)
    --- PASS: TestLibvirtCreatePeerPodAndCheckWorkDirLogs/WorkDirPeerPod_test (105.06s)
        --- PASS: TestLibvirtCreatePeerPodAndCheckWorkDirLogs/WorkDirPeerPod_test/Peer_pod_with_work_directory_has_been_created (5.02s)
=== RUN   TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageOnly
=== RUN   TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageOnly/EnvVariablePeerPodWithImageOnly_test
=== RUN   TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageOnly/EnvVariablePeerPodWithImageOnly_test/Peer_pod_with_environmental_variables_has_been_created
    assessment_runner.go:366: Log output of peer pod:KUBERNETES_SERVICE_PORT=443
        KUBERNETES_PORT=tcp://10.96.0.1:443
        HOSTNAME=env-variable-in-image
        SHLVL=1
        HOME=/root
        KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        KUBERNETES_PORT_443_TCP_PORT=443
        KUBERNETES_PORT_443_TCP_PROTO=tcp
        KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
        KUBERNETES_SERVICE_PORT_HTTPS=443
        ISPRODUCTION=false
        KUBERNETES_SERVICE_HOST=10.96.0.1
        PWD=/
=== NAME  TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageOnly/EnvVariablePeerPodWithImageOnly_test
    assessment_runner.go:617: Deleting pod env-variable-in-image...
    assessment_runner.go:624: Pod env-variable-in-image has been successfully deleted within 60s
--- PASS: TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageOnly (105.05s)
    --- PASS: TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageOnly/EnvVariablePeerPodWithImageOnly_test (105.05s)
        --- PASS: TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageOnly/EnvVariablePeerPodWithImageOnly_test/Peer_pod_with_environmental_variables_has_been_created (5.02s)
=== RUN   TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithDeploymentOnly
=== RUN   TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithDeploymentOnly/EnvVariablePeerPodWithDeploymentOnly_test
=== RUN   TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithDeploymentOnly/EnvVariablePeerPodWithDeploymentOnly_test/Peer_pod_with_environmental_variables_has_been_created
    assessment_runner.go:366: Log output of peer pod:KUBERNETES_PORT=tcp://10.96.0.1:443
        KUBERNETES_SERVICE_PORT=443
        HOSTNAME=env-variable-in-config
        SHLVL=1
        HOME=/root
        KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        KUBERNETES_PORT_443_TCP_PORT=443
        KUBERNETES_PORT_443_TCP_PROTO=tcp
        KUBERNETES_SERVICE_PORT_HTTPS=443
        KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
        ISPRODUCTION=true
        KUBERNETES_SERVICE_HOST=10.96.0.1
        PWD=/
=== NAME  TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithDeploymentOnly/EnvVariablePeerPodWithDeploymentOnly_test
    assessment_runner.go:617: Deleting pod env-variable-in-config...
    assessment_runner.go:624: Pod env-variable-in-config has been successfully deleted within 60s
--- PASS: TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithDeploymentOnly (105.06s)
    --- PASS: TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithDeploymentOnly/EnvVariablePeerPodWithDeploymentOnly_test (105.06s)
        --- PASS: TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithDeploymentOnly/EnvVariablePeerPodWithDeploymentOnly_test/Peer_pod_with_environmental_variables_has_been_created (5.02s)
=== RUN   TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageAndDeployment
=== RUN   TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageAndDeployment/EnvVariablePeerPodWithBoth_test
=== RUN   TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageAndDeployment/EnvVariablePeerPodWithBoth_test/Peer_pod_with_environmental_variables_has_been_created
    assessment_runner.go:366: Log output of peer pod:KUBERNETES_SERVICE_PORT=443
        KUBERNETES_PORT=tcp://10.96.0.1:443
        HOSTNAME=env-variable-in-both
        SHLVL=1
        HOME=/root
        KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
        KUBERNETES_PORT_443_TCP_PORT=443
        KUBERNETES_PORT_443_TCP_PROTO=tcp
        KUBERNETES_SERVICE_PORT_HTTPS=443
        KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443
        ISPRODUCTION=true
        KUBERNETES_SERVICE_HOST=10.96.0.1
        PWD=/
=== NAME  TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageAndDeployment/EnvVariablePeerPodWithBoth_test
    assessment_runner.go:617: Deleting pod env-variable-in-both...
    assessment_runner.go:624: Pod env-variable-in-both has been successfully deleted within 60s
--- PASS: TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageAndDeployment (105.05s)
    --- PASS: TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageAndDeployment/EnvVariablePeerPodWithBoth_test (105.05s)
        --- PASS: TestLibvirtCreatePeerPodAndCheckEnvVariableLogsWithImageAndDeployment/EnvVariablePeerPodWithBoth_test/Peer_pod_with_environmental_variables_has_been_created (5.02s)
=== RUN   TestLibvirtCreateNginxDeployment
=== RUN   TestLibvirtCreateNginxDeployment/Nginx_image_deployment_test
    nginx_deployment.go:106: Creating nginx deployment...
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 0
    nginx_deployment.go:163: Current deployment available replicas: 2
    nginx_deployment.go:111: nginx deployment is available now
=== RUN   TestLibvirtCreateNginxDeployment/Nginx_image_deployment_test/Access_for_nginx_deployment_test
=== NAME  TestLibvirtCreateNginxDeployment/Nginx_image_deployment_test
    nginx_deployment.go:136: Deleting webserver deployment...
    nginx_deployment.go:141: Deleting deployment nginx-deployment...
    nginx_deployment.go:148: Deployment nginx-deployment has been successfully deleted within 120s
--- PASS: TestLibvirtCreateNginxDeployment (135.03s)
    --- PASS: TestLibvirtCreateNginxDeployment/Nginx_image_deployment_test (135.03s)
        --- PASS: TestLibvirtCreateNginxDeployment/Nginx_image_deployment_test/Access_for_nginx_deployment_test (0.00s)
=== RUN   TestLibvirtDeletePod
=== RUN   TestLibvirtDeletePod/DeletePod_test
    assessment_runner.go:264: Waiting for containers in pod: deletion-test are ready
=== RUN   TestLibvirtDeletePod/DeletePod_test/Deletion_complete
=== NAME  TestLibvirtDeletePod/DeletePod_test
    assessment_runner.go:617: Deleting pod deletion-test...
    assessment_runner.go:624: Pod deletion-test has been successfully deleted within 60s
--- PASS: TestLibvirtDeletePod (105.04s)
    --- PASS: TestLibvirtDeletePod/DeletePod_test (105.04s)
        --- PASS: TestLibvirtDeletePod/DeletePod_test/Deletion_complete (0.00s)
=== RUN   TestLibvirtPodToServiceCommunication
=== RUN   TestLibvirtPodToServiceCommunication/TestExtraPods_test
    assessment_runner.go:264: Waiting for containers in pod: test-server are ready
    assessment_runner.go:297: webserver service is available on cluster IP: 10.102.170.65
    assessment_runner.go:301: Provision extra pod test-client
    assessment_helpers.go:385: Waiting for containers in pod: test-client are ready
=== RUN   TestLibvirtPodToServiceCommunication/TestExtraPods_test/Failed_to_test_extra_pod.
=== NAME  TestLibvirtPodToServiceCommunication
    common_suite.go:428: Success to access nginx service. <!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx!</title>
        <style>
        html { color-scheme: light dark; }
        body { width: 35em; margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif; }
        </style>
        </head>
        <body>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>

        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>

        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>
=== NAME  TestLibvirtPodToServiceCommunication/TestExtraPods_test/Failed_to_test_extra_pod.
    assessment_runner.go:532: Output when execute test commands:<!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx!</title>
        <style>
        html { color-scheme: light dark; }
        body { width: 35em; margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif; }
        </style>
        </head>
        <body>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>

        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>

        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>
=== NAME  TestLibvirtPodToServiceCommunication/TestExtraPods_test
    assessment_runner.go:617: Deleting pod test-server...
    assessment_runner.go:624: Pod test-server has been successfully deleted within 60s
    assessment_runner.go:630: Deleting pod test-client...
    assessment_runner.go:636: Pod test-client has been successfully deleted within 60s
    assessment_runner.go:652: Deleting Service... nginx-server
--- PASS: TestLibvirtPodToServiceCommunication (235.20s)
    --- PASS: TestLibvirtPodToServiceCommunication/TestExtraPods_test (235.20s)
        --- PASS: TestLibvirtPodToServiceCommunication/TestExtraPods_test/Failed_to_test_extra_pod. (5.10s)
=== RUN   TestLibvirtPodsMTLSCommunication
=== RUN   TestLibvirtPodsMTLSCommunication/TestPodsMTLSCommunication_test
    assessment_runner.go:264: Waiting for containers in pod: mtls-server are ready
    assessment_runner.go:297: webserver service is available on cluster IP: 10.111.67.16
    assessment_runner.go:301: Provision extra pod mtls-client
    assessment_helpers.go:385: Waiting for containers in pod: mtls-client are ready
=== RUN   TestLibvirtPodsMTLSCommunication/TestPodsMTLSCommunication_test/Pods_communication_with_mTLS
=== NAME  TestLibvirtPodsMTLSCommunication
    common_suite.go:522: Success to access nginx service. <!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx!</title>
        <style>
        html { color-scheme: light dark; }
        body { width: 35em; margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif; }
        </style>
        </head>
        <body>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>

        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>

        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>
=== NAME  TestLibvirtPodsMTLSCommunication/TestPodsMTLSCommunication_test/Pods_communication_with_mTLS
    assessment_runner.go:532: Output when execute test commands:<!DOCTYPE html>
        <html>
        <head>
        <title>Welcome to nginx!</title>
        <style>
        html { color-scheme: light dark; }
        body { width: 35em; margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif; }
        </style>
        </head>
        <body>
        <h1>Welcome to nginx!</h1>
        <p>If you see this page, the nginx web server is successfully installed and
        working. Further configuration is required.</p>

        <p>For online documentation and support please refer to
        <a href="http://nginx.org/">nginx.org</a>.<br/>
        Commercial support is available at
        <a href="http://nginx.com/">nginx.com</a>.</p>

        <p><em>Thank you for using nginx.</em></p>
        </body>
        </html>
=== NAME  TestLibvirtPodsMTLSCommunication/TestPodsMTLSCommunication_test
    assessment_runner.go:560: Deleting Configmap... nginx-conf
    assessment_runner.go:567: Deleting Secret... server-certs
    assessment_runner.go:586: Deleting extra Secret... curl-certs
    assessment_runner.go:617: Deleting pod mtls-server...
    assessment_runner.go:624: Pod mtls-server has been successfully deleted within 60s
    assessment_runner.go:630: Deleting pod mtls-client...
    assessment_runner.go:636: Pod mtls-client has been successfully deleted within 60s
    assessment_runner.go:652: Deleting Service... nginx-mtls
--- PASS: TestLibvirtPodsMTLSCommunication (225.27s)
    --- PASS: TestLibvirtPodsMTLSCommunication/TestPodsMTLSCommunication_test (225.27s)
        --- PASS: TestLibvirtPodsMTLSCommunication/TestPodsMTLSCommunication_test/Pods_communication_with_mTLS (5.12s)
=== RUN   TestLibvirtKbsKeyRelease
time="2024-09-26T18:21:22Z" level=info msg="set key resource: ../../kbs/config/kubernetes/overlays/x86_64/key.bin"
time="2024-09-26T18:21:22Z" level=trace msg="./kbs-client --url http://192.168.122.122:32466 config --auth-private-key ../../kbs/config/kubernetes/base/kbs.key set-resource --path reponame/workload_key/key.bin --resource-file ../../kbs/config/kubernetes/overlays/x86_64/key.bin, output: Error: Request Failed, Response: \"{\\\"type\\\":\\\"https://github.com/confidential-containers/kbs/errors/SetSecretFailed\\\",\\\"detail\\\":\\\"Set secret failed: write local fs\\\"}\"\n"
time="2024-09-26T18:21:22Z" level=info msg="EnableKbsCustomizedPolicy: ../../kbs/sample_policies/allow_all.rego"
time="2024-09-26T18:21:22Z" level=trace msg="./kbs-client --url http://192.168.122.122:32466 config --auth-private-key ../../kbs/config/kubernetes/base/kbs.key set-resource-policy --policy-file ../../kbs/sample_policies/allow_all.rego, output: Set resource policy success \n policy: CnBhY2thZ2UgcG9saWN5CgpkZWZhdWx0IGFsbG93ID0gdHJ1ZQoK\n"
time="2024-09-26T18:21:22Z" level=info msg="EnableKbsCustomizedPolicy: ../../kbs/sample_policies/deny_all.rego"
time="2024-09-26T18:21:22Z" level=trace msg="./kbs-client --url http://192.168.122.122:32466 config --auth-private-key ../../kbs/config/kubernetes/base/kbs.key set-attestation-policy --policy-file ../../kbs/sample_policies/deny_all.rego, output: Set attestation policy success \n policy: CnBhY2thZ2UgcG9saWN5CgpkZWZhdWx0IGFsbG93ID0gZmFsc2UK\n"
=== PAUSE TestLibvirtKbsKeyRelease
=== RUN   TestLibvirtRestrictivePolicyBlocksExec
=== RUN   TestLibvirtRestrictivePolicyBlocksExec/PodVMwithPolicyBlockingExec_test
    assessment_runner.go:264: Waiting for containers in pod: policy-exec-rejected are ready
=== RUN   TestLibvirtRestrictivePolicyBlocksExec/PodVMwithPolicyBlockingExec_test/Pod_which_blocks_Exec_Process
=== NAME  TestLibvirtRestrictivePolicyBlocksExec
    common_suite.go:636: Exec process was blocked Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "c45bbc5d05b899e67b2f44a79a6cc3f35bac14af607cb8619c3c5212d8a4af78": cannot enter container 6e1ca9a0ebcb2a181384757fb70de5c31f4deeb88778580cca21889803dec7dd, with err rpc error: code = PermissionDenied desc = "ExecProcessRequest is blocked by policy: ": unknown
=== NAME  TestLibvirtRestrictivePolicyBlocksExec/PodVMwithPolicyBlockingExec_test/Pod_which_blocks_Exec_Process
    assessment_runner.go:435: Output when execute test commands:
=== NAME  TestLibvirtRestrictivePolicyBlocksExec/PodVMwithPolicyBlockingExec_test
    assessment_runner.go:617: Deleting pod policy-exec-rejected...
    assessment_runner.go:624: Pod policy-exec-rejected has been successfully deleted within 60s
--- PASS: TestLibvirtRestrictivePolicyBlocksExec (110.09s)
    --- PASS: TestLibvirtRestrictivePolicyBlocksExec/PodVMwithPolicyBlockingExec_test (110.09s)
        --- PASS: TestLibvirtRestrictivePolicyBlocksExec/PodVMwithPolicyBlockingExec_test/Pod_which_blocks_Exec_Process (5.05s)
=== RUN   TestLibvirtPermissivePolicyAllowsExec
=== RUN   TestLibvirtPermissivePolicyAllowsExec/PodVMwithPermissivePolicy_test
    assessment_runner.go:264: Waiting for containers in pod: policy-all-allowed are ready
=== RUN   TestLibvirtPermissivePolicyAllowsExec/PodVMwithPermissivePolicy_test/Pod_which_allows_all_kata_agent_APIs
    assessment_runner.go:435: Output when execute test commands:
=== NAME  TestLibvirtPermissivePolicyAllowsExec/PodVMwithPermissivePolicy_test
    assessment_runner.go:617: Deleting pod policy-all-allowed...
    assessment_runner.go:624: Pod policy-all-allowed has been successfully deleted within 60s
--- PASS: TestLibvirtPermissivePolicyAllowsExec (110.15s)
    --- PASS: TestLibvirtPermissivePolicyAllowsExec/PodVMwithPermissivePolicy_test (110.15s)
        --- PASS: TestLibvirtPermissivePolicyAllowsExec/PodVMwithPermissivePolicy_test/Pod_which_allows_all_kata_agent_APIs (5.11s)
=== RUN   TestLibvirtCreatePeerPodWithAuthenticatedImageWithoutCredentials
    libvirt_test.go:152: Authenticated Image Name not exported
--- SKIP: TestLibvirtCreatePeerPodWithAuthenticatedImageWithoutCredentials (0.00s)
=== RUN   TestLibvirtCreatePeerPodWithAuthenticatedImageWithValidCredentials
    libvirt_test.go:161: Registry Credentials, or authenticated image name not exported
--- SKIP: TestLibvirtCreatePeerPodWithAuthenticatedImageWithValidCredentials (0.00s)
=== CONT  TestLibvirtKbsKeyRelease
    common_suite.go:573: Do test kbs key release failure case
=== RUN   TestLibvirtKbsKeyRelease/DoTestKbsKeyReleaseForFailure_test
    assessment_runner.go:264: Waiting for containers in pod: kbs-failure are ready
=== RUN   TestLibvirtKbsKeyRelease/DoTestKbsKeyReleaseForFailure_test/Kbs_key_release_is_failed
=== NAME  TestLibvirtKbsKeyRelease
    common_suite.go:592: PASS as failed to access key.bin:
=== NAME  TestLibvirtKbsKeyRelease/DoTestKbsKeyReleaseForFailure_test/Kbs_key_release_is_failed
    assessment_runner.go:435: Output when execute test commands:
=== NAME  TestLibvirtKbsKeyRelease/DoTestKbsKeyReleaseForFailure_test
    assessment_runner.go:617: Deleting pod kbs-failure...
    assessment_runner.go:624: Pod kbs-failure has been successfully deleted within 60s
=== NAME  TestLibvirtKbsKeyRelease
    libvirt_test.go:131: KBS normal cases
time="2024-09-26T18:26:52Z" level=info msg="EnableKbsCustomizedPolicy: ../../kbs/sample_policies/allow_all.rego"
time="2024-09-26T18:26:52Z" level=trace msg="./kbs-client --url http://192.168.122.122:32466 config --auth-private-key ../../kbs/config/kubernetes/base/kbs.key set-attestation-policy --policy-file ../../kbs/sample_policies/allow_all.rego, output: Set attestation policy success \n policy: CnBhY2thZ2UgcG9saWN5CgpkZWZhdWx0IGFsbG93ID0gdHJ1ZQoK\n"
    common_suite.go:549: Do test kbs key release
=== RUN   TestLibvirtKbsKeyRelease/KbsKeyReleasePod_test
    assessment_runner.go:264: Waiting for containers in pod: kbs-key-release are ready
=== RUN   TestLibvirtKbsKeyRelease/KbsKeyReleasePod_test/Kbs_key_release_is_successful
=== NAME  TestLibvirtKbsKeyRelease
    common_suite.go:557: Success to get key.bin: This is my cluster name:
=== NAME  TestLibvirtKbsKeyRelease/KbsKeyReleasePod_test/Kbs_key_release_is_successful
    assessment_runner.go:435: Output when execute test commands: This is my cluster name:
=== NAME  TestLibvirtKbsKeyRelease/KbsKeyReleasePod_test
    assessment_runner.go:617: Deleting pod kbs-key-release...
    assessment_runner.go:624: Pod kbs-key-release has been successfully deleted within 60s
--- PASS: TestLibvirtKbsKeyRelease (214.71s)
    --- PASS: TestLibvirtKbsKeyRelease/DoTestKbsKeyReleaseForFailure_test (109.26s)
        --- PASS: TestLibvirtKbsKeyRelease/DoTestKbsKeyReleaseForFailure_test/Kbs_key_release_is_failed (9.23s)
    --- PASS: TestLibvirtKbsKeyRelease/KbsKeyReleasePod_test (105.26s)
        --- PASS: TestLibvirtKbsKeyRelease/KbsKeyReleasePod_test/Kbs_key_release_is_successful (5.22s)
PASS
ok      github.com/confidential-containers/cloud-api-adaptor/src/cloud-api-adaptor/test/e2e 2745.866s

On the CI, TestLibvirtPodToServiceCommunication seems to be the issue that breaks the other tests, so I'll try skipping it to see if we can get a pass.