confidential-containers / enclave-cc

Process-based Confidential Container Runtime
Apache License 2.0

Attestation: Verifier evaluate failed: SGX Verifier: REPORT_DATA is different from that in SGX Quote #368

Closed niteeshkd closed 4 months ago

niteeshkd commented 4 months ago

When I try to create an enclave-cc pod in HW mode with an encrypted image that requires attestation, it fails with the following error.

$ kubectl describe pod enclave-cc-pod
...
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/enclave-cc-pod to b77r44u11-node
  Warning  Failed     11m                  kubelet            Failed to pull image "docker.io/niteeshkd/occlum-test:enc2": rpc error: code = Internal desc = Security validate failed: RCAR handshake failed: KBS attest unauthorized, Error Info: ErrorInformation { error_type: "https://github.com/confidential-containers/kbs/errors/AttestationFailed", detail: "Attestation failed: status: Aborted, message: \"Attestation: Verifier evaluate failed: SGX Verifier: REPORT_DATA is different from that in SGX Quote\", details: [], metadata: MetadataMap { headers: {\"content-type\": \"application/grpc\", \"date\": \"Thu, 07 Mar 2024 19:40:50 GMT\", \"content-length\": \"0\"} }" }

KBS log:

$ docker logs trustee-kbs-1
...
[2024-03-07T19:46:30Z INFO  actix_web::middleware::logger] 172.18.0.1 "POST /kbs/v0/attest HTTP/1.1" 401 406 "-" "attestation-agent-kbs-client/0.1.0" 0.034660
[2024-03-07T19:51:42Z INFO  api_server::http::resource] Get pkey from auth header
[2024-03-07T19:51:42Z INFO  actix_web::middleware::logger] 172.18.0.1 "GET /kbs/v0/resource/default/security-policy/test HTTP/1.1" 401 173 "-" "attestation-agent-kbs-client/0.1.0" 0.000337
[2024-03-07T19:51:42Z INFO  api_server::http::attest] request: Json(Request { version: "0.1.0", tee: Sgx, extra_params: "" })
[2024-03-07T19:51:42Z INFO  actix_web::middleware::logger] 172.18.0.1 "POST /kbs/v0/auth HTTP/1.1" 200 74 "-" "attestation-agent-kbs-client/0.1.0" 0.000170
[2024-03-07T19:51:42Z INFO  api_server::http::attest] Cookie 8e202b8ec1344f2a8d4e0c87f66f2a10 attestation Json(Attestation { tee_pubkey: TeePubKey { kty: "RSA", alg: "RSA1_5", k_mod: "tMPfNbkqhn0UkiM1XJBRzVsmxw_A-KJP7Zsd_havVdj1V_GOIr54dEx2c4bvz0J4QnNsblytJ04wM2WA2K1eOQm7TbiOmRdM9MCEPXXTCTCzh-51CqWB8bJnzT-ky9mqTUjgEmZoEjU_eb_vsCE0MKtqe9NnHK6Qc1YNhlfl9xkHfmLDob8egka5lY3JVED_RBdTbkicyCsRNbsn7dJTBLTCSvexDwDRKnDzRv7RES2o6Ys_6JGS2KVNSiZl1-HkUw2-sI39atLB4QY0eebA4jOKPdEl8Ph7Ghc6agXQP_erYbqEUQed89K32iR0CwjucsQc-ZDIVkKrDusPojCUyw", k_exp: "AQAB" }, tee_evidence: "{\"quote\":\"AwACAAAAAAAKAA....VJUSUZJQ0FURS0tLS0tCgA=\"}" })
[2024-03-07T19:51:42Z INFO  actix_web::middleware::logger] 172.18.0.1 "POST /kbs/v0/attest HTTP/1.1" 401 406 "-" "attestation-agent-kbs-client/0.1.0" 0.033967

AS log:

$ docker logs trustee-as-1
[2024-03-07T19:35:00Z INFO  grpc_as] CoCo AS: 
    v0.1.0
    commit: 
    buildtime: 2024-03-06 18:40:46 +00:00
[2024-03-07T19:35:00Z INFO  grpc_as::grpc] Listen socket: 0.0.0.0:50004
[2024-03-07T19:35:00Z INFO  attestation_service::rvps] connect to remote RVPS: http://rvps:50003
[2024-03-07T19:35:00Z INFO  attestation_service::token::simple] No Token Signer key in config file, create an ephemeral key and without CA pubkey cert
[2024-03-07T19:40:50Z WARN  verifier] The input REPORT_DATA of SGX is shorter than 64 bytes, will be padded with '\0'.
[2024-03-07T19:41:05Z WARN  verifier] The input REPORT_DATA of SGX is shorter than 64 bytes, will be padded with '\0'.
[2024-03-07T19:41:29Z WARN  verifier] The input REPORT_DATA of SGX is shorter than 64 bytes, will be padded with '\0'.
[2024-03-07T19:42:18Z WARN  verifier] The input REPORT_DATA of SGX is shorter than 64 bytes, will be padded with '\0'.
[2024-03-07T19:43:42Z WARN  verifier] The input REPORT_DATA of SGX is shorter than 64 bytes, will be padded with '\0'.
[2024-03-07T19:46:30Z WARN  verifier] The input REPORT_DATA of SGX is shorter than 64 bytes, will be padded with '\0'.
[2024-03-07T19:51:42Z WARN  verifier] The input REPORT_DATA of SGX is shorter than 64 bytes, will be padded with '\0'.

$ sudo cat /run/containerd/agent-enclave/<cid>/stderr
...
[2024-03-07T19:51:42Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-07T19:51:42Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-07T19:51:42Z WARN  kbs_protocol::client::rcar_client] Authenticating with KBS failed. Perform a new RCAR handshake: ErrorInformation {
        error_type: "https://github.com/confidential-containers/kbs/errors/InvalidRequest",
        detail: "The request is invalid: parse Authorization header failed: invalid Header provided",
    }
[2024-03-07T19:56:46Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-07T19:56:46Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-07T19:56:46Z WARN  kbs_protocol::client::rcar_client] Authenticating with KBS failed. Perform a new RCAR handshake: ErrorInformation {
        error_type: "https://github.com/confidential-containers/kbs/errors/InvalidRequest",
        detail: "The request is invalid: parse Authorization header failed: invalid Header provided",
    }

I used the main branch of the coco operator to deploy the enclave-cc runtime class.

@mythi @Xynnn007

mythi commented 4 months ago

I used the main branch of the coco operator to deploy the enclave-cc runtime class.

@niteeshkd I think this gives you the v0.8.0 "runtime payload" (enclave-cc installation), but the KBS/AS setup does not match that release. There were breaking changes on the KBS/AS side after v0.8.0, so it's probably best to install the "latest" runtime payload to get this to work with Trustee.
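Concretely, that would mean pointing the operator's ccruntime sample at the CI payload image instead of the v0.8.0 release tag, roughly like this (the runtime-payload-ci tag shown here is one example; verify the available tags on quay.io before use):

```yaml
# Sketch of the relevant ccruntime config fields; only payloadImage changes.
config:
  installType: bundle
  payloadImage: quay.io/confidential-containers/runtime-payload-ci:enclave-cc-HW-cc-kbc-latest
```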

niteeshkd commented 4 months ago

@mythi You are right: with the latest runtime payload, the above error disappears.

Now I am getting the following error while starting enclave-cc in HW mode.

$ kubectl describe pod enclave-cc-pod
...
    Command:
      /run/rune/boot_instance/build/bin/occlum-run
      /bin/hello_world
    State:          Waiting
      Reason:       ImagePullBackOff
    Last State:     Terminated
      Reason:       StartError
      Message:      failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/run/rune/boot_instance/build/bin/occlum-run": stat /run/rune/boot_instance/build/bin/occlum-run: no such file or directory: unknown
...   
    Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  2m                 default-scheduler  Successfully assigned default/enclave-cc-pod to b77r44u11-node
  Normal   Pulled     76s                kubelet            Successfully pulled image "docker.io/niteeshkd/occlum-test:enc2" in 43.273s (43.273s including waiting)
  Normal   Created    76s                kubelet            Created container hello-world
  Warning  Failed     76s                kubelet            Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/run/rune/boot_instance/build/bin/occlum-run": stat /run/rune/boot_instance/build/bin/occlum-run: no such file or directory: unknown
  Normal   BackOff    47s                kubelet            Back-off pulling image "docker.io/niteeshkd/occlum-test:enc2"
  Warning  Failed     47s                kubelet            Error: ImagePullBackOff
  Normal   Pulling    34s (x4 over 2m)   kubelet            Pulling image "docker.io/niteeshkd/occlum-test:enc2"
  Warning  Failed     34s (x3 over 75s)  kubelet            Failed to pull image "docker.io/niteeshkd/occlum-test:enc2": rpc error: code = Internal desc = failed to mount "sefs" to "/run/enclave-cc/containers/occlum-test_enc2/rootfs", with error: EIO: I/O error
  Warning  Failed     34s (x3 over 75s)  kubelet            Error: ErrImagePull
  Warning  BackOff    7s (x3 over 74s)   kubelet            Back-off restarting failed container hello-world in pod enclave-cc-pod_default(0f9bfba9-57ac-41e0-bb25-ef69f14c657a)

$ sudo journalctl -xeu containerd | tail -5
Mar 11 16:07:24 b77r44u11-node containerd[3063892]: time="2024-03-11T16:07:24.147019416Z" level=info msg="TaskManager get ImageService succeed." id=e5d510757f43820213cd11494c775ff571df3eff74e9b895a5148b6418bcbfaf
Mar 11 16:07:24 b77r44u11-node containerd[3063892]: time="2024-03-11T16:07:24.147774067Z" level=info msg="New client" source=agent_enclave_container url="tcp://127.0.0.1:7788"
Mar 11 16:07:24 b77r44u11-node containerd[3063892]: time="2024-03-11T16:07:24.909635703Z" level=error msg="agent enclave container pull image" error="rpc error: code = Internal desc = failed to mount \"sefs\" to \"/run/enclave-cc/containers/occlum-test_enc2/rootfs\", with error: EIO: I/O error" source=agent_enclave_container
Mar 11 16:07:24 b77r44u11-node containerd[3063892]: time="2024-03-11T16:07:24.909749099Z" level=error msg="rune runtime PullImage err. rpc error: code = Internal desc = failed to mount \"sefs\" to \"/run/enclave-cc/containers/occlum-test_enc2/rootfs\", with error: EIO: I/O error"
Mar 11 16:07:24 b77r44u11-node containerd[3063892]: time="2024-03-11T16:07:24.910018032Z" level=error msg="PullImage \"docker.io/niteeshkd/occlum-test:enc2\" failed" error="rpc error: code = Internal desc = failed to mount \"sefs\" to \"/run/enclave-cc/containers/occlum-test_enc2/rootfs\", with error: EIO: I/O error

$ sudo cat /run/containerd/agent-enclave/a9b06d3331debf8de387fd004be59dc6849bdba4e46aeb96027b02cd45694dc5/stderr
[2024-03-11T16:01:08Z INFO  enclave_agent] ttRPC server started: "tcp://127.0.0.1:7788"
[2024-03-11T16:01:08Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-11T16:01:08Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-11T16:01:09Z WARN  kbs_protocol::client::rcar_client] Authenticating with KBS failed. Perform a new RCAR handshake: ErrorInformation {
        error_type: "https://github.com/confidential-containers/kbs/errors/InvalidRequest",
        detail: "The request is invalid: parse Authorization header failed: invalid Header provided",
    }
[2024-03-11T16:01:09Z INFO  sigstore::cosign::client_builder] Rekor public key not provided. Rekor integration disabled
[2024-03-11T16:01:09Z INFO  sigstore::cosign::client_builder] No Fulcio cert has been provided. Fulcio integration disabled
[2024-03-11T16:01:10Z INFO  sigstore::cosign::signature_layers] Ignoring bundle, rekor public key not provided to verification client bundle="{\"SignedEntryTimestamp\":\"MEUCIQC4lVD0V5TL6h0SPaqQe6RVGLW2YvnpqtYvvbWiwwqYxwIgXvKBs2F8wqMtcRMxSfDkpQa3kYQha5jfnsR5l0lmTqo=\",\"Payload\":{\"body\":\"eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiI1ZDI5N2ExZTZmNWRkYTkwYjRjY2Q5N2QwNGU2MGVjMmM3MzdhYmE5MjI3ODFhNWRjZWRmNWUzM2I1NDE3YTY4In19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FVUNJUUNRWHo2enJnYWJYQmVXVEJiWWI2RUYvcmcyT2JwZEtDcXhaZlZzbzlkVmRnSWdadkxMRTFreG9CaVovVzlhd2VIYVdmYTduN0VSaTlZdUhkVHlScUp5NzU4PSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGUmpKcE9GcGxNakZXTmtnd0t6aHZPRTE2Wnk5c1dtdHZZekkwWlFwWWJXNXVUWFF5ZDFBMU9YSkxNMWRGVTI5QlNtWTBhazFHZWpKRmNVVmpkRE1yTkdVMmIxTnZXalJTVGpWa09GcHFlbHBsVTJ4aFZUWjNQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=\",\"integratedTime\":1709836174,\"logIndex\":76402918,\"logID\":\"c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d\"}}"
[2024-03-11T16:01:11Z WARN  kbs_protocol::client::rcar_client] Authenticating with KBS failed. Perform a new RCAR handshake: ErrorInformation {
        error_type: "https://github.com/confidential-containers/kbs/errors/InvalidRequest",
        detail: "The request is invalid: parse Authorization header failed: invalid Header provided",
    }
[2024-03-11T16:01:40Z INFO  enclave_agent::services::images] Pull image "docker.io/niteeshkd/occlum-test:enc2" successfully
[2024-03-11T16:01:41Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-11T16:01:41Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-11T16:01:56Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-11T16:01:56Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-11T16:02:22Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-11T16:02:22Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-11T16:03:03Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-11T16:03:03Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-11T16:04:32Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-11T16:04:32Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-11T16:07:24Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-11T16:07:24Z INFO  image_rs::resource::kbs] secure channel uses native-aa

Any suggestions?

@Xynnn007

niteeshkd commented 4 months ago

The above problem also appears while launching enclave-cc in SIM mode using the operator/tests/e2e/enclave-cc-pod-sim.yaml.

$ kubectl apply -f tests/e2e/enclave-cc-pod-sim.yaml

$ kubectl describe pod enclave-cc-pod-sim
...
      Message:      failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/run/rune/boot_instance/build/bin/occlum-run": stat /run/rune/boot_instance/build/bin/occlum-run: no such file or directory: unknown
...
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  6m42s                  default-scheduler  Successfully assigned default/enclave-cc-pod-sim to b77r44u11-node
  Normal   Pulled     6m38s                  kubelet            Successfully pulled image "docker.io/huaijin20191223/scratch-base:v1.8" in 4.169s (4.169s including waiting)
  Normal   Created    6m38s                  kubelet            Created container hello-world
  Warning  Failed     6m37s                  kubelet            Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/run/rune/boot_instance/build/bin/occlum-run": stat /run/rune/boot_instance/build/bin/occlum-run: no such file or directory: unknown
  Warning  Failed     5m45s (x3 over 6m37s)  kubelet            Failed to pull image "docker.io/huaijin20191223/scratch-base:v1.8": rpc error: code = Internal desc = failed to mount "sefs" to "/run/enclave-cc/containers/scratch-base_v1.8/rootfs", with error: EIO: I/O error
  Warning  Failed     5m45s (x3 over 6m37s)  kubelet            Error: ErrImagePull
  Normal   BackOff    5m11s (x2 over 6m11s)  kubelet            Back-off pulling image "docker.io/huaijin20191223/scratch-base:v1.8"
  Warning  Failed     5m11s (x2 over 6m11s)  kubelet            Error: ImagePullBackOff
  Warning  BackOff    4m14s (x8 over 6m36s)  kubelet            Back-off restarting failed container hello-world in pod enclave-cc-pod-sim_default(c6bd97f7-a71f-4df3-976c-1c54a0ccb40e)
  Normal   Pulling    91s (x6 over 6m42s)    kubelet            Pulling image "docker.io/huaijin20191223/scratch-base:v1.8"

mythi commented 4 months ago

The latest runtime bundle does not have "boot_instance" anymore. Try updating that to "occlum_instance" in your pod YAML.
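The updated fields would then look like this (paths taken from the corrected pod spec used later in this thread; note that both workingDir and command reference the instance directory):

```yaml
# Sketch: both fields must move from boot_instance to occlum_instance.
workingDir: "/run/rune/occlum_instance/"
command:
- /run/rune/occlum_instance/build/bin/occlum-run
- /bin/hello_world
```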

niteeshkd commented 4 months ago

After replacing "boot_instance" with "occlum_instance" in the pod YAML file, the error stat /run/rune/boot_instance/build/bin/occlum-run: no such file or directory disappeared, but the other error failed to mount "sefs" to "/run/enclave-cc/containers/scratch-base_v1.8/rootfs", with error: EIO: I/O error is still there.

$ kubectl apply -f tests/e2e/enclave-cc-pod-sim.yaml

$ kubectl describe pod enclave-cc-pod-sim
...
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  7m2s                   default-scheduler  Successfully assigned default/enclave-cc-pod-sim to b77r44u11-node
  Normal   Pulled     6m57s                  kubelet            Successfully pulled image "docker.io/huaijin20191223/scratch-base:v1.8" in 4.81s (4.81s including waiting)
  Normal   Created    6m57s                  kubelet            Created container hello-world
  Normal   Started    6m57s                  kubelet            Started container hello-world
  Warning  Failed     6m5s (x3 over 6m56s)   kubelet            Failed to pull image "docker.io/huaijin20191223/scratch-base:v1.8": rpc error: code = Internal desc = failed to mount "sefs" to "/run/enclave-cc/containers/scratch-base_v1.8/rootfs", with error: EIO: I/O error
  Warning  Failed     6m5s (x3 over 6m56s)   kubelet            Error: ErrImagePull
  Normal   BackOff    5m25s (x2 over 6m32s)  kubelet            Back-off pulling image "docker.io/huaijin20191223/scratch-base:v1.8"
  Warning  Failed     5m25s (x2 over 6m32s)  kubelet            Error: ImagePullBackOff
  Warning  BackOff    4m33s (x8 over 6m55s)  kubelet            Back-off restarting failed container hello-world in pod enclave-cc-pod-sim_default(b50cd147-7d21-44de-9ef5-4436de2cf1fe)
  Normal   Pulling    113s (x6 over 7m2s)    kubelet            Pulling image "docker.io/huaijin20191223/scratch-base:v1.8"

mythi commented 4 months ago

@niteeshkd can you share the agent log too?

niteeshkd commented 4 months ago

@mythi did you mean the content of /run/containerd/agent-enclave/<cid>/stderr? Here it is.

$ sudo cat /run/containerd/agent-enclave/428de9f548296340f18cba718d8441a87de54c28007ac98c78f58de6f4fbd0a5/stderr
[2024-03-11T19:43:08Z INFO  enclave_agent] ttRPC server started: "tcp://127.0.0.1:7788"
[2024-03-11T19:43:08Z INFO  enclave_agent::services::images] Pulling "docker.io/huaijin20191223/scratch-base:v1.8"
[2024-03-11T19:43:09Z INFO  enclave_agent::services::images] Pull image "docker.io/huaijin20191223/scratch-base:v1.8" successfully
[2024-03-11T19:43:10Z INFO  enclave_agent::services::images] Pulling "docker.io/huaijin20191223/scratch-base:v1.8"
[2024-03-11T19:43:23Z INFO  enclave_agent::services::images] Pulling "docker.io/huaijin20191223/scratch-base:v1.8"
[2024-03-11T19:44:01Z INFO  enclave_agent::services::images] Pulling "docker.io/huaijin20191223/scratch-base:v1.8"
[2024-03-11T19:45:58Z INFO  enclave_agent::services::images] Pulling "docker.io/huaijin20191223/scratch-base:v1.8"
mythi commented 4 months ago

There's probably something missing on the operator side that we haven't updated since moving to the new unified bundle in this repo. Is it possible to retry after removing the images downloaded by the enclave agent?
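The thread does not show exactly where the agent keeps its downloaded state, so the following is purely a hypothetical cleanup starting from the mount-target path visible in the error messages; run it as root on the worker node before re-creating the pod:

```shell
# Hypothetical: clear the per-image directories under the sefs mount root
# that appears in the "failed to mount sefs" errors, then retry the pod.
CACHE=/run/enclave-cc/containers
rm -rf "${CACHE}/occlum-test_enc2" "${CACHE}/scratch-base_v1.8"
```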

niteeshkd commented 4 months ago

I re-deployed the operator after cloning it fresh and modifying the following files.

$ git diff
diff --git a/config/samples/enclave-cc/base/ccruntime-enclave-cc.yaml b/config/samples/enclave-cc/base/ccruntime-enclave-cc.yaml
index 1fbc5cd..cc42e03 100644
--- a/config/samples/enclave-cc/base/ccruntime-enclave-cc.yaml
+++ b/config/samples/enclave-cc/base/ccruntime-enclave-cc.yaml
@@ -9,7 +9,7 @@ spec:
       node.kubernetes.io/worker: ""
   config:
     installType: bundle
-    payloadImage: quay.io/confidential-containers/runtime-payload:enclave-cc-HW-cc-kbc-v0.8.0
+    payloadImage: quay.io/confidential-containers/runtime-payload-ci:enclave-cc-HW-cc-kbc-latest
     installDoneLabel:
       confidentialcontainers.org/enclave-cc: "true"
     uninstallDoneLabel:
@@ -161,6 +161,7 @@ spec:
       - name: "CONFIGURE_CC"
         value: "yes"
       - name: "DECRYPT_CONFIG"
-        value: "ewogICAgImtleV9wcm92aWRlciI6ICJwcm92aWRlcjphdHRlc3RhdGlvbi1hZ2VudDpzYW1wbGVfa2JjOjoxMjcuMC4wLjE6NTAwMDAiLAogICAgInNlY3VyaXR5X3ZhbGlkYXRlIjogZmFsc2UKfQo="
+        #value: "ewogICAgImtleV9wcm92aWRlciI6ICJwcm92aWRlcjphdHRlc3RhdGlvbi1hZ2VudDpzYW1wbGVfa2JjOjoxMjcuMC4wLjE6NTAwMDAiLAogICAgInNlY3VyaXR5X3ZhbGlkYXRlIjogZmFsc2UKfQo="
+        value: "ewogICAgImtleV9wcm92aWRlciI6ICJwcm92aWRlcjphdHRlc3RhdGlvbi1hZ2VudDpjY19rYmM6Omh0dHA6Ly8wLjAuMC4wOjgwODAiLAogICAgInNlY3VyaXR5X3ZhbGlkYXRlIjogZmFsc2UgCn0K"
       - name: "OCICRYPT_CONFIG"
         value: "ewogICAgImtleS1wcm92aWRlcnMiOiB7CiAgICAgICAgImF0dGVzdGF0aW9uLWFnZW50IjogewogICAgICAgICAgICAibmF0aXZlIjogImF0dGVzdGF0aW9uLWFnZW50IgogICAgICAgIH0KICAgIH0KfQo="
diff --git a/config/samples/enclave-cc/sim/kustomization.yaml b/config/samples/enclave-cc/sim/kustomization.yaml
index 932528c..4b58e1a 100644
--- a/config/samples/enclave-cc/sim/kustomization.yaml
+++ b/config/samples/enclave-cc/sim/kustomization.yaml
@@ -4,7 +4,7 @@ resources:
 nameSuffix: -sgx-mode-sim

 images:
-- name: quay.io/confidential-containers/runtime-payload
-  newTag: enclave-cc-SIM-sample-kbc-v0.8.0
+- name: quay.io/confidential-containers/runtime-payload-ci
+  newTag: enclave-cc-SIM-sample-kbc-latest
 - name: quay.io/confidential-containers/reqs-payload
   newTag: latest
diff --git a/tests/e2e/enclave-cc-pod-sim.yaml b/tests/e2e/enclave-cc-pod-sim.yaml
index 7749eed..18ce321 100644
--- a/tests/e2e/enclave-cc-pod-sim.yaml
+++ b/tests/e2e/enclave-cc-pod-sim.yaml
@@ -13,5 +13,5 @@ spec:
       value: "0"
     workingDir: "/run/rune/boot_instance/"
     command:
-    - /run/rune/boot_instance/build/bin/occlum-run
+    - /run/rune/occlum_instance/build/bin/occlum-run
     - /bin/hello_world
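The DECRYPT_CONFIG value is just base64-encoded JSON; decoding the new value shows the switch from sample_kbc to cc_kbc (the KBS address, http://0.0.0.0:8080, is specific to this setup):

```shell
# Decode the new DECRYPT_CONFIG value from the diff above.
echo 'ewogICAgImtleV9wcm92aWRlciI6ICJwcm92aWRlcjphdHRlc3RhdGlvbi1hZ2VudDpjY19rYmM6Omh0dHA6Ly8wLjAuMC4wOjgwODAiLAogICAgInNlY3VyaXR5X3ZhbGlkYXRlIjogZmFsc2UgCn0K' | base64 -d
# prints:
# {
#     "key_provider": "provider:attestation-agent:cc_kbc::http://0.0.0.0:8080",
#     "security_validate": false
# }
```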

Then I retried launching enclave-cc in SIM mode. I still see the same errors.

$ kubectl apply -f tests/e2e/enclave-cc-pod-sim.yaml
$ kubectl describe pod enclave-cc-pod-sim
...
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  5m47s                 default-scheduler  Successfully assigned default/enclave-cc-pod-sim to b77r44u11-node
  Normal   Pulled     5m40s                 kubelet            Successfully pulled image "docker.io/huaijin20191223/scratch-base:v1.8" in 6.58s (6.58s including waiting)
  Normal   Created    5m40s                 kubelet            Created container hello-world
  Normal   Started    5m39s                 kubelet            Started container hello-world
  Normal   BackOff    5m11s                 kubelet            Back-off pulling image "docker.io/huaijin20191223/scratch-base:v1.8"
  Warning  Failed     5m11s                 kubelet            Error: ImagePullBackOff
  Normal   Pulling    4m8s (x5 over 5m46s)  kubelet            Pulling image "docker.io/huaijin20191223/scratch-base:v1.8"
  Warning  Failed     4m8s (x4 over 5m38s)  kubelet            Failed to pull image "docker.io/huaijin20191223/scratch-base:v1.8": rpc error: code = Internal desc = failed to mount "sefs" to "/run/enclave-cc/containers/scratch-base_v1.8/rootfs", with error: EIO: I/O error
  Warning  Failed     4m8s (x4 over 5m38s)  kubelet            Error: ErrImagePull
  Warning  BackOff    44s (x19 over 5m37s)  kubelet            Back-off restarting failed container hello-world in pod enclave-cc-pod-sim_default(46ce8bba-1e05-4668-9f3d-3b989fa0b592)

$ sudo cat /run/containerd/agent-enclave/25c5da12bb20face9a56fd0c58944bfaac3b82601af57c34dcb22875ba65e4ce/stderr
[2024-03-12T17:56:59Z INFO  enclave_agent] ttRPC server started: "tcp://127.0.0.1:7788"
[2024-03-12T17:56:59Z INFO  enclave_agent::services::images] Pulling "docker.io/huaijin20191223/scratch-base:v1.8"
[2024-03-12T17:56:59Z INFO  enclave_agent::services::images] Pull image "docker.io/huaijin20191223/scratch-base:v1.8" successfully
[2024-03-12T17:57:01Z INFO  enclave_agent::services::images] Pulling "docker.io/huaijin20191223/scratch-base:v1.8"
[2024-03-12T17:57:13Z INFO  enclave_agent::services::images] Pulling "docker.io/huaijin20191223/scratch-base:v1.8"

mythi commented 4 months ago

Looks like workingDir is not updated in the yaml?

niteeshkd commented 4 months ago

You are right. That was my mistake; I did not notice that. After correcting it, I am able to launch enclave-cc in SIM mode, which is good news.

When I try to launch enclave-cc with an encrypted image in HW mode, it fails with a similar message. I tried redeploying the operator, but that did not help.

$ cat enclave-cc-pod-encrypted.yaml
apiVersion: v1
kind: Pod
metadata:
  name: enclave-cc-pod
spec:
  containers:
  #- image: ghcr.io/confidential-containers/test-container-enclave-cc:encrypted 
  #- image: docker.io/niteeshkd/occlum-hello-world:enc
  - image: docker.io/niteeshkd/occlum-test:enc2
    imagePullPolicy: IfNotPresent
    name: hello-world
    resources:
      limits:
        sgx.intel.com/epc: 256Mi
    env:
    - name: OCCLUM_RELEASE_ENCLAVE
      value: "1"
    workingDir: "/run/rune/occlum_instance/"
    command:
    - /run/rune/occlum_instance/build/bin/occlum-run
    - /bin/hello_world
  runtimeClassName: enclave-cc

$ kubectl apply -f ./enclave-cc-pod-encrypted.yaml

$ kubectl describe pod enclave-cc-pod
...
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  81s                default-scheduler  Successfully assigned default/enclave-cc-pod to b77r44u11-node
  Normal   Pulled     36s                kubelet            Successfully pulled image "docker.io/niteeshkd/occlum-test:enc2" in 43.662s (43.662s including waiting)
  Normal   Created    36s                kubelet            Created container hello-world
  Normal   Started    36s                kubelet            Started container hello-world
  Normal   Pulling    13s (x3 over 80s)  kubelet            Pulling image "docker.io/niteeshkd/occlum-test:enc2"
  Warning  Failed     12s (x2 over 23s)  kubelet            Failed to pull image "docker.io/niteeshkd/occlum-test:enc2": rpc error: code = Internal desc = failed to mount "sefs" to "/run/enclave-cc/containers/occlum-test_enc2/rootfs", with error: EIO: I/O error
  Warning  Failed     12s (x2 over 23s)  kubelet            Error: ErrImagePull
  Normal   BackOff    1s                 kubelet            Back-off pulling image "docker.io/niteeshkd/occlum-test:enc2"
  Warning  Failed     1s                 kubelet            Error: ImagePullBackOff

$ sudo cat /run/containerd/agent-enclave/abf30082d10898dd4a033864f5d6a3fce9080aa930f60ae53c0d744c44fef1a5/stderr
[2024-03-12T20:57:03Z INFO  enclave_agent] ttRPC server started: "tcp://127.0.0.1:7788"
[2024-03-12T20:57:03Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-12T20:57:04Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-12T20:57:04Z WARN  kbs_protocol::client::rcar_client] Authenticating with KBS failed. Perform a new RCAR handshake: ErrorInformation {
        error_type: "https://github.com/confidential-containers/kbs/errors/InvalidRequest",
        detail: "The request is invalid: parse Authorization header failed: invalid Header provided",
    }
[2024-03-12T20:57:05Z INFO  sigstore::cosign::client_builder] Rekor public key not provided. Rekor integration disabled
[2024-03-12T20:57:05Z INFO  sigstore::cosign::client_builder] No Fulcio cert has been provided. Fulcio integration disabled
[2024-03-12T20:57:05Z INFO  sigstore::cosign::signature_layers] Ignoring bundle, rekor public key not provided to verification client bundle="{\"SignedEntryTimestamp\":\"MEUCIQC4lVD0V5TL6h0SPaqQe6RVGLW2YvnpqtYvvbWiwwqYxwIgXvKBs2F8wqMtcRMxSfDkpQa3kYQha5jfnsR5l0lmTqo=\",\"Payload\":{\"body\":\"eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiI1ZDI5N2ExZTZmNWRkYTkwYjRjY2Q5N2QwNGU2MGVjMmM3MzdhYmE5MjI3ODFhNWRjZWRmNWUzM2I1NDE3YTY4In19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FVUNJUUNRWHo2enJnYWJYQmVXVEJiWWI2RUYvcmcyT2JwZEtDcXhaZlZzbzlkVmRnSWdadkxMRTFreG9CaVovVzlhd2VIYVdmYTduN0VSaTlZdUhkVHlScUp5NzU4PSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGUmpKcE9GcGxNakZXTmtnd0t6aHZPRTE2Wnk5c1dtdHZZekkwWlFwWWJXNXVUWFF5ZDFBMU9YSkxNMWRGVTI5QlNtWTBhazFHZWpKRmNVVmpkRE1yTkdVMmIxTnZXalJTVGpWa09GcHFlbHBsVTJ4aFZUWjNQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=\",\"integratedTime\":1709836174,\"logIndex\":76402918,\"logID\":\"c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d\"}}"
[2024-03-12T20:57:06Z WARN  kbs_protocol::client::rcar_client] Authenticating with KBS failed. Perform a new RCAR handshake: ErrorInformation {
        error_type: "https://github.com/confidential-containers/kbs/errors/InvalidRequest",
        detail: "The request is invalid: parse Authorization header failed: invalid Header provided",
    }
[2024-03-12T20:57:35Z INFO  enclave_agent::services::images] Pull image "docker.io/niteeshkd/occlum-test:enc2" successfully
[2024-03-12T20:57:47Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-12T20:57:47Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-12T20:57:58Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-12T20:57:58Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-12T20:58:34Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"
[2024-03-12T20:58:34Z INFO  image_rs::resource::kbs] secure channel uses native-aa

$ docker logs trustee-as-1                                                                                                              
[2024-03-12T20:56:02Z INFO  grpc_as] CoCo AS:                                                                                           
    v0.1.0                                                                                                                              
    commit:                                                                                                                             
    buildtime: 2024-03-06 18:40:46 +00:00                                                                                               
[2024-03-12T20:56:02Z INFO  grpc_as::grpc] Listen socket: 0.0.0.0:50004                                                                 
[2024-03-12T20:56:02Z INFO  attestation_service::rvps] connect to remote RVPS: http://rvps:50003                                        
[2024-03-12T20:56:02Z INFO  attestation_service::token::simple] No Token Signer key in config file, create an ephemeral key and without CA pubkey cert
[2024-03-12T20:57:04Z WARN  verifier] The input REPORT_DATA of SGX is shorter than 64 bytes, will be padded with '\0'.                  
[2024-03-12T20:57:04Z INFO  verifier::sgx::claims]                                                                                      
    Parsed Evidence claims map:                                                                                                         
    {"body": Object {"attributes.flags": String("0500000000000000"), "attributes.xfrm": String("e700000000000000"), "config_id": String("00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"), "config_svn": String("0000"), "cpu_svn": String("0606131503ff00040000000000000000"), "isv_ext_prod_id": String("00000000000000000000000000000000"), "isv_family_id": String("00000000000000000000000000000000"), "isv_prod_id": String("0000"), "isv_svn": String("0000"), "misc_select": String("00000000"), "mr_enclave": String("33789b6aca278dbe3a5442cf3fe4ea530d7d9560d743ca6bd50729d10f7d4ec6"), "mr_signer": String("83d719e77deaca1470f6baf62a4d774303c899db69020f9c70ee1dfc08c7ce9e"), "report_data": String("48b12112e96c50e1040f9c745b7df078a04fd3ec05264ebd0093dd638c2a85edca1c47878a19b668c525d029fb5eb5d200000000000000000000000000000000"), "reserved1": String("000000000000000000000000"), "reserved2": String("0000000000000000000000000000000000000000000000000000000000000000"), "reserved3": String("0000000000000000000000000000000000000000000000000000000000000000"), "reserved4": String("000000000000000000000000000000000000000000000000000000000000000000000000000000000000")}, "header": Object {"att_key_data_0": String("00000000"), "att_key_type": String("0200"), "pce_svn": String("0f00"), "qe_svn": String("0a00"), "user_data": String("c2001edae03836fe7b2d57a05245d94f00000000"), "vendor_id": String("939a7233f79c4ca9940a0db3957f0607"), "version": String("0300")}, "init_data": String("00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000"), "report_data": String("48b12112e96c50e1040f9c745b7df078a04fd3ec05264ebd0093dd638c2a85edca1c47878a19b668c525d029fb5eb5d200000000000000000000000000000000")}

$ docker logs trustee-kbs-1
[2024-03-12T20:56:02Z INFO  kbs] Using config file /etc/kbs-config.toml
[2024-03-12T20:56:02Z INFO  api_server::attestation::coco::grpc] Default AS connection pool size (100) is used
[2024-03-12T20:56:02Z INFO  api_server::attestation::coco::grpc] connect to remote AS [http://as:50004] with pool size 100
[2024-03-12T20:56:02Z INFO  api_server] Starting HTTP server at [0.0.0.0:8080]
[2024-03-12T20:56:02Z INFO  actix_server::builder] starting 192 workers
[2024-03-12T20:56:02Z INFO  actix_server::server] Tokio runtime found; starting in existing Tokio runtime
[2024-03-12T20:57:04Z INFO  api_server::http::resource] Get pkey from auth header
[2024-03-12T20:57:04Z INFO  actix_web::middleware::logger] 172.18.0.1 "GET /kbs/v0/resource/default/security-policy/test HTTP/1.1" 401 173 "-" "attestation-agent-kbs-client/0.1.0" 0.000294
[2024-03-12T20:57:04Z INFO  api_server::http::attest] request: Json(Request { version: "0.1.0", tee: Sgx, extra_params: "" })
[2024-03-12T20:57:04Z INFO  actix_web::middleware::logger] 172.18.0.1 "POST /kbs/v0/auth HTTP/1.1" 200 74 "-" "attestation-agent-kbs-client/0.1.0" 0.000230
[2024-03-12T20:57:04Z INFO  api_server::http::attest] Cookie 2b17aa83a18645f189218e60bd9b1378 attestation Json(Attestation { tee_pubkey: TeePubKey { kty: "RSA", alg: "RSA1_5", k_mod: "y19Ke-OzcVrOdULIedUxtGkrzKOAAtFU1yIl6aMmrXrbC2DeHATMMC6yhvGs4oQQZBCsluKdnsERI_MTfli5IKFqDHTBkLF3hcwGhc26FdVu3w-7HyVtHyVjBeb3k_2yDnsEcUHKeIX3OHS0ry1kCNWgwMf4fpuQOahKwl9goODuysm1v3of06tLoYLsec-ARIij6zb0Y5nAvH7M3pYV2IjzD8GOC7GvSsRaQmFCjaQDKo2Rr6NTgb_lYRArEeT3nX1m8UHtGwAeWD7hL3fgkJHHlh7ooNOMAQW5IA9Z0leXoAfFm8VvkBhvIO4ezxRpt2gp6f74E14NYQ7adCKHTQ", k_exp: "AQAB" }, tee_evidence: "{\"quote\":\"AwACAAAAAAAKAA8Ak5pyM/ecTKmUCg2zlX8GB8IAHtrgODb+ey1XoFJF2U8AAAAABgYTFQP/AAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABQAAAAAAAADnAAAAAAAAADN4m2rKJ42+OlRCzz/k6lMNfZVg10PKa9UHKdEPfU7GAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACD1xnnferKFHD2uvYqTXdDA8iZ22kCD5xw7h38CMfOngAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABIsSES6WxQ4QQPnHRbffB4oE/T7AUmTr0Ak91jjCqF7cocR4eKGbZoxSXQKftetdIAAAAAAAAAAAAAAAAAAAAAxhAAAKTZzI0BrcLHfz4u1dAkYIn2B/B6TG/HXfwPdun+7+Yp0IKVBATHXSg7Uj9mrKQV8IIqqh6R5+3tbI/nw7JZfbxnVXWRICkw/EBK9eIRuBAgWV8mKwhn/RwGqTL5oh9YXRrWnpG3zdvh1WE5Q49e/XSqz1k/1N27b1pblP4slNnpBgYTFQP/AAQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAFQAAAAAAAADnAAAAAAAAAJazR6ZOWgReJzacJubc2lH9fIUOmzo6eecY9DJh3uHkAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAACMT1d115ZQPpYTf3fGioKaAFasje1wFAsIGwlEkMV7/wAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAEACgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAOm5zz++QgOYQzZNRsGhwWLz4jzkEbL2F6R7SjN+PWAwAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAByHTeyEEdCsB8dRZd6sRHHVGgikHf2VtNpKqE3jF3w89A6x3mOHPGCi9ZnmhHbO9lN+IUVixKA724d/kkNFLnCAAAAECAwQFBgcICQoLDA0ODxAREhMUFRYXGBkaGxwdHh8FAF4OAAAtLS0tLUJFR0lOIENFUlRJRk
lDQVRFLS0tLS0KTUlJRTd6Q0NCSmFnQXdJQkFnSVVKSjYyamJ2M2lXQitIWUk0b0N1aTJ3dmNQTmd3Q2dZSUtvWkl6ajBFQXdJdwpjREVpTUNBR0ExVUVBd3daU1c1MFpXd2dVMGRZSUZCRFN5QlFiR0YwWm05eWJTQkRRVEVhTUJnR0ExVUVDZ3dSClNXNTBaV3dnUTI5eWNHOXlZWFJwYjI0eEZEQVNCZ05WQkFjTUMxTmhiblJoSUVOc1lYSmhNUXN3Q1FZRFZRUUkKREFKRFFURUxNQWtHQTFVRUJoTUNWVk13SGhjTk1qTXhNakF4TURRd01ERXpXaGNOTXpBeE1qQXhNRFF3TURFegpXakJ3TVNJd0lBWURWUVFEREJsSmJuUmxiQ0JUUjFnZ1VFTkxJRU5sY25ScFptbGpZWFJsTVJvd0dBWURWUVFLCkRCRkpiblJsYkNCRGIzSndiM0poZEdsdmJqRVVNQklHQTFVRUJ3d0xVMkZ1ZEdFZ1EyeGhjbUV4Q3pBSkJnTlYKQkFnTUFrTkJNUXN3Q1FZRFZRUUdFd0pWVXpCWk1CTUdCeXFHU000OUFnRUdDQ3FHU000OUF3RUhBMElBQkZsOQpZbm5QcHkrbWVsaTFUOTd6NnhWdlIrMGpIYnQ3L1JZN0lwS3piWlRJMHprbHlueG1yeWtQcnZ0aGk0K1hwZlZsCjR3c05qMXA4cjVwTUVWc25kaVNqZ2dNTU1JSURDREFmQmdOVkhTTUVHREFXZ0JTVmIxM052UnZoNlVCSnlkVDAKTTg0QlZ3dmVWREJyQmdOVkhSOEVaREJpTUdDZ1hxQmNobHBvZEhSd2N6b3ZMMkZ3YVM1MGNuVnpkR1ZrYzJWeQpkbWxqWlhNdWFXNTBaV3d1WTI5dEwzTm5lQzlqWlhKMGFXWnBZMkYwYVc5dUwzWTBMM0JqYTJOeWJEOWpZVDF3CmJHRjBabTl5YlNabGJtTnZaR2x1Wnoxa1pYSXdIUVlEVlIwT0JCWUVGRUZvNVpyRmp6bmlTdGI4aFNMT3V0UHEKbGZreU1BNEdBMVVkRHdFQi93UUVBd0lHd0RBTUJnTlZIUk1CQWY4RUFqQUFNSUlDT1FZSktvWklodmhOQVEwQgpCSUlDS2pDQ0FpWXdIZ1lLS29aSWh2aE5BUTBCQVFRUVVjOXRBQnVoUStpT1VYdkRDcXU3ZERDQ0FXTUdDaXFHClNJYjRUUUVOQVFJd2dnRlRNQkFHQ3lxR1NJYjRUUUVOQVFJQkFnRUdNQkFHQ3lxR1NJYjRUUUVOQVFJQ0FnRUcKTUJBR0N5cUdTSWI0VFFFTkFRSURBZ0VDTUJBR0N5cUdTSWI0VFFFTkFRSUVBZ0VDTUJBR0N5cUdTSWI0VFFFTgpBUUlGQWdFRE1CQUdDeXFHU0liNFRRRU5BUUlHQWdFQk1CQUdDeXFHU0liNFRRRU5BUUlIQWdFQU1CQUdDeXFHClNJYjRUUUVOQVFJSUFnRURNQkFHQ3lxR1NJYjRUUUVOQVFJSkFnRUFNQkFHQ3lxR1NJYjRUUUVOQVFJS0FnRUEKTUJBR0N5cUdTSWI0VFFFTkFRSUxBZ0VBTUJBR0N5cUdTSWI0VFFFTkFRSU1BZ0VBTUJBR0N5cUdTSWI0VFFFTgpBUUlOQWdFQU1CQUdDeXFHU0liNFRRRU5BUUlPQWdFQU1CQUdDeXFHU0liNFRRRU5BUUlQQWdFQU1CQUdDeXFHClNJYjRUUUVOQVFJUUFnRUFNQkFHQ3lxR1NJYjRUUUVOQVFJUkFnRUxNQjhHQ3lxR1NJYjRUUUVOQVFJU0JCQUcKQmdJQ0F3RUFBd0FBQUFBQUFBQUFNQkFHQ2lxR1NJYjRUUUVOQVFNRUFnQUFNQlFHQ2lxR1NJYjRUUUVOQVFRRQpCZ0NBYndVQUFEQVBCZ29xaGtpRytFMEJEUUVGQ2dFQk1CNEdDaXFHU0liNFRRRU5BUVlFRU55ODdCOW
JRckExCjh2RWdMYWEzK1lzd1JBWUtLb1pJaHZoTkFRMEJCekEyTUJBR0N5cUdTSWI0VFFFTkFRY0JBUUgvTUJBR0N5cUcKU0liNFRRRU5BUWNDQVFIL01CQUdDeXFHU0liNFRRRU5BUWNEQVFIL01Bb0dDQ3FHU000OUJBTUNBMGNBTUVRQwpJQnFGVnlvNHFydnRWTFBJN1hRR3ZUZ202NWx5MkRGSFlKSTJPd2wxdnpuNkFpQm1OM2UvYWZHWTRVTEVheXhBCnV0T2NIaUxjYjVFQm51cDdhNldteHpnd1BBPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQotLS0tLUJFR0lOIENFUlRJRklDQVRFLS0tLS0KTUlJQ2xqQ0NBajJnQXdJQkFnSVZBSlZ2WGMyOUcrSHBRRW5KMVBRenpnRlhDOTVVTUFvR0NDcUdTTTQ5QkFNQwpNR2d4R2pBWUJnTlZCQU1NRVVsdWRHVnNJRk5IV0NCU2IyOTBJRU5CTVJvd0dBWURWUVFLREJGSmJuUmxiQ0JECmIzSndiM0poZEdsdmJqRVVNQklHQTFVRUJ3d0xVMkZ1ZEdFZ1EyeGhjbUV4Q3pBSkJnTlZCQWdNQWtOQk1Rc3cKQ1FZRFZRUUdFd0pWVXpBZUZ3MHhPREExTWpFeE1EVXdNVEJhRncwek16QTFNakV4TURVd01UQmFNSEF4SWpBZwpCZ05WQkFNTUdVbHVkR1ZzSUZOSFdDQlFRMHNnVUd4aGRHWnZjbTBnUTBFeEdqQVlCZ05WQkFvTUVVbHVkR1ZzCklFTnZjbkJ2Y21GMGFXOXVNUlF3RWdZRFZRUUhEQXRUWVc1MFlTQkRiR0Z5WVRFTE1Ba0dBMVVFQ0F3Q1EwRXgKQ3pBSkJnTlZCQVlUQWxWVE1Ga3dFd1lIS29aSXpqMENBUVlJS29aSXpqMERBUWNEUWdBRU5TQi83dDIxbFhTTwoyQ3V6cHh3NzRlSkI3MkV5REdnVzVyWEN0eDJ0VlRMcTZoS2s2eitVaVJaQ25xUjdwc092Z3FGZVN4bG1UbEpsCmVUbWkyV1l6M3FPQnV6Q0J1REFmQmdOVkhTTUVHREFXZ0JRaVpReldXcDAwaWZPRHRKVlN2MUFiT1NjR3JEQlMKQmdOVkhSOEVTekJKTUVlZ1JhQkRoa0ZvZEhSd2N6b3ZMMk5sY25ScFptbGpZWFJsY3k1MGNuVnpkR1ZrYzJWeQpkbWxqWlhNdWFXNTBaV3d1WTI5dEwwbHVkR1ZzVTBkWVVtOXZkRU5CTG1SbGNqQWRCZ05WSFE0RUZnUVVsVzlkCnpiMGI0ZWxBU2NuVTlEUE9BVmNMM2xRd0RnWURWUjBQQVFIL0JBUURBZ0VHTUJJR0ExVWRFd0VCL3dRSU1BWUIKQWY4Q0FRQXdDZ1lJS29aSXpqMEVBd0lEUndBd1JBSWdYc1ZraTB3K2k2VllHVzNVRi8yMnVhWGUwWUpEajFVZQpuQStUakQxYWk1Y0NJQ1liMVNBbUQ1eGtmVFZwdm80VW95aVNZeHJEV0xtVVI0Q0k5Tkt5ZlBOKwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCi0tLS0tQkVHSU4gQ0VSVElGSUNBVEUtLS0tLQpNSUlDanpDQ0FqU2dBd0lCQWdJVUltVU0xbHFkTkluemc3U1ZVcjlRR3prbkJxd3dDZ1lJS29aSXpqMEVBd0l3CmFERWFNQmdHQTFVRUF3d1JTVzUwWld3Z1UwZFlJRkp2YjNRZ1EwRXhHakFZQmdOVkJBb01FVWx1ZEdWc0lFTnYKY25CdmNtRjBhVzl1TVJRd0VnWURWUVFIREF0VFlXNTBZU0JEYkdGeVlURUxNQWtHQTFVRUNBd0NRMEV4Q3pBSgpCZ05WQkFZVEFsVlRNQjRYRFRFNE1EVXlNVEV3TkRVeE1Gb1hEVFE1TVRJek1USXpOVGsxT1Zvd2FERWFNQmdHCk
ExVUVBd3dSU1c1MFpXd2dVMGRZSUZKdmIzUWdRMEV4R2pBWUJnTlZCQW9NRVVsdWRHVnNJRU52Y25CdmNtRjAKYVc5dU1SUXdFZ1lEVlFRSERBdFRZVzUwWVNCRGJHRnlZVEVMTUFrR0ExVUVDQXdDUTBFeEN6QUpCZ05WQkFZVApBbFZUTUZrd0V3WUhLb1pJemowQ0FRWUlLb1pJemowREFRY0RRZ0FFQzZuRXdNRElZWk9qL2lQV3NDemFFS2k3CjFPaU9TTFJGaFdHamJuQlZKZlZua1k0dTNJamtEWVlMME14TzRtcXN5WWpsQmFsVFZZeEZQMnNKQks1emxLT0IKdXpDQnVEQWZCZ05WSFNNRUdEQVdnQlFpWlF6V1dwMDBpZk9EdEpWU3YxQWJPU2NHckRCU0JnTlZIUjhFU3pCSgpNRWVnUmFCRGhrRm9kSFJ3Y3pvdkwyTmxjblJwWm1sallYUmxjeTUwY25WemRHVmtjMlZ5ZG1salpYTXVhVzUwClpXd3VZMjl0TDBsdWRHVnNVMGRZVW05dmRFTkJMbVJsY2pBZEJnTlZIUTRFRmdRVUltVU0xbHFkTkluemc3U1YKVXI5UUd6a25CcXd3RGdZRFZSMFBBUUgvQkFRREFnRUdNQklHQTFVZEV3RUIvd1FJTUFZQkFmOENBUUV3Q2dZSQpLb1pJemowRUF3SURTUUF3UmdJaEFPVy81UWtSK1M5Q2lTRGNOb293THVQUkxzV0dmL1lpN0dTWDk0Qmd3VHdnCkFpRUE0SjBsckhvTXMrWG81by9zWDZPOVFXeEhSQXZaVUdPZFJRN2N2cVJYYXFJPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCgA=\"}" })
...
Xynnn007 commented 4 months ago

@niteeshkd You need to add a file under the storage path of the KBS, e.g. kbs-storage/default/security-policy/test:

{
    "default": [{"type": "insecureAcceptAnything"}],
    "transports": {}
}
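
For reference, a minimal sketch of creating that resource file. The `kbs-storage` path here follows the thread; your KBS deployment may mount its repository elsewhere, so treat the path as an assumption:

```shell
# Hypothetical storage root; adjust to where your KBS repository is mounted.
KBS_STORAGE="${KBS_STORAGE:-kbs-storage}"
mkdir -p "${KBS_STORAGE}/default/security-policy"
cat > "${KBS_STORAGE}/default/security-policy/test" <<'EOF'
{
    "default": [{"type": "insecureAcceptAnything"}],
    "transports": {}
}
EOF
# The file is then served as the KBS resource
# kbs:///default/security-policy/test
```

With this permissive policy in place, the signature check accepts any image; a stricter `sigstoreSigned` policy (as used elsewhere in the thread) can replace it once signing works.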
niteeshkd commented 4 months ago

@Xynnn007 I have added that file.

$ tree data/kbs-storage
data/kbs-storage
+-- default
    +-- cosign-key
    |   +-- 1
    +-- image-kek
    |   +-- 29e4566e-f7b8-494b-8ea6-8d09d41e6a2e
    +-- security-policy
        +-- test

$ cat data/kbs-storage/default/security-policy/test 
{
    "default": [{"type": "reject"}],
    "transports": {
        "docker": {
            "docker.io/niteeshkd/occlum-test:enc2": [
                {
                    "type": "sigstoreSigned",
                    "keyPath": "kbs:///default/cosign-key/1"
                }
            ]
        }
    }
}
Xynnn007 commented 4 months ago

@niteeshkd Could you share the whole KBS log? If it is too long, please send it to me via Slack. You can find me (Ding) in the channel https://cloud-native.slack.com/archives/C0496GDCBAR

niteeshkd commented 4 months ago

Just slacked: https://cloud-native.slack.com/archives/C0496GDCBAR/p1710299603786749

Xynnn007 commented 4 months ago

I checked the KBS-side log; there is this entry:

[2024-03-12T20:57:06Z INFO  api_server::http::resource] Resource description: ResourceDesc { repository_name: "default", resource_type: "image-kek", resource_tag: "29e4566e-f7b8-494b-8ea6-8d09d41e6a2e" }
[2024-03-12T20:57:06Z INFO  actix_web::middleware::logger] 172.18.0.1 "GET /kbs/v0/resource/default/image-kek/29e4566e-f7b8-494b-8ea6-8d09d41e6a2e HTTP/1.1" 200 530 "-" "attestation-agent-kbs-client/0.1.0" 0.003895

The 200 (HTTP OK) status here means that the enclave-agent retrieved the image decryption key successfully. So the next steps would be downloading the encrypted layers and decrypting them.

Thus we can see

[2024-03-12T20:57:35Z INFO  enclave_agent::services::images] Pull image "docker.io/niteeshkd/occlum-test:enc2" successfully

The image pull is done. However, I am confused why, at 2024-03-12T20:57:47Z, i.e. only 12 seconds later, it retries the image pull.

[2024-03-12T20:57:47Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/occlum-test:enc2"

There also seems to be a mount error:

failed to mount "sefs" to "/run/enclave-cc/containers/occlum-test_enc2/rootfs", with error: EIO: I/O error

Any ideas? @mythi

mythi commented 4 months ago

While the error looks different here, I suggested earlier to test with our image to make sure the image itself does not have issues.

niteeshkd commented 4 months ago

@mythi I tested with the image mentioned at ci-setup. It works fine.

The docker image (i.e. docker.io/niteeshkd/occlum-test:unenc) which I built using the occlum dev container works fine when I don't use attestation (i.e. "security_validate": false).

But when I use its encrypted image docker.io/niteeshkd/occlum-test:enc2 with attestation (i.e. "security_validate": true), it does not work. I am creating the encrypted image using skopeo with the trustee coco-keyprovider listening on port 50000. I use the following steps to encrypt it and register its key with the trustee KBS.

$ cat ocicrypt.conf
{
  "key-providers": {
    "attestation-agent": {
      "grpc": "127.0.0.1:50000"
    }
  }
}

$ OCICRYPT_KEYPROVIDER_CONFIG=ocicrypt.conf skopeo copy --insecure-policy --encryption-key provider:attestation-agent docker://docker.io/niteeshkd/occlum-test:unenc docker://docker.io/niteeshkd/occlum-test:enc2
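
A quick way to sanity-check whether the push actually produced encrypted layers is to look at the layer mediaTypes in the raw manifest: ocicrypt-encrypted layers carry a `+encrypted` suffix. The snippet below applies that check to a sample manifest, since inspecting the real image needs registry access; the digest is made up:

```shell
# In practice you would fetch the manifest with:
#   skopeo inspect --raw docker://docker.io/niteeshkd/occlum-test:enc2
# Here the same check runs against a sample manifest (hypothetical digest).
manifest='{"layers":[{"mediaType":"application/vnd.oci.image.layer.v1.tar+gzip+encrypted","digest":"sha256:0123"}]}'
if printf '%s\n' "$manifest" | grep -q '+encrypted'; then
    echo "encrypted layers present"
else
    echo "no encrypted layers found"
fi
```

If the real manifest shows only plain `tar.gzip` mediaTypes, the encryption step did not take effect and a runtime decryption failure would be expected.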

I noticed that when I pull the encrypted docker image niteeshkd/occlum-test:enc2, it complains as follows.

$ docker pull docker.io/niteeshkd/occlum-test:enc2
enc2: Pulling from niteeshkd/occlum-test
f53005d5071f: Extracting [==================================================>]  15.44MB/15.44MB
failed to register layer: archive/tar: invalid tar header

But I don't see such an issue while pulling the unencrypted docker image niteeshkd/occlum-test:unenc or the image mentioned at ci-setup, which is labelled with enc_key & key_id.

So it seems that either skopeo with the coco-keyprovider has an issue while creating the encrypted image and pushing it to the registry, or docker has an issue while pulling an encrypted image. This could be the cause of the repeated attempts to pull the encrypted image in the agent log.

niteeshkd commented 4 months ago

I tried to put the enc_key and key_id labels through a Dockerfile to create an encrypted image from the occlum dev container and then launch the enclave-cc CoCo pod with that image (as done with the image mentioned at ci-setup). I still see the same error, although I am able to pull this image manually using docker without any error.

I used the occlum/occlum:0.30.1-ubuntu20.04 image to create the occlum dev container and build a sample hello_world. I used the following Dockerfile to create the encrypted image.

root@0b70e247146f:~/niteesh/occlum-instance# occlum run /bin/hello_world
Hello SGX-coco World

root@0b70e247146f:~/niteesh/occlum-instance# cat Dockerfile 
FROM scratch
ADD image /
LABEL enc_key="LLLOhvkqFcGMzZrVzt6vPWlj/F/bgYMNe45vhQpdxAA="
LABEL key_id="kbs:///default/image-kek/11111d96-dccd-46a3-9244-93644d76745f"

root@0b70e247146f:~/niteesh/occlum-instance# docker build -f Dockerfile -t "docker.io/niteeshkd/test-image-occlum:enc" .

root@0b70e247146f:~/niteesh/occlum-instance# docker push docker.io/niteeshkd/test-image-occlum:enc
$ kubectl describe pod enclave-cc-pod
...
Events:
  Type     Reason         Age                    From               Message
  ----     ------         ----                   ----               -------
  Normal   Scheduled      4m59s                  default-scheduler  Successfully assigned default/enclave-cc-pod to b77r44u11-node
  Warning  Failed         4m24s (x2 over 4m49s)  kubelet            Failed to pull image "docker.io/niteeshkd/test-image-occlum:enc": rpc error: code = Unavailable desc = error reading from server: EOF
  Warning  InspectFailed  3m30s                  kubelet            Failed to inspect image "docker.io/niteeshkd/test-image-occlum:enc": rpc error: code = Unknown desc = server is not initialized yet
  Warning  Failed         3m30s                  kubelet            Error: ImageInspectError
  Normal   Pulling        3m15s (x4 over 4m57s)  kubelet            Pulling image "docker.io/niteeshkd/test-image-occlum:enc"
  Warning  Failed         3m14s (x4 over 4m49s)  kubelet            Error: ErrImagePull
  Warning  Failed         3m14s (x2 over 3m59s)  kubelet            Failed to pull image "docker.io/niteeshkd/test-image-occlum:enc": rpc error: code = Internal desc = failed to mount "sefs" to "/run/enclave-cc/containers/test-image-occlum_enc/rootfs", with error: EIO: I/O error
  Warning  Failed         2m49s (x5 over 4m44s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff        2m35s (x6 over 4m44s)  kubelet            Back-off pulling image "docker.io/niteeshkd/test-image-occlum:enc"

$ sudo cat /run/containerd/agent-enclave/df6e25b2d3b66f42062fb8444bb30a53d2fbe107713ecb0c1f76a41a81700f57/stderr
[2024-03-13T22:16:58Z INFO  enclave_agent] ttRPC server started: "tcp://127.0.0.1:7788"
[2024-03-13T22:17:14Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/test-image-occlum:enc"
[2024-03-13T22:17:14Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-13T22:17:15Z WARN  kbs_protocol::client::rcar_client] Authenticating with KBS failed. Perform a new RCAR handshake: ErrorInformation {
        error_type: "https://github.com/confidential-containers/kbs/errors/InvalidRequest",
        detail: "The request is invalid: parse Authorization header failed: invalid Header provided",
    }
[2024-03-13T22:17:44Z INFO  enclave_agent::services::images] Pull image "docker.io/niteeshkd/test-image-occlum:enc" successfully
[2024-03-13T22:17:44Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/test-image-occlum:enc"
[2024-03-13T22:17:44Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-13T22:18:28Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/test-image-occlum:enc"
[2024-03-13T22:18:28Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-13T22:20:02Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/test-image-occlum:enc"
[2024-03-13T22:20:02Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-13T22:22:51Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/test-image-occlum:enc"
[2024-03-13T22:22:51Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-13T22:28:00Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/test-image-occlum:enc"
[2024-03-13T22:28:00Z INFO  image_rs::resource::kbs] secure channel uses native-aa
mythi commented 4 months ago

I've lost track of what your setup is. There is no need to build the image in an occlum dev container, since enclave-cc gives the same result transparently. Anyway, /bin/hello_world is there, so this image should be fine.

But, when I use its encrypted image docker.io/niteeshkd/occlum-test:enc2 with attestatation (i.e. "security_validate": true

Note that security_validate is about signature checks and, AFAIU, your policy is set to require a sigstore signature for that image in the registry. If the policy is set as @Xynnn007 suggested, you might see different results.

The other image does not have any encrypted layers:

$ skopeo inspect --raw docker://niteeshkd/test-image-occlum:enc | jq  '.layers[].mediaType'
"application/vnd.docker.image.rootfs.diff.tar.gzip"
niteeshkd commented 4 months ago

@mythi In my setup, I was building an image from the occlum dev container, encrypting it using skopeo & the trustee keyprovider, and then signing it using cosign. It was giving the problem while using the encrypted or encrypted+signed image. I did not have any issue while using the image which you suggested. Looking at this working image, I was modifying my Dockerfile to embed enc_key and key_id to find out why building the image from the occlum dev container failed. That did not help.

Now, after I set up the occlum environment again, rebuilt the app, and recreated the image in the occlum dev container, everything is working fine. I am able to launch enclave-cc with the signed, encrypted image built from the occlum dev container.

Here is the log of enclave agent which looks different now.

$ sudo cat /run/containerd/agent-enclave/7b6c43cd35acec6332cb74ff4ccf86ea396731b7e5cc7a4c034b9dae1c33198f/stderr
[2024-03-15T03:23:45Z INFO  enclave_agent] ttRPC server started: "tcp://127.0.0.1:7788"
[2024-03-15T03:23:45Z INFO  enclave_agent::services::images] Pulling "docker.io/niteeshkd/test-occlum:enc"
[2024-03-15T03:23:45Z INFO  image_rs::resource::kbs] secure channel uses native-aa
[2024-03-15T03:23:46Z WARN  kbs_protocol::client::rcar_client] Authenticating with KBS failed. Perform a new RCAR handshake: ErrorInformation {
        error_type: "https://github.com/confidential-containers/kbs/errors/InvalidRequest",
        detail: "The request is invalid: parse Authorization header failed: invalid Header provided",
    }
[2024-03-15T03:23:46Z INFO  sigstore::cosign::client_builder] Rekor public key not provided. Rekor integration disabled
[2024-03-15T03:23:46Z INFO  sigstore::cosign::client_builder] No Fulcio cert has been provided. Fulcio integration disabled
[2024-03-15T03:23:47Z INFO  sigstore::cosign::signature_layers] Ignoring bundle, rekor public key not provided to verification client bundle="{\"SignedEntryTimestamp\":\"MEUCIB/yEmxE45+EwCoCKsX3ksbDSocM+a2wIyzP/wXjCR9sAiEAmAq+vnzuXWpJSbr1PJ2WKmEn+WWf/ZfAP0drBt2zQK4=\",\"Payload\":{\"body\":\"eyJhcGlWZXJzaW9uIjoiMC4wLjEiLCJraW5kIjoiaGFzaGVkcmVrb3JkIiwic3BlYyI6eyJkYXRhIjp7Imhhc2giOnsiYWxnb3JpdGhtIjoic2hhMjU2IiwidmFsdWUiOiI4MDBjN2UzYjNiYzc2MTg2ZDRjNWQ4ZjcxOWM3MTk2ZDVjNGUzZjIyMjJiZWY4ODljOTg1NGE5NTRkZTg1NjFkIn19LCJzaWduYXR1cmUiOnsiY29udGVudCI6Ik1FVUNJSE9WbmlieXhSeWlUVDRkZGorQU5WZ1g5WVAzNTN6STlEVjdhM2hkMFBLQkFpRUFuTVZJQnVVbzBwZExsYk9DRGlJS3JMcUJROG04a3Rja0g1VitTQVFRaSt3PSIsInB1YmxpY0tleSI6eyJjb250ZW50IjoiTFMwdExTMUNSVWRKVGlCUVZVSk1TVU1nUzBWWkxTMHRMUzBLVFVacmQwVjNXVWhMYjFwSmVtb3dRMEZSV1VsTGIxcEplbW93UkVGUlkwUlJaMEZGUmpKcE9GcGxNakZXTmtnd0t6aHZPRTE2Wnk5c1dtdHZZekkwWlFwWWJXNXVUWFF5ZDFBMU9YSkxNMWRGVTI5QlNtWTBhazFHZWpKRmNVVmpkRE1yTkdVMmIxTnZXalJTVGpWa09GcHFlbHBsVTJ4aFZUWjNQVDBLTFMwdExTMUZUa1FnVUZWQ1RFbERJRXRGV1MwdExTMHRDZz09In19fX0=\",\"integratedTime\":1710472940,\"logIndex\":78368684,\"logID\":\"c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d\"}}"
[2024-03-15T03:23:47Z WARN  kbs_protocol::client::rcar_client] Authenticating with KBS failed. Perform a new RCAR handshake: ErrorInformation {
        error_type: "https://github.com/confidential-containers/kbs/errors/InvalidRequest",
        detail: "The request is invalid: parse Authorization header failed: invalid Header provided",
    }
[2024-03-15T03:24:17Z INFO  enclave_agent::services::images] Pull image "docker.io/niteeshkd/test-occlum:enc" successfully
$ kubectl describe pod enclave-cc-pod
... 
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  13m   default-scheduler  Successfully assigned default/enclave-cc-pod to b77r44u11-node
  Normal  Pulling    13m   kubelet            Pulling image "docker.io/niteeshkd/test-occlum:enc"
  Normal  Pulled     13m   kubelet            Successfully pulled image "docker.io/niteeshkd/test-occlum:enc" in 44.352s (44.352s including waiting)
  Normal  Created    13m   kubelet            Created container hello-world
  Normal  Started    13m   kubelet            Started container hello-world

$ kubectl logs enclave-cc-pod
Hello SGX coco World!
Hello SGX coco World!
niteeshkd commented 4 months ago

@mythi @Xynnn007 Thank you so much for your help! Now we can close both issues (i.e. this one and issue#301).

mythi commented 4 months ago

@niteeshkd Great that it works for you now! I'm not sure what the glitch was, but with enclave-cc you should not need to do any occlum builds for your image. Perhaps the double-occlum setup was the reason...

Anyway, closing.

niteeshkd commented 4 months ago

@mythi @Xynnn007 Just to be clear, I did some more investigation and found the following.

If the program, as given below, exits immediately, there is no issue in creating enclave-cc with the unencrypted image containing that program. But enclave-cc cannot be created with the encrypted image containing that program.

#include <stdio.h>
int main() {
    printf("Hello World\n");
    return 0;
}

If the program, as given below, does not exit immediately, then enclave-cc can be created using both the unencrypted and the encrypted image containing that program.

#include <stdio.h>
#include <unistd.h>
int main() {
    while(1) {
        printf("Hello World\n");
        sleep(1);
    }
}