confidential-containers / guest-components

Confidential Containers Guest Tools and Components

[RFC] image-rs devel plan #2

Open arronwy opened 2 years ago

arronwy commented 2 years ago

As described in the design and architecture PR, the figure below shows the proposed module blocks: [figure: proposed module blocks]

The development work of image-rs can be separated into four steps:

Step 1: Basic features

In step 1, image-rs will implement the basic features and remove the dependency on umoci in the current CC rustified image download stack. image-rs can leverage oci-distribution to pull container images, but we also need to fill related gaps such as manifest list support, public auth, and a pull_layer API so that the downstream decryption/decompression operations are not blocked.
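As an illustration (not part of the plan text), a minimal pull sketch with the oci-distribution crate might look like the following; the image reference and media type are placeholders, and the exact API surface varies between crate versions:

```rust
use oci_distribution::{secrets::RegistryAuth, Client, Reference};

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Anonymous pull from a public registry; real deployments would plug
    // registry credentials into `RegistryAuth` instead.
    let mut client = Client::default();
    let reference: Reference = "docker.io/library/busybox:latest".parse()?;
    let image = client
        .pull(
            &reference,
            &RegistryAuth::Anonymous,
            vec!["application/vnd.docker.image.rootfs.diff.tar.gzip"],
        )
        .await?;

    // Today each layer arrives fully buffered; the pull_layer API proposed
    // above would instead stream layers into decryption/decompression.
    for layer in image.layers {
        println!("layer {}: {} bytes", layer.media_type, layer.data.len());
    }
    Ok(())
}
```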

Step 2: Performance enhancement

In this step, image-rs will support zstd decompression and, for CPU-bound operations like decryption/decompression, choose the right parallel and instruction-set-based acceleration libraries. If these libraries don't have a Rust version, a Rust binding will be created as a separate crate that can be used by other projects.
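For the zstd part, a rough sketch assuming the zstd crate as the backend (the crate choice is an assumption, not a decision recorded in this plan):

```rust
use std::io::Read;

// Streaming zstd decompression: the decoder wraps any `Read`, so a layer
// can be decompressed while it is being pulled rather than afterwards.
fn decompress_zstd_layer(compressed: impl Read) -> std::io::Result<Vec<u8>> {
    let mut decoder = zstd::stream::read::Decoder::new(compressed)?;
    let mut tar_bytes = Vec::new();
    decoder.read_to_end(&mut tar_bytes)?;
    Ok(tar_bytes)
}
```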

Step 3: Advanced features

Develop a snapshotter to support on-demand pull/decrypt of container images. Image layer caching or sharing is TBD, depending on the security model.

Step 4: Full Features (optional)

Depending on the requirements of the current CC solution, a metadata database will be implemented to support the other CRI image-service-based APIs.

jiangliu commented 2 years ago

What's the plan to integrate with AA? And which step will support decryption/verification?

bpradipt commented 2 years ago

> What's the plan to integrate with AA? And which step will support decryption/verification?

afaik that should be part of ocicrypt-rs

fitzthum commented 2 years ago

> What's the plan to integrate with AA? And which step will support decryption/verification?

> afaik that should be part of ocicrypt-rs

Yeah, the plan says that image verification will be added to ocicrypt-rs. Are we going to try to use the existing keyprovider interface to do this, or will we need to extend the AA to provide public keys?

cc: @jialez0

jialez0 commented 2 years ago

I once opened an issue in AA's repository to explain the scheme by which AA supports container image signature verification. After rethinking and some modifications, I think the signature verification process with AA's participation should be as follows: the signature is downloaded together with the image during the image pulling phase, and then ocicrypt-rs obtains the public key and the policy.json file from AA to verify the signature.
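(A rough pseudo-Rust sketch of this flow; all of these names are hypothetical, not actual image-rs/ocicrypt-rs/AA APIs:)

```rust
use anyhow::Result;

// Hypothetical interface: fetch named resources over the attested channel.
trait AttestationAgent {
    fn get_resource(&self, id: &str) -> Result<Vec<u8>>;
}

// Sketch of the proposed flow: the signature travels with the image,
// while the trust material comes from AA.
fn verify_pulled_image(
    aa: &dyn AttestationAgent,
    image_digest: &str,
    signature: &[u8],
) -> Result<()> {
    let pubkey = aa.get_resource("image-signing-pubkey")?; // guest owner's key
    let policy = aa.get_resource("policy.json")?;          // authorization policy
    // Verify `signature` over `image_digest` with `pubkey`, then evaluate
    // `policy` before the layers are unpacked.
    todo!("call into the signature backend, e.g. simple signing")
}
```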

@fitzthum @arronwy Do you think the above plan is feasible?

fitzthum commented 2 years ago

I think it makes sense to add a public key endpoint to the AA. We might be able to get away with using the existing endpoint and some specially crafted annotation, but it is probably better to have a specific one.

Does the policy.json file need to be supplied by the guest owner, or can it be created dynamically inside the guest?

Also, have we decided which verification standard we are going to support? As I understand it, there are a few different ways to sign containers.

cc: @stevenhorsman

arronwy commented 2 years ago

> I once opened an issue in AA's repository to explain the scheme by which AA supports container image signature verification. After rethinking and some modifications, I think the signature verification process with AA's participation should be as follows: the signature is downloaded together with the image during the image pulling phase, and then ocicrypt-rs obtains the public key and the policy.json file from AA to verify the signature.

Agreed. The signature ensures image-level data integrity and can be part of the container image, while the public key and policy need to come from a trusted party, which is best done through the attestation service, with our runtime enforcing these policies. One question is whether we can make policy.json also part of the container image; then we can reduce the maintenance work for the remote attestation service. @fitzthum @stevenhorsman @jialez0

jialez0 commented 2 years ago

@arronwy It may not be a good idea to bind the policy.json file to the container image. The policy file specifies the policy to apply when deciding whether to accept a container image, or when a single signature on a container image is valid. In the end, only container images accepted by the policy requirements are authorized to be pulled and run. Therefore, logically, policy files should not be bound to a single container image.
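(For reference, a minimal policy.json in the containers-policy.json format used by simple signing looks roughly like this; the registry scope and key path are placeholders:)

```json
{
  "default": [{ "type": "reject" }],
  "transports": {
    "docker": {
      "registry.example.com/myorg": [
        {
          "type": "signedBy",
          "keyType": "GPGKeys",
          "keyPath": "/run/image-security/pubkey.gpg"
        }
      ]
    }
  }
}
```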

In addition, in the confidential container image signature system, since the public key is obtained from the guest owner through attestation when performing signature verification, perhaps the policy file is not required? @fitzthum @stevenhorsman

stevenhorsman commented 2 years ago

This is an interesting line of questioning and one that I'm not sure I have a good answer for at the moment. With respect to the Red Hat container image signing, there are 3 files that we need to somehow get into the system: the policy.json file, the registries.d configuration that says where the detached signatures (the sigstore) live, and the public key used to verify them.
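(For the signature-location piece, a registries.d entry pointing at a signature store looks roughly like this; the hostnames are placeholders:)

```yaml
docker:
  registry.example.com:
    sigstore: https://sigstore.example.com
```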

We then have a question of whether we also want to support DCT (Docker Content Trust), which works differently, but I know less about it.

I'm not sure if any of this is new information and I'm far from an expert, but I think it's the conclusion that I came to when I did the initial implementation and investigation using skopeo. I hope it was of some help!

fitzthum commented 2 years ago

Yeah I think that ideally the AA won't have to know anything about the policy. Then we could just add an endpoint to the AA for getting a public key (which we could potentially even reuse for other things that need public keys later).

jialez0 commented 2 years ago

According to my understanding, our current conclusion may be as follows: ocicrypt-rs obtains the public key from a dedicated AA endpoint, and the policy file is generated dynamically inside the guest from that key rather than provisioned separately.

This may be the most concise and clear solution at present, but a key problem we face is: where does ocicrypt-rs get the signature? Under the existing simple signing scheme, the location of the signature must be specified in local configuration and cannot be placed in the registry together with the container image, which means that the image owner must configure the kata boot image to indicate the storage location of the signature (in local storage or on a dedicated server). But if we still want the signature to be part of the container image, maybe we eventually need to modify the container image format standard? @jiazhang0 @jiangliu What do you think?

jiazhang0 commented 2 years ago

@fitzthum @arronwy @stevenhorsman @jialez0

I would rather add a new and more general API endpoint to satisfy all potential requirements that need to obtain certain resources through the secure and attested channel established by AA. This new endpoint can accept a request in JSON format describing any necessary asset items. To satisfy a new requirement, we can simply extend the JSON request format instead of adding a new endpoint. On the KBS side, it simply manages the resources with a set of mappings, e.g., "resource ID" -> "one or a group of resources (e.g., public key, private key, signature, policy.json, etc.)".
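(A minimal sketch of what such a request/response pair could look like in Rust with serde; every field name here is illustrative, since the format would be defined by this very proposal:)

```rust
use std::collections::HashMap;

use serde::{Deserialize, Serialize};

// Hypothetical generic "get resource" request sent over the attested channel.
#[derive(Serialize, Deserialize)]
struct GetResourceRequest {
    // e.g. ["image-signing-pubkey", "policy.json"]
    resource_ids: Vec<String>,
}

// Hypothetical response: resource ID -> base64-encoded payload.
#[derive(Serialize, Deserialize)]
struct GetResourceResponse {
    resources: HashMap<String, String>,
}
```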

Note: this new API endpoint is not related to the key provider protocol. Instead, we define it to extend the ability of AA. Any caller, with or without a cryptography context, can use this API to obtain data with the attributes of confidentiality and/or integrity. We don't need to limit the extension ability to the key provider protocol only. Instead, the key provider protocol just works as a service interface supported by AA for the single scenario of image decryption.

I don't agree with the dynamic generation mechanism for policy.json, because policy.json provides the ability of explicit authorization configured by the user, which can determine what kind of image is allowed to be pulled; it is not just a mechanism of signature authentication. Obviously, a dynamic policy.json loses this ability. The fundamental reason is that policy.json is not merely a carrier of public key and metadata info. It is an explicit authorization mechanism defined by the user, together with a specification of how to implement that authorization (e.g., by configuring the specified public key and other metadata). What I mean is that defining a policy.json is a top-down approach, while dynamic policy.json generation is a bottom-up approach.

The following is a scenario illustrating that independent policy.json provisioning is the better choice: assume a user signs the same image with two different private keys, a primary key and a secondary key, and uploads the images to two different registries (i.e., a primary-backup scheme). One day the primary key is leaked, so the deployed policy.json file needs to be updated to remove the information related to the primary key. If using a dynamic policy.json, how can we update the deployed policy.json, or remove only the information related to the primary key in it? Similarly, if the user decides to stop using the primary registry and switch to the secondary registry, how can we update the deployed policy.json?

By contrast, if we adopt the more general endpoint method, e.g., describing and obtaining policy.json as a resource, we can solve the above problems more easily.

stevenhorsman commented 2 years ago

> I don't agree with the dynamic generation mechanism for policy.json, because policy.json provides the ability of explicit authorization configured by the user, which can determine what kind of image is allowed to be pulled; it is not just a mechanism of signature authentication. Obviously, a dynamic policy.json loses this ability. The fundamental reason is that policy.json is not merely a carrier of public key and metadata info. It is an explicit authorization mechanism defined by the user, together with a specification of how to implement that authorization (e.g., by configuring the specified public key and other metadata). What I mean is that defining a policy.json is a top-down approach, while dynamic policy.json generation is a bottom-up approach.

I agree - sorry, I wasn't clear above. I definitely agree that ideally we want a policy.json to be pre-configured by the workload owner, with 'us' getting it from the general-purpose endpoint. I was suggesting that the dynamically generated policy.json approach could be a stepping stone while we don't have that general API, but if we are focusing on the end goal here, it's not what we want to aim for.

I think the general-purpose (get trusted configuration/secrets) API which could serve the policy.json would also be the best way to get the signature, rather than changing the container image format, though obviously we'll need to think about the user experience (if others haven't already) and how this stuff gets defined and passed in, to ensure it's not too difficult to use in practice.

fitzthum commented 2 years ago

I am on board with @jiazhang0's suggestion above, although I am somewhat wary of broadening the scope of the AA too much, and we might run into size constraints trying to inject so much stuff with SEV(-ES).

I want to raise a fairly serious related issue. I may create a separate issue for this in the AA. The question is how we can trust the validity of a public key without verifying the identity of the KBS. Let me explain: currently we provision secrets via the AA by setting up a secure channel between the KBC and KBS. As far as I am aware, the KBC does not do anything to validate the public key/identity of the KBS. Instead, we assume that we must have the correct KBS if we are able to decrypt the image. To put it another way, we aren't worried about the CSP rerouting our communication with the KBS to a malicious KBS, because the malicious KBS wouldn't have the correct keys and the containers wouldn't be decrypted.

Unfortunately this assumption does not hold for signatures. If a client launches a signed but unencrypted container, what stops the CSP from tampering with the container image and then rerouting traffic to a KBS that will inject a public key that validates the manipulated image?

It seems like if we want to support signed but unencrypted images we may need to add explicit verification of the KBS to the KBC. Otherwise, how do we know that we are getting the correct signature?

jiazhang0 commented 2 years ago

> Unfortunately this assumption does not hold for signatures. If a client launches a signed but unencrypted container, what stops the CSP from tampering with the container image and then rerouting traffic to a KBS that will inject a public key that validates the manipulated image?

> It seems like if we want to support signed but unencrypted images we may need to add explicit verification of the KBS to the KBC. Otherwise, how do we know that we are getting the correct signature?

By the way, image signature verification should happen prior to decryption when unpacking an image, so the explicit verification of the KBS by the KBC that you mention is also required for the use cases that use the image encryption scheme.

About the explicit verification of the KBS by the KBC: I think it is definitely required for CCv1 in any case. The KBS acts as a TLS server which can be authenticated via a well-known public intermediate or root CA. Of course, a CSP can still reroute the communications to a malicious KBS with a valid certificate to compromise security, at the cost of sacrificing the CSP's reputation.

Sorry, I would object again to supporting unencrypted containers in CCv1, because image encryption acts as a gatekeeper that enforces a launch of the attestation verification procedure in order to prove the unverified TEE/pod is trustworthy and to launch secret provisioning. It is definitely required for CCv1, and your scenario proves this fact.

In summary: the KBC must explicitly authenticate the KBS, and image encryption must remain mandatory in CCv1.

jiazhang0 commented 2 years ago

> I don't agree with the dynamic generation mechanism for policy.json, because policy.json provides the ability of explicit authorization configured by the user, which can determine what kind of image is allowed to be pulled; it is not just a mechanism of signature authentication. Obviously, a dynamic policy.json loses this ability. The fundamental reason is that policy.json is not merely a carrier of public key and metadata info. It is an explicit authorization mechanism defined by the user, together with a specification of how to implement that authorization (e.g., by configuring the specified public key and other metadata). What I mean is that defining a policy.json is a top-down approach, while dynamic policy.json generation is a bottom-up approach.

> I agree - sorry, I wasn't clear above. I definitely agree that ideally we want a policy.json to be pre-configured by the workload owner, with 'us' getting it from the general-purpose endpoint. I was suggesting that the dynamically generated policy.json approach could be a stepping stone while we don't have that general API, but if we are focusing on the end goal here, it's not what we want to aim for.

> I think the general-purpose (get trusted configuration/secrets) API which could serve the policy.json would also be the best way to get the signature, rather than changing the container image format, though obviously we'll need to think about the user experience (if others haven't already) and how this stuff gets defined and passed in, to ensure it's not too difficult to use in practice.

OK, I think we are on the same page now. Actually, my thoughts can be summed up in one sentence: use the attested channel established by AA to replace untrusted channels, as needed.

optimistyzy commented 2 years ago

@jiangliu Regarding "snapshot module support overlay2": it means that for one image there will be multiple layers, and each layer should support encryption/decryption. So shall we support different keys for different layers? That would make it more flexible to support image layer sharing.

magowan commented 2 years ago

For me the use case for unencrypted but signed containers is quite simple and does not break any trust model. Essentially, I do not agree with the point "Container image protections must include confidentiality". Integrity gives us trust; confidentiality gives us secrecy...?

Sidecars are common practice in Kubernetes, and injecting them into a pod by webhooks or other means is also, I believe, quite common (istio, logging, etc.). For me, our goal should be to allow the same behaviour for confidential containers as we have for containers today. If I am happy for such a sidecar image to be used, why do I need to go out there and encrypt it, and then change the cluster config to provide the encrypted version for use with confidential containers and an unencrypted version for all non-confidential pods? Using signatures provides a simple mechanism to preserve trust for the confidential use case while changing very little to nothing about the wider cluster configuration, webhooks, sidecars, etc.

jiazhang0 commented 2 years ago

> For me the use case for unencrypted but signed containers is quite simple and does not break any trust model. Essentially, I do not agree with the point "Container image protections must include confidentiality". Integrity gives us trust; confidentiality gives us secrecy...?

The container image crosses the trust boundary from an untrusted channel into the TEE, so we need to at least ensure its integrity. The statement you quoted has a pre-condition related to our current implementation: "image encryption acts as a gatekeeper to enforce a launch of the attestation verification procedure in order to prove the unverified TEE/pod is trustworthy and launch secret provisioning." In the current implementation, image encryption is an indispensable mechanism, not an optional protection approach, so it does provide both confidentiality and integrity.

From a general perspective, allowing the launch of an unencrypted but signed container is reasonable, but it is not simple. As Tobin commented: "It seems like if we want to support signed but unencrypted images we may need to add explicit verification of the KBS to the KBC." This is what we are currently trying to address. If it is solved, for example by designing a signature verification procedure which also triggers enforcement of the attestation verification procedure plus explicit certificate authentication, starting an unencrypted but signed container would be appropriate.

magowan commented 2 years ago

So I guess if we separate the use case from how we satisfy it, then I am more comfortable. I see a use case which I feel is important, and I'm happy to have a debate on use cases later :-) I see there is a challenge with our current attestation flow in delivering material from a KBS in a trusted way. But the challenge in delivering the material does not in itself mean the use case is invalid. If I wish to create my own boot image which has the signature material already inside it, and therefore part of the attestation measurement, then I don't need to worry about the KBS for this. At this point I have removed the KBS trust problem and simply need the image-rs capability to use this material to verify the signature. (This risks a whole side discussion, but the key point is that I feel the capability required within image-rs should not be determined by the current capabilities of attestation. We seem to be at risk of tying image-rs to all aspects of attestation. I am, however, on board with priority being influenced in this way, but it felt stronger than priority in some of the comments.)

(I really appreciate that second languages and written text are a factor, and I respect other people's ability to communicate very well in second languages, which is infinitely better than my own ability!)

I guess I may be missing something though. "we aren't worried about the CSP rerouting our communication with the KBS to a malicious KBS because the malicious KBS wouldn't have the correct keys and the containers wouldn't be decrypted." Why does this not worry us, especially with respect to a man-in-the-middle attack? If we don't authenticate the KBS, then a man-in-the-middle attack can get access to key material and supplement the real response with additional keys, allowing malicious sidecar containers to be added to a pod? Encrypted containers won't solve this problem?

fitzthum commented 2 years ago

> Why does this not worry us, especially with respect to a man-in-the-middle attack?

As you suggest, a CSP can intercept the communication with the KBS and with the registry. That said, a CSP can always create a new pod and start up whatever containers they want inside of it. The key is that these won't be the customer's containers, because the CSP doesn't have the keys to the customer's containers. In short, the MitM attack can't compromise confidentiality.

> If we don't authenticate the KBS, then a man-in-the-middle attack can get access to key material and supplement the real response with additional keys, allowing malicious sidecar containers to be added to a pod? Encrypted containers won't solve this problem?

The communication between KBC and KBS takes place on a secure channel, so the CSP won't be able to access any key material or add any additional keys. The CSP could, however, set up the secure channel with their own KBS rather than the client's. The point is that there won't be any way to mix and match genuine encrypted layers and malicious encrypted layers in the same pod.

Now, there is a potential attack where the CSP does not break confidentiality, but substitutes a client's workload with a malicious workload that looks the same but does something bad when the client tries to use it. Ideally this is where signatures come in, although you can get similar guarantees with properly designed encrypted images. We probably want to support signed images; this is likely to be a very prominent use case. Unfortunately, our initial plan for signatures does not seem to be secure. image-rs will support signatures, but how to make them secure is something we have to figure out soon (there are a handful of solutions) or we may have to adjust our expectations.

I'm not sure if SE suffers from the signature validation issue. Given the unconditional trust of the ultravisor, we can probably be sure that a public key included in the FS is legit, although I don't really know how the ultravisor gets those keys from the client, and there could be issues there. (@Jakob-Naucke) I don't really view encrypting the initrd as a solution here, though, because we don't support it on any other platforms. That said, putting the public key of the KBS in the agent config file (which is measured) could be part of the solution.

Jakob-Naucke commented 2 years ago

I'm not really sure the signature validation issue is architecture specific.

So first, I don't think a pod that runs just a signed, unencrypted container is a sensible use case. A guest owner has an interest in running a confidential pod because it is going to process some confidential data, and when that data isn't inside the container image, it has to get there via transport security (or encrypted data at rest, but I guess we're keeping that can of worms closed for a bit still). For transport security, the container image must contain a key that the guest owner knows, and since that key must not be readable by the CSP, the image already has to be encrypted. (Obviously, for asymmetric encryption, the guest owner must know the public key and the CSP must not be able to read the private key.)

With that said, I do get the sidecar use case. We would not want malicious sidecars in confidential containers running some guest owner workload, and to avoid having to encrypt those too, we sign them. Okay. However, sidecars could only join guest owner confidential pods that are already sure to have the correct decryption key, and thus, the proper keys for signature validation.

In short, having just an unencrypted image in a confidential pod isn't really useful AFAICT. This logic applies to all architectures.

fitzthum commented 2 years ago

I think this is a reasonable perspective. It could be fine for us not to really support pods with only signed but not encrypted containers. Let's make sure we communicate this well, though. The average user would probably expect that a signed container will always be verified, and it is not obvious that this isn't true.

On the other hand, there are ways to make a more foolproof signature verification scheme. Basically, we would need to extend the KBC to verify the identity of the KBS.

magowan commented 2 years ago

I wasn't thinking of a specific technology; the keys we are talking about for signatures are public, so they do not need to be within an encrypted initrd. It is fine for them to be in the trusted boot image as long as it is measured.

I completely agree that if any secrets are held within the container image, it needs to be encrypted. As you say, we can in the future explore secrets for transport/provision of data, which may (or may not) provide an alternative path for confidence in the guest.

However, I do see a case for just signature checking for all container images running in a pod. In this case there is nothing secret within any of my container images; it is all about protecting the data, not the containers. Now, as you point out @Jakob-Naucke, this means there would need to be a secret that protects the data from actors outside the TEE, and this would be provided as a result of attestation. If an invalid key is provided, then the data cannot be "unlocked", and so although the containers can be pulled, unpacked, and started, they cannot access the data unless they receive the appropriate secret.

This supports considering different trust scenarios - are we trying to protect data or code? And it feeds back to my direction of thinking: what is it that must not be visible outside the TEE? That is what must be encrypted, and attestation is the flow that provides the key. This is different from what I must guarantee the integrity of before I accept it into the TEE.

And of course, as has been pointed out, we must have trust in the data/signature we use to check the integrity of anything. If it has been delivered alongside some secrets that are essential for the pod to actually function, then we are implicitly trusting the delivery of the signatures.

magowan commented 2 years ago

I still haven't got it clear in my head why there isn't a problem with the man-in-the-middle attack in general. Happy to be educated on what I am missing.

> The communication between KBC and KBS takes place on a secure channel, so the CSP won't be able to access any key material or add any additional keys. The CSP could, however, set up the secure channel with their own KBS rather than the client's. The point is that there won't be any way to mix and match genuine encrypted layers and malicious encrypted layers in the same pod.

Why can my man in the middle not pretend to be a KBS on one side and then act as a KBC to the real KBS on the other side? So instead of KBC <-> KBS, I have KBC <-> MiM_KBS | MiM_KBC <-> KBS. I then inject a malicious container into the pod. When the KBC <-> KBS flow starts relating to my malicious container, my man in the middle simply responds with the required key; for all other requests it simply reforms the request, passes it on to the real KBS, and passes the response back.

Now I have successfully injected my malicious container; I can break out of the malicious container and have access to everything - all secrets, data, etc. relating to the real containers?

Using encrypted containers hasn't prevented this; only the KBC confirming the identity of the KBS can prevent it. So the problem of the KBC needing to confirm the identity of the KBS at the other end of the secure channel is not specific to signed containers?

jiazhang0 commented 2 years ago

> Why does this not worry us, especially with respect to a man-in-the-middle attack?

> As you suggest, a CSP can intercept the communication with the KBS and with the registry. That said, a CSP can always create a new pod and start up whatever containers they want inside of it. The key is that these won't be the customer's containers, because the CSP doesn't have the keys to the customer's containers. In short, the MitM attack can't compromise confidentiality.

Let's consider it more. Assuming a protected boot image pre-provisioned with some sensitive data is used, the malicious workload from a tampered container image can leak sensitive data stored in the protected boot image.

jiazhang0 commented 2 years ago

> On the other hand, there are ways to make a more foolproof signature verification scheme. Basically, we would need to extend the KBC to verify the identity of the KBS.

I think the authentication of the KBS is not complex. In the current PKI model and typical HTTPS use cases, the TLS server certificate used by the KBS binds to its domain name, and the HTTPS client (here the KBC, assuming the KBS uses the HTTPS protocol) checks the consistency between the domain name recorded in the TLS server certificate and the domain name it is requesting. In other words, a MitM attack cannot easily be achieved (unless the CA makes a mistake).

Note that the KBS may use a TLS server certificate signed by a private CA. In this case, the pubkey/certificate used to authenticate the KBS needs to be pre-provisioned into the protected boot image.
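(A minimal sketch of that pinning idea with rustls, assuming the CA certificate in DER form is baked into the measured boot image; the function name and error handling are illustrative:)

```rust
use std::sync::Arc;

use rustls::{Certificate, ClientConfig, RootCertStore};

// Trust only the pre-provisioned (possibly private) CA when connecting to
// the KBS, rather than the system trust store; rustls then validates the
// server certificate chain and hostname against this root automatically.
fn kbs_tls_config(ca_der: Vec<u8>) -> Result<Arc<ClientConfig>, rustls::Error> {
    let mut roots = RootCertStore::empty();
    roots.add(&Certificate(ca_der))?;
    let config = ClientConfig::builder()
        .with_safe_defaults()
        .with_root_certificates(roots)
        .with_no_client_auth();
    Ok(Arc::new(config))
}
```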

fitzthum commented 2 years ago

> Why can my man in the middle not pretend to be a KBS on one side and then act as a KBC to the real KBS on the other side? So instead of KBC <-> KBS, I have KBC <-> MiM_KBS | MiM_KBC <-> KBS. I then inject a malicious container into the pod. When the KBC <-> KBS flow starts relating to my malicious container, my man in the middle simply responds with the required key; for all other requests it simply reforms the request, passes it on to the real KBS, and passes the response back.

I think the short answer is that we use public key crypto to set up the secure channel, and the public key of the hardware is validated. Let me elaborate. This will be specific to SEV, but I think TDX has very similar ideas.

For SEV(-ES) we set up a secure channel between the PSP (the hardware module that enforces stuff) and the guest owner, or something standing in for the guest owner (like the KBS). To set up the secure channel, both the PSP and the KBS provide their public keys. Using Diffie-Hellman, these are used to generate one shared key. This shared key can be derived by each party but not by anyone else. Even someone who intercepts both public keys won't be able to derive the shared secret. This ensures that we have a secure connection between the two parties. The shared secret is essentially used to encrypt and verify every message that goes between the parties (I am skipping a step here, but it's fine).
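(The property described here can be demonstrated with any Diffie-Hellman implementation; below is a sketch using the x25519-dalek crate purely as a stand-in for the key exchange SEV actually performs:)

```rust
use rand_core::OsRng;
use x25519_dalek::{EphemeralSecret, PublicKey};

// Each side combines its own private key with the peer's public key and
// derives the same shared secret; an observer of both public keys cannot.
fn main() {
    let psp_secret = EphemeralSecret::random_from_rng(OsRng);
    let psp_public = PublicKey::from(&psp_secret);

    let kbs_secret = EphemeralSecret::random_from_rng(OsRng);
    let kbs_public = PublicKey::from(&kbs_secret);

    let psp_shared = psp_secret.diffie_hellman(&kbs_public);
    let kbs_shared = kbs_secret.diffie_hellman(&psp_public);

    assert_eq!(psp_shared.as_bytes(), kbs_shared.as_bytes());
}
```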

That said, you are suggesting that there is someone in the middle who goes through this process to connect with the PSP and then goes through the same process again to connect to the KBS (this time impersonating the PSP). The key here is that the public key that the PSP provides (known as the Platform Diffie-Hellman Key, or PDH) can be validated and linked to the PSP. For SEV(-ES) there is a somewhat complicated chain of keys that are used to sign the PDH. You can find more details in this doc, section 2.1. Essentially, the PDH is signed by a key that proves that the key belongs to a valid AMD machine and by a key that proves that the machine belongs to the CSP that it is supposed to. The KBS can validate both of these signatures. A MitM is not going to be able to come up with a key that passes these tests*. Thus, the real KBS isn't going to be tricked into connecting with the MitM.

The MitM could forward along the genuine PDH, but then it wouldn't be able to derive the shared key (because the MitM doesn't have the PSP's private key). Without the shared key the MitM can't read or modify any of the messages.

*You might ask whether an attacker that has a valid AMD machine (perhaps like a rogue CSP) could generate a valid PDH and use that to launch a MitM attack. Not really. I won't go into all the details, but basically this is equivalent to starting a valid SEV VM on a different node, which does not break confidentiality as long as the measurement checks out.

fitzthum commented 2 years ago

We talked about the signature validation issue at length in the meeting today. To briefly summarize:

* People seem fine with not supporting signature validation in pods with no encryption, although we need to communicate this very clearly or even explicitly prevent it.
* `image-rs` still needs support for signatures.
* We need to think about how exactly we enforce policies that require a pod-level view. For example, `image-rs` can't really confirm on its own that every container meets a certain criterion, or even that at least one does.

magowan commented 2 years ago

> The KBS can validate both of these signatures. A MitM is not going to be able to come up with a key that passes these tests*. Thus, the real KBS isn't going to be tricked into connecting with the MitM.

Thanks, that's the piece that was missing in my head: essentially the "real" KBS can verify the client is a valid AMD machine. And as you say, attempts to do this on a valid AMD machine start to lead us to attestation measurements which wouldn't match, and if they did, we don't care at that point anyway... etc. etc.

jiazhang0 commented 2 years ago

> We talked about the signature validation issue at length in the meeting today. To briefly summarize:
>
> * People seem fine with not supporting signature validation in pods with no encryption, although we need to communicate this very clearly or even explicitly prevent it.
> * `image-rs` still needs support for signatures.
> * We need to think about how exactly we enforce policies that require a pod-level view. For example, `image-rs` can't really confirm on its own that every container meets a certain criterion, or even that at least one does.

I think we can move this discussion to AA.

wainersm commented 2 years ago

Hi @arronwy !

@stevenhorsman put a plan in place to merge CCv0 into kata's main branch (see https://github.com/kata-containers/tests/issues/4441).

I was wondering how far from "step 1" in your plan we are now. Once https://github.com/confidential-containers/image-rs/pull/5 is merged, it seems we will be able to drop skopeo. Am I right? How about the umoci replacement?

I was also thinking that once we're ready for the CCv0-into-main merge, we should have a release of image-rs (version 0.1? 1.0?) so that we start building the kata-agent from a pinned image-rs version. What do you think?

arronwy commented 2 years ago

> Hi @arronwy !

> @stevenhorsman put a plan in place to merge CCv0 into kata's main branch (see kata-containers/tests#4441).

> I was wondering how far from "step 1" in your plan we are now. Once confidential-containers/attestation-agent#5 is merged, it seems we will be able to drop skopeo. Am I right? How about the umoci replacement?

Yes, the unpack and mount features provided by umoci are implemented in image-rs now; we can drop skopeo after confidential-containers/attestation-agent#5 is merged.
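(For a flavour of the replaced umoci unpack step, a minimal sketch with the tar and flate2 crates; whiteout handling and the real image-rs API are omitted, and the function name is illustrative:)

```rust
use std::{fs::File, path::Path};

use flate2::read::GzDecoder;
use tar::Archive;

// Decompress a gzipped layer tarball and apply it under the bundle rootfs.
fn apply_layer(layer_tarball: &Path, rootfs: &Path) -> std::io::Result<()> {
    let file = File::open(layer_tarball)?;
    let mut archive = Archive::new(GzDecoder::new(file));
    archive.unpack(rootfs)
}
```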

> I was also thinking that once we're ready for the CCv0-into-main merge, we should have a release of image-rs (version 0.1? 1.0?) so that we start building the kata-agent from a pinned image-rs version. What do you think?

Yes, after we verify that image-rs can pass all the kata CI that skopeo + umoci currently passes, we can have a release of image-rs.

ariel-adam commented 1 year ago

@arronwy is this issue still relevant, or can it be closed? If it's still relevant, to what release do you think we should map it (mid-November, end-December, mid-February, etc.)?