sphw opened 3 years ago
On further investigation, there is actually a further reason the nitro-cli calls out to the Docker daemon: to inspect the Docker image. See https://github.com/aws/aws-nitro-enclaves-cli/blob/0e164db23194d9f3bca94c8f5ba42f869fbcdc7c/enclave_build/src/docker.rs#L252

There are a few ways this could be fixed. It might make sense to allow users to simply configure the env vars and cmd themselves; that would be useful as a feature in its own right. I am happy to open a PR adding those options to the CLI if there is interest.
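For illustration, an invocation with the proposed options might look something like the sketch below. Note that `--cmd` and `--env` are purely hypothetical flags here; only `--docker-uri` and `--output-file` exist in nitro-cli today:

```sh
# Hypothetical sketch: --cmd and --env are the proposed additions,
# not existing nitro-cli flags.
nitro-cli build-enclave \
  --docker-uri my-app:latest \
  --output-file my-app.eif \
  --cmd '/usr/bin/my-app --listen 8000' \
  --env 'RUST_LOG=info' --env 'APP_MODE=enclave'
```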
Our LinuxKit also has an additional feature which doesn't seem to have been merged (https://github.com/linuxkit/linuxkit/pull/3446) and needs to be rebased. @neacsu, do you know what is missing for upstreaming the feature? Are there any changes we would need to make in the codebase to support it?
I ran into that today when I was making these changes. I rebased one of the commits from that PR onto develop: https://github.com/m10io/linuxkit
I also made some preliminary changes here: https://github.com/m10io/aws-nitro-enclaves-cli/tree/cmd-env-params
I can clean those up and open a PR if there is interest
> I ran into that today when I was making these changes. I rebased one of the commits from that PR onto develop: https://github.com/m10io/linuxkit

The patch there is actually the first version from the PR, and the one that works with the current codebase from `aws-nitro-enclaves-cli`, i.e. the one that takes the prefix as a command option (`-prefix`).
@petreeftime The PR hasn't gotten more traction since I last addressed the required changes; I will try to ping the maintainers. The changes to the `linuxkit` repo would require some changes in `aws-nitro-enclaves-cli` as well: the prefix would have to be passed in the YML file instead of as a command option.
@sphw Thanks for taking an interest in this. The proposal to specify cmd and env separately sounds good. It would definitely be nice to have a way of doing things based only on the Docker format and not necessarily on the Docker daemon.
In that sense, we should try to integrate the `linuxkit` changes in some way. Right now, the best way to do that would probably be to update the `linuxkit` blob with our feature patch applied, until we can merge it upstream.
We had a similar problem. We're using `nix` for reproducible builds, and Dockerfile+daemon builds don't play well with the build sandbox (and the resulting images are also not reproducible).

Thankfully, the docker abstraction is not actually used later on in the .eif build: the layers are unpacked and repacked into an initramfs. This means that you can simply produce an initramfs from your docker archive and use that for your .eif builds with the `eif_build` command: https://github.com/aws/aws-nitro-enclaves-cli/blob/8f6ed740b05225512d86163f8b02292668c4b056/eif_utils/src/bin/eif_build.rs
Note that the official nitro init assumes a particular initramfs layout which you must follow:

- `/init`: statically linked init.c from this repo: https://github.com/aws/aws-nitro-enclaves-sdk-bootstrap/blob/746ec5d2713e539b94e651601b5c24ec1247c955/init/init.c
- `/nsm.ko`: the NSM kernel module. You can just copy it from `blobs`, but then you must also use the kernel from there.
- `/cmd`: a newline-separated list of strings that defines the "entrypoint" of your image
- `/env`: a newline-separated VAR=VALUE list of the environment of `/cmd`
- `/rootfs`: the actual root fs the nitro init will chroot to. This is where you need to unpack your docker image.
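As a rough sketch of that layout, assembling such an initramfs by hand could look like the following. All file names here (`rootfs.tar`, the cmd and env values) are placeholders; it assumes you already have the static init binary, `nsm.ko`, and an exported container filesystem:

```sh
# Sketch: build the initramfs layout the nitro init expects.
mkdir -p initramfs/rootfs
cp init nsm.ko initramfs/

# /cmd: newline-separated "entrypoint" of the image
printf '%s\n' /usr/bin/my-app --listen 8000 > initramfs/cmd

# /env: newline-separated VAR=VALUE environment for /cmd
printf '%s\n' PATH=/usr/bin:/bin RUST_LOG=info > initramfs/env

# /rootfs: the filesystem the nitro init will chroot into
tar -xf rootfs.tar -C initramfs/rootfs
```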
We see three issues in that ticket:

1. The `shiplift` library uses `docker` by default to build or pull the image into the local cache. Actually, `shiplift` can also work with `podman` if you specify a `DOCKER_HOST` environment variable pointing to podman's docker-compatible REST service: https://github.com/softprops/shiplift/blob/7aa868fc3dc13445bdf3452146d3f7a7799d2da2/src/docker.rs#L111. However, it's still not clear if that solves the issue of daemon-less and rootless images most people would like to use in their CI. But here is a useful article for trying different setups with `podman`: https://www.redhat.com/sysadmin/podman-inside-container. Please let us know if that could solve CI problems with nested containers.
2. We don't have the ability to use a container image tar file as an input for `nitro-cli`. It looks like a fair request, feasible to implement.
3. `linuxkit`, which is used to create `initramfs` images for the enclave, also pulls the container image through a `docker` daemon. You can temporarily fix this by updating to a newer version where the `docker` dependency is removed: https://github.com/shtaked/linuxkit/commits/nitro_cli_fixes

Hello, we are also facing this limitation and would like to have the ability to build a nitro eif in our CI without the docker daemon (using kaniko, for instance). Thanks @shtaked for pushing this issue; will it be prioritized at some point?
We've run into this issue while trying to build a docker image.tar file into an enclave image.
@shtaked

> Actually shiplift can also work with podman if you specify DOCKER_HOST environment variable pointing to podman docker-compatible REST service

This is quite tricky to set up in my experience:

- You cannot run `podman system service` inside a container without first disabling seccomp, e.g. by passing the `--security-opt seccomp=unconfined` flag to `docker run` (sketched at the end of this comment). This is a potential security risk, and may not be possible in some trusted environments (e.g. Concourse).
- `podman` is difficult to install on Amazon Linux, whereas `nitro-cli` is difficult to install anywhere except Amazon Linux.

> we don't have the ability to use a container image tar file as an input for nitro-cli. It looks like a fair request, feasible to implement
This would be the ideal solution for us. We currently don't have a good way of automating these builds while there continues to be a dependency on a docker daemon.
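For reference, the `DOCKER_HOST` route looks roughly like this. The flags shown are real docker/podman options, but treat this as an untested sketch (image name and socket path are placeholders), not a recommended setup:

```sh
# Outer container needs seccomp disabled for podman's service to run inside it
docker run --security-opt seccomp=unconfined -it my-ci-image

# Inside the container: expose podman's docker-compatible REST API on a socket
podman system service --time=0 unix:///tmp/podman.sock &

# Point docker clients (such as shiplift, used by nitro-cli) at that socket
export DOCKER_HOST=unix:///tmp/podman.sock
```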
Update from our side:

- We got rid of the `shiplift` library usage and are instead pulling the images from OCI repositories ourselves.
- We updated `linuxkit` to a latest version, and now it can be used without the docker daemon.

Thanks for the updates, sounds great! @shtaked definitely OCI; our CI builds with Kaniko to OCI, and for most cases OCI seems to be the way to go now. (Also, docker v1 is deprecated.)
@shtaked We are currently using Docker v1 images built using Nix. I'm not 100% sure where Nix's support for OCI images is at. If it doesn't exist yet, I can commit to adding it to `nixpkgs`; since the formats are similar, it shouldn't be a big lift.
@shtaked hey, do you have any updates on this work? We're still blocked by this issue at the moment. Thanks!
Again, I would like to point out that eif images may be built without involving Docker at all. Like @sphw, we use Nix as well, and have been building images for production use for quite a while now. There's no need for Docker or OCI indirection; you can just use a plain initrd that wraps a folder. See https://github.com/aws/aws-nitro-enclaves-cli/issues/235#issuecomment-1025129694 for details.
@exFalso How do you handle the signature part of the eif? That signature is critical for us. Also, even if your solution works, it's not convenient: we use docker images everywhere to version our builds.
You can just unpack the layers into the rootfs folder if you really want to start from a docker image. That's basically what the aws tooling does as well, but for some reason it jumps through many hoops to do it.
By signatures, do you mean the built-in way of signing the images (with PCR8 and whatnot)? `eif_build` has the corresponding options:
```
$ ./result/bin/eif_build --help
Enclave image format builder
Builds an eif file

USAGE:
    eif_build [FLAGS] [OPTIONS] --cmdline <String> --kernel <FILE> --output <FILE> --ramdisk <FILE>...

FLAGS:
    -h, --help       Prints help information
        --sha256     Sets algorithm to be used for measuring the image
        --sha384     Sets algorithm to be used for measuring the image
        --sha512     Sets algorithm to be used for measuring the image
    -V, --version    Prints version information

OPTIONS:
        --cmdline <String>                             Sets the cmdline
        --kernel <FILE>                                Sets path to a bzImage/Image file for x86_64/aarch64 architecture
        --output <FILE>                                Specify output file path
        --private-key <private-key>                    Specify the path to the private-key
        --ramdisk <FILE>...                            Sets path to a ramdisk file representing a cpio.gz archive
        --signing-certificate <signing-certificate>    Specify the path to the signing certificate
```
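Putting those options together, a signed build might look something like this; all file paths are placeholders, and the cmdline value is just an example (nitro-cli normally ships one in its blobs directory):

```sh
./result/bin/eif_build \
  --kernel bzImage \
  --cmdline "console=ttyS0 init=/init" \
  --ramdisk initramfs.cpio.gz \
  --output enclave.eif \
  --sha384 \
  --private-key signing-key.pem \
  --signing-certificate signing-cert.pem
```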
(Sidenote: I don't quite understand the benefit of this signature scheme, btw. Attestation will verify the image hash already, which you can sign out of band if you really want to. Having it as a built-in would only really make sense if there were specific nitro functionality tied to the signature, similar to SGX MRSIGNER, e.g. sealing capabilities tied to the signing key.)
Thanks, that's helpful. Well, the benefit of the signature (and then the PCR8 check) for us is that we trust the environment building the image, but we don't trust the rest of the infrastructure. In other words, we want to prevent an operator on the infrastructure from launching a rogue eif.
> You can just unpack the layers into the rootfs folder if you really want to start from a docker image. That's basically what the aws tooling does as well, but for some reason it jumps through many hoops to do it.
Yes, the process to build from a docker container is a bit complex and can be simplified for sure, but just using `docker` (or `podman`) was not a sufficient step, as there was no guarantee that from a given container image you would always get the same cpio archive. This is something to bear in mind if this sort of reproducibility is important. If you rely solely on PCR8 for validation, then it's probably not a requirement.
Yeah, reproducibility was precisely why we ended up not using Docker at all and creating the initrd directly. Dockerfiles in particular almost encourage users to create non-reproducible images by downloading non-pinned stuff from the internet.
If you do have e.g. a `root/` directory where you created the right structure (so the unpacked layers are under `root/rootfs`), you can get quite far by just normalizing timestamps and calling cpio with the right magic flags. For example, if you want the cpio result at `$out`, you can do

```sh
find root -exec touch -h --date=@1 {} +
(cd root && find * .[^.]* -print0 | sort -z | cpio -o -H newc -R +0:+0 --reproducible --null | gzip -n > $out)
```
which will also normalize the uid/gid.
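If you want to double-check determinism, building twice and comparing digests is enough (the output paths here are just examples):

```sh
# Two runs over the same normalized tree should produce identical digests
sha256sum build-a/initramfs.cpio.gz build-b/initramfs.cpio.gz
```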
Hey all,
I'm also running into this limitation - it would be awesome to build EIFs in our CI using the native nitro-cli tooling. Is this still a feature that is being looked into?
Hi all,

We ran into this limitation and decided to go with @exFalso's approach above, building the enclaves 'from scratch' with aws/aws-nitro-enclaves-image-format directly, rather than using the Nitro CLI. This uses Nix instead of Docker, so you need neither the daemon nor other privileged builders, and you do not need Docker images (but you can tweak it to use TARs if you do want that). You get some other benefits, like the possibility of using your own kernel or init process (with the Nitro CLI, hard-to-reproduce binaries are used instead).

We made these efforts open-source at monzo/aws-nitro-util. We are using this to build enclaves in production.
Hi @Cottand,
Congratulations on your new project initiative! We're excited to see the added value that aws-nitro-util brings for Nitro Enclaves users.
Lately, @foersleo has been hard at work enabling reproducible builds in aws-nitro-enclaves-sdk-bootstrap. Once this fantastic effort is complete, all the blobs distributed via the aws-nitro-enclaves-cli-devel package will be reproducible and verifiable.
What
Right now, you need a running Docker daemon to build an enclave image. The LinuxKit version currently included attempts to pull the image using the Docker daemon. https://github.com/linuxkit/linuxkit/pull/3573 now lets LinuxKit pull the images directly without the need for a Docker daemon.
Why
As part of our enclave support, we (M10 Networks) want to be able to build enclave images entirely in a Dockerfile. We distribute all of our services through Docker images, and all of the builds are performed entirely inside Dockerfiles. While it would be possible to change this for Nitro, I don't think that should be necessary when such a simple change is available.
How
To supply this functionality more or less out of the box, all that would be required is to update the included LinuxKit to one of the latest builds. This would then allow users to use skopeo, or a similar tool, to transfer their local image into LinuxKit's cache, located at `~/.linuxkit/cache`, themselves.

A more user-friendly follow-on would be to allow users to simply pass in a path to their own Docker image archive. At that point, the CLI would need to copy the image into the LinuxKit cache.
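For the skopeo route, something along these lines should work, assuming LinuxKit's cache is a standard OCI layout; the archive name and tag are placeholders, and the exact image reference LinuxKit looks up may differ, so treat this as a sketch:

```sh
# Copy a locally built docker archive into LinuxKit's OCI cache directory
skopeo copy \
  docker-archive:./myimage.tar \
  oci:"$HOME/.linuxkit/cache":myimage:latest
```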
As a temporary workaround for our use case, I can simply replace the LinuxKit binary in `/usr/share/nitro_enclaves/blobs` with my own. I think this use case is common enough that it should either be supported through easier means or documented in some way.