Open commiterate opened 2 months ago
Hi @commiterate, IMO it would be much more helpful if this were supported through `cdk.json`. I will discuss this with the team.
Marking it as P2 means it will not be immediately addressed by the team; however, if you would like this to be considered, please help prioritize it with upvotes.
Thanks for requesting this, and for sharing your research into earlier existing issues as well.
More reasons for having this option:
Currently the `aws-cdk.lambda-python-alpha` package uses a container to package up a layer. However, it passes in parameters that are not needed with podman (since podman runs as the invoking user).
The command line generated when prepping a layer therefore looks like this:
```shell
docker run --rm \
  -u "501:20" \
  -v "/Users/urkle/Projects/Netfor/CDK/netfor-cdk/smp_lambdas/src/layer_google:/asset-input:delegated" \
  -v "/Users/urkle/Projects/Netfor/CDK/netfor-cdk/cdk.out/asset.2b2e755e90bc7b0c887c1688b8b6d93ed65c286ed49bca0f6425b999735a9b09:/asset-output:delegated" \
  -w "/asset-input" cdk-de18f6cee13cbafcdf0bad83fb66063b89b266a182dd361054851531a4bed9a0 \
  bash -c "rsync -rLv /asset-input/ /asset-output/python && cd /asset-output/python && python -m pip install -r requirements.txt -t /asset-output/python"
```
However, the user mapping (`-u "501:20"`) causes issues with podman: the script fails with permission-denied errors. Removing that mapping allows the script to work. Furthermore, when running in that mode, the output folder (`asset.2b2e755e90bc7b0c887c1688b8b6d93ed65c286ed49bca0f6425b999735a9b09`) does not need to be world-writable (777).
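Until there is a first-class option, one possible workaround is to point `CDK_DOCKER` at a small shim script that drops the `-u uid:gid` mapping before delegating to podman. A sketch, written as a POSIX shell function (the `CONTAINER_CLI` override is purely illustrative, there to keep the sketch testable; it defaults to podman):

```shell
# Hypothetical "docker" shim: forwards the CDK's docker invocations to
# podman, dropping the "-u uid:gid" flag that triggers permission-denied
# errors under rootless podman.
docker_shim() {
  skip=false   # true while the uid:gid value following -u is pending
  first=true
  for arg in "$@"; do
    # Reset the positional parameters once; the for-loop has already
    # snapshotted the original argument list.
    if $first; then set --; first=false; fi
    if $skip; then skip=false; continue; fi
    if [ "$arg" = "-u" ]; then skip=true; continue; fi
    set -- "$@" "$arg"
  done
  # CONTAINER_CLI is an illustrative override for testing (default: podman).
  "${CONTAINER_CLI:-podman}" "$@"
}
```

Saved as an executable script named via `CDK_DOCKER`, the same `cdk synth` would run unchanged; only the user mapping is filtered out before podman sees it.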
Describe the feature
The undocumented `CDK_DOCKER` environment variable (https://github.com/aws/aws-cdk/issues/25657), introduced by https://github.com/aws/aws-cdk/pull/23855, lets users specify the container runtime to use for Docker/OCI container image asset functionality in the AWS CDK.

To make this portable, however, users need to rely on other environment management tools that support project-specific configurations which can be committed to source control, such as Nix (`flake.nix` or `shell.nix`), direnv (`.envrc`), dotenv (`.env`), or development containers.

Instead of needing to set up these other tools, users should be able to set this option in the `cdk.json` they are already committing to source control.

Use Case
Some CI/CD pipelines use Docker/OCI container-based environments, where users specify a container image (containing a Linux userspace) to use as the build environment (e.g. AWS CodeBuild steps in AWS CodePipelines, or GitLab Pipelines with runners using the Docker executor). That is, build jobs run inside a container.
Since the build job itself might need a container engine to build containers or to run containerized services for testing, users end up doing Docker-in-Docker (DinD).
DinD, however, has a lot of sharp edges, some of which stem from the Docker daemon requiring escalated privileges.
Unless the CI/CD pipeline has a workaround, such as exposing the underlying host's Docker daemon Unix socket with a bind mount or running the build job in a privileged container (both are security vulnerabilities), users need an alternative container runtime like Podman, which doesn't rely on a privileged daemon.
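For reference, the current environment-variable workaround is a one-liner in a direnv `.envrc`, a dotenv `.env`, or a CI job's environment (assuming podman is on the `PATH`):

```shell
# .envrc / CI job environment: tell the CDK CLI to shell out to podman
# instead of docker for container image asset bundling.
export CDK_DOCKER=podman
```

The point of this request is precisely that today this line has to live in environment tooling rather than in `cdk.json`.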
To make it easier to configure the AWS CDK to use an alternative container runtime in both local development and CI/CD environments, users should be able to set this option in their `cdk.json`. This file is already committed to source control for AWS CDK projects and is therefore pulled into CI/CD environments as well.
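A hypothetical `cdk.json` shape for this request (the `"docker"` key is illustrative only; no such setting exists today, and the `"app"` command is just a placeholder):

```json
{
  "app": "python app.py",
  "docker": "podman"
}
```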
CDK version used
2.155.0
Environment details (OS name and version, etc.)
macOS 14 (Sonoma)