rpbarnes opened 1 year ago
Great idea. I'm facing the same limitation. Adding `cacheFrom` and `cacheTo` construct props to the `DockerImageAsset` construct would be the best way to handle it.
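For illustration, such props might look something like this (a sketch only; the prop names and shapes are assumptions modeled on buildx's cache options, not an existing API):

```ts
import { DockerImageAsset } from 'aws-cdk-lib/ecr-assets';

// Hypothetical cacheFrom/cacheTo props mirroring buildx's
// --cache-from/--cache-to options (not currently part of the construct).
const asset = new DockerImageAsset(this, 'AppImage', {
  directory: './app',
  cacheFrom: [{
    type: 'registry',
    params: { ref: '123456789012.dkr.ecr.us-east-1.amazonaws.com/app:cache' },
  }],
  cacheTo: {
    type: 'registry',
    params: { ref: '123456789012.dkr.ecr.us-east-1.amazonaws.com/app:cache', mode: 'max' },
  },
});
```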
I wonder if this hasn't been done already for the ironic reason that ECR itself doesn't support cache manifests. I'd personally love to see this though; it makes builds unnecessarily long and/or forces you to do builds outside CDK, which somewhat defeats the purpose.
I tried docker-compose with an ECR registry as cache and it works. It also works with AWS Copilot. But I can't find a way to do it with AWS CDK.
I ended up using depot.dev (a paid docker build solution, it's really fast) and docker compose to completely circumvent the cdk docker build process.
If you're interested I can put together a small write up of what I did with some code samples.
In the meantime it's possible to use `buildx` with `type=local` and either S3 or EFS caching on the target folder(s).
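For example, with an EFS mount (or a folder synced to S3) at `/mnt/buildcache` (the path is illustrative):

```sh
# Read layer cache from the shared folder and write the updated cache
# back after the build (mode=max also caches intermediate layers).
docker buildx build \
  --cache-from type=local,src=/mnt/buildcache \
  --cache-to type=local,dest=/mnt/buildcache,mode=max \
  -t myapp:latest .
```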
On the note of caching, added https://github.com/aws/aws-cdk/pull/24024 to hopefully at least expose the flags to do so.
@RichiCoder1 did you find a way to have `DockerImageAsset` use a container driver rather than the default docker driver?
I believe there's the (undocumented?) `CDK_DOCKER` environment variable, which changes the binary it'll use for the build command by default: https://github.com/aws/aws-cdk/blob/main/packages/cdk-assets/lib/private/docker.ts#L261 It must use a docker-compliant CLI API though.
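For example (the wrapper path is illustrative):

```sh
# Tell cdk-assets to invoke a docker-compatible wrapper instead of docker.
CDK_DOCKER=/usr/local/bin/docker-wrapper.sh cdk deploy
```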
I'm using a custom build image for `selfMutation`, creating & bootstrapping a container driver in the prebuild phase to enable `--cache-to` and `--cache-from`. `DockerImageAsset` appears to be calling `docker build` from the custom build image but doesn't have the container driver loaded – as though it's using the image from before the buildspec commands were run. I could script `CDK_DOCKER` to check for the driver & load it if needed, but wondering if I've overlooked a simpler approach.
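A minimal sketch of scripting that via `CDK_DOCKER` (the builder name and the `build` → `buildx build` rewrite are my assumptions, not anything CDK provides):

```sh
#!/usr/bin/env sh
# Hypothetical CDK_DOCKER wrapper: ensure a docker-container builder
# exists, then route `docker build` through it via buildx.
if ! docker buildx inspect cdk-builder >/dev/null 2>&1; then
  docker buildx create --name cdk-builder --driver docker-container --bootstrap
fi
if [ "$1" = "build" ]; then
  shift
  exec docker buildx build --builder cdk-builder "$@"
fi
exec docker "$@"
```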
Based on the comment here it looks like support for cache manifests for AWS ECR for `--cache-to` is almost here – does this unblock this one when it is available?
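For reference, the ECR-compatible registry cache export that BuildKit 0.12 enables looks roughly like this (the repository URL is a placeholder):

```sh
# image-manifest=true makes BuildKit write the cache as an OCI image
# manifest, which ECR can store (requires BuildKit >= 0.12).
docker buildx build \
  --cache-from type=registry,ref=123456789012.dkr.ecr.us-east-1.amazonaws.com/app:cache \
  --cache-to type=registry,ref=123456789012.dkr.ecr.us-east-1.amazonaws.com/app:cache,mode=max,image-manifest=true,oci-mediatypes=true \
  -t 123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest .
```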
Interested in this one as well
It looks like this functionality will be available in ECR when Docker 25 is released (or you can manually update BuildKit to 0.12).
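Until then, one way to get BuildKit 0.12 is to pin it in a container-driver builder (a sketch; the builder name is illustrative):

```sh
# Create a builder backed by a pinned BuildKit release and make it
# the default for subsequent buildx builds.
docker buildx create --name bk012 --driver docker-container \
  --driver-opt image=moby/buildkit:v0.12.0 --bootstrap
docker buildx use bk012
```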
This issue has received a significant amount of attention so we are automatically upgrading its priority. A member of the community will see the re-prioritization and provide an update on the issue.
FYI Docker 25 release candidate 1 was released yesterday
Is there an update on this? We should be unblocked by the Docker 25 release.
I'm also curious if there are any updates on this?
I spent a decent amount of time getting this working for GitHub actions. Check out https://benlimmer.com/2024/04/08/caching-cdk-dockerimageasset-github-actions/ for details.
I also filed https://github.com/aws/aws-cdk/issues/29768, which might be of interest, too.
Describe the feature
Deploying docker images via CDK on CI/CD systems rebuilds the entire docker image from scratch on every deploy. It is a major workaround to tell CDK's `DockerImageCode` how to use previously stored images in CDK's ECR repository as caches for the next build. CDK should use the docker `--cache-to` and `--cache-from` args by default when building ECR assets, so that each image CDK builds and uploads to ECR is built incrementally on top of what already exists in ECR.

Use Case
This is useful when CDK is used to deploy environments from build machines that don't have access to a local docker cache.
Proposed Solution
When CDK pushes images to ECR, tag the image with some permanent reference to the asset, potentially the `resourceId`, such that the image can be referenced on subsequent builds.

When CDK builds images, look for an existing image asset in ECR before building; if the asset exists, set the `--cache-from` flag to point to that image.

When CDK builds an image, set the `--cache-to` flag to point to the image's tag in ECR.

The description above will add bloat to images. Another solution could be to save a separate caching image alongside each 'production' image. This way the `--cache-to` and `--cache-from` flags would point to the caching image, and the production image would get built without any of the caching assets, as in the sketch below.
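Concretely, the docker invocation CDK could run for the separate caching image might look like this (a sketch; the repository URL and tags are placeholders):

```sh
# Keep cache layers in a dedicated :cache tag so the production image
# stays free of cache bloat.
REPO=123456789012.dkr.ecr.us-east-1.amazonaws.com/app
docker buildx build \
  --cache-from type=registry,ref=$REPO:cache \
  --cache-to type=registry,ref=$REPO:cache,mode=max \
  -t $REPO:latest --push .
```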
Other Information

No response
Acknowledgements
CDK version used
2.15.0
Environment details (OS name and version, etc.)
macOS