pulumi / pulumi-docker

A Docker Pulumi resource package, providing multi-language access to Docker resources and building images.

Provide a way to 'push' a local docker image to a remote registry without requiring a 'build' #54

Closed CyrusNajmabadi closed 1 year ago

CyrusNajmabadi commented 5 years ago

Request based on a conversation with @tvalenta. LM would like a way to pull an image down from a private docker repo, but then push that same image up to an ECR repository.

Currently, they could use [RemoteImage] for the 'pull' step. But there is no way for them to push to a particular repository they create (without building). The request is to break up "buildAndPushImageAsync" so you can also just call into the sub functions like "pushImageAsync" (and maybe "tagImageAsync").

Note: the important bit here is "without building". The build step takes a long time for LM, and it's redundant given that they just pulled the image and don't do anything that would cause the build to produce anything different. So all they really want to say is "push this image that I know was pulled successfully".
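In docker CLI terms, the flow being requested is just pull, tag, and push, with no build anywhere; a sketch with placeholder image and registry names:

```shell
# Pull the pre-built image from the private source registry.
docker pull gcr.io/foo/bar:v1

# Retag it for the destination repository (e.g. an ECR repo).
docker tag gcr.io/foo/bar:v1 123456789012.dkr.ecr.us-east-1.amazonaws.com/baz:v1

# Push the retagged image; no build step involved.
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/baz:v1
```

The requested pushImageAsync and tagImageAsync would wrap the last two steps.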

CyrusNajmabadi commented 5 years ago

Tagging @swgillespie as well to see if he has any ideas on how someone might be able to explicitly push without needing to perform a build step.

joeduffy commented 5 years ago

Another way to package might be something like

const img = new docker.Image(..., {
    pull: "gcr.io/foo/bar",
    push: "ecr.amazonaws.com/baz/buz",
});

I think we've been trying to push folks towards the resource model versus functions, although ultimately of course they would bottom out on those functions that you've noted.

CyrusNajmabadi commented 5 years ago

I'm on the fence about this as we're basically just wrapping verbs behind a resource here. It's unclear to me if that's actually better, versus just having a specific set of ops that can be used programmatically by a Pulumi app. I like a Resource that either represents a real cloud resource, or an aggregated concept of several resources.

So, for example, it would make total sense to have a Resource that represented some image that lived in multiple repos. However, having a resource just be a way to expose pull/push doesn't really make much sense to me, because now the resource represents an imperative series of actions, versus logical data. I'm not sure how I feel about flow-as-resource. It seems to take away from the Pulumi model where logic and whatnot is actually just code.

I'll have to noodle on this more to see what feels right!

joeduffy commented 5 years ago

The image is being managed like a resource: if it doesn't change, no operations need to be performed; the state is captured in our resource model, so a record of every operation is kept in an auditable and easily viewable way in our service; a full history is there; and so on.

Further, although the image itself doesn't (yet) have child resources, we've already seen lots of cases where they are consumed as children of other resources (service objects and whatnot).

So, IMHO, it's clear to me that we want to keep pushing down the route of modeling these as resources.

tvalenta commented 5 years ago

My use-case involves @pulumi/cloud and I'm never directly invoking the docker module. Something like this would be useful:

import * as cloud from "@pulumi/cloud";

let nginx = new cloud.Service("serviceFoo", {
    copyimage: `foo/blah:${versionTag}`,
    ports: [{ port: 80 }],
    replicas: 2,
});

The intent would be that, instead of defining an ECS Fargate task that runs from "foo/blah:versionTag", the image would be pulled down to the host running the Pulumi CLI and then pushed up to the ECS environment, ending up in the same repo regardless of the versionTag being deployed.

If the particular image hasn't changed, then performing no further operations on the child resources would be ideal. Pulling the image to the build/deployment server only when required would be even more fantastic. I believe this could be done with docker manifest inspect, but that's an experimental CLI feature that isn't readily available for production use; hopefully I'm wrong, but all other tests seem to point to "download the image and then query its ID".
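For reference, the two approaches discussed above look like this on the CLI (the image name is a placeholder; docker manifest inspect was experimental and may require DOCKER_CLI_EXPERIMENTAL=enabled):

```shell
# Query the remote manifest without pulling the image
# (experimental at the time of this discussion).
docker manifest inspect foo/blah:v1

# The fallback: pull the image, then read its local ID.
docker pull foo/blah:v1
docker image inspect --format '{{.Id}}' foo/blah:v1
```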

CyrusNajmabadi commented 5 years ago

@tvalenta I think that's really overloading what a Service is. I would much rather have a separate step/resource that represents the bridge between your repos and conveys the idea that you want to copy an image from one place to another. Your Service then just points at the destination image. In other words, Service shouldn't be the superset of all possible things people want to do with images. It should just represent a service that executes based off of some image. How you get/manipulate the image would be better done with other steps/resources (IMO, of course).

lukehoban commented 5 years ago

We could also use the fact that we now support pulling images to support something like this - which would be very close to what we support already today:

let localImage = new docker.RemoteImage("foo", {
    name: "lukehoban/foobar",
});

let targetImage = new docker.Image("doo", {
    localImageName: localImage.name,
    imageName: "pulumi/foobar",
});

XBeg9 commented 4 years ago

@lukehoban your solution doesn't work; the build parameter is required for docker.Image.

Aaronontheweb commented 4 years ago

We could also use the fact that we now support pulling images to support something like this - which would be very close to what we support already today:

let localImage = new docker.RemoteImage("foo", {
    name: "lukehoban/foobar",
});

let targetImage = new docker.Image("doo", {
    localImageName: localImage.name,
    imageName: "pulumi/foobar",
});

I would like a canonical way to do this in our builds - we have a separate build process that runs prior to Pulumi which creates the Docker images (some of that is for legacy reasons, but also because of what we do for local testing)

Right now we have to provide some way of letting Docker know how to build each image, per @XBeg9:

@lukehoban your solution doesn't work; the build parameter is required for docker.Image.

I'd prefer it if we could have the option to elide Docker image builds from Pulumi altogether.

ljani commented 3 years ago

I think it would be beneficial to consider a Pulumi module for skopeo or crane, so one could copy the container images without having access to a Docker daemon.

For example, GitLab suggests using kaniko for building container images inside containers when a Docker daemon is not available.
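A daemonless copy with skopeo would look roughly like this (registry names are placeholders; credential flags omitted):

```shell
# Copy an image directly between registries, no Docker daemon required.
skopeo copy \
    docker://gcr.io/foo/bar:v1 \
    docker://123456789012.dkr.ecr.us-east-1.amazonaws.com/baz:v1
```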

loganb commented 3 years ago

I spent a while trying to find the functionality described in this issue (before finding the issue!). Eventually I worked around the issue by making a docker_dummy directory with this Dockerfile:

ARG SOURCE_IMAGE

FROM ${SOURCE_IMAGE}

I then use Pulumi's buildAndPushImage, and pass the name of the image I want to publish as a build arg. Leaving this here for the next person.
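Wired into a Pulumi program, that workaround looks roughly like this (a sketch; image names are placeholders and registry credentials are elided):

```typescript
import * as docker from "@pulumi/docker";

// "Build" the one-line dummy Dockerfile, substituting the image we actually
// want to publish via the SOURCE_IMAGE build arg. The build is effectively a
// no-op: the result is the source image, retagged for the target repository.
const image = new docker.Image("copied-image", {
    imageName: "123456789012.dkr.ecr.us-east-1.amazonaws.com/baz:v1",
    build: {
        context: "./docker_dummy",
        args: { SOURCE_IMAGE: "foo/blah:v1" },
    },
});
```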

johan-van-de-walle commented 1 year ago

Hi! The workaround that @loganb offered works, but it somehow feels odd that this isn't offered out of the box. Any idea whether this is being looked into?

ecmonsen commented 1 year ago

I'd like this feature also. Thanks.

AaronFriel commented 1 year ago

I think the best way to implement a copying operation like this is likely to use the Command provider: https://www.pulumi.com/registry/packages/command/

const pushCommand = new local.Command("push image", {
    create: pulumi.interpolate`docker pull ${imageName} && docker push ${imageName}`,
});

That will also cover utilizing skopeo, kaniko, and other tools as part of a Pulumi program.

For the Docker Image resource, we're focusing on the use case of building and pushing an image based on a local context.

mattfysh commented 10 months ago

Thanks @loganb - messy, but gets the job done.

@AaronFriel I'm trying to use awsx.ecr.Image, which wraps docker.Image and adds some great DX when using an ECR repo. However, because the docker provider doesn't natively support push without build, I'm unable to use awsx without the dummy Dockerfile @loganb suggested.

It seems like the provider should natively support push without build, to cater for those of us who prefer to separate our build and deployment steps. The argument could be made that the push should occur at the tail end of the build step, but that isn't possible if your destination details (e.g. the ECR repository URL) are only available from within your Pulumi program.
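For concreteness, here is the constraint being described, as a sketch (assuming a recent @pulumi/awsx; argument names vary between awsx versions):

```typescript
import * as awsx from "@pulumi/awsx";

// The destination repo is created by this same program, so its URL is only
// known at deploy time; a pre-Pulumi build step cannot push to it.
const repo = new awsx.ecr.Repository("repo");

// awsx.ecr.Image wraps docker.Image and therefore requires a build context;
// there is no way to say "push this already-built image" here.
const image = new awsx.ecr.Image("image", {
    repositoryUrl: repo.url,
    context: "./app",
});
```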

AaronFriel commented 10 months ago

I believe the docker.RegistryImage resource might solve your push-without-build needs: it provides for pushing an image by name, and the triggers input property lets you control when (e.g. via digest) an additional push runs.
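A sketch of that approach (assuming the RemoteImage, Tag, and RegistryImage resources in a recent pulumi-docker; image names are placeholders and registry auth is elided):

```typescript
import * as docker from "@pulumi/docker";

// Pull the pre-built image into the local daemon.
const source = new docker.RemoteImage("source", {
    name: "foo/blah:v1",
});

// Retag it for the destination repository.
const tagged = new docker.Tag("retag", {
    sourceImage: source.imageId,
    targetImage: "123456789012.dkr.ecr.us-east-1.amazonaws.com/baz:v1",
});

// Push the retagged image; triggers controls when a re-push happens,
// e.g. keyed on the source image's digest.
const pushed = new docker.RegistryImage("push", {
    name: tagged.targetImage,
    triggers: { digest: source.repoDigest },
}, { dependsOn: [tagged] });
```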

CC @blampe on supporting "push without build" scenarios.

mattfysh commented 10 months ago

Hey @AaronFriel - thanks for the reply! As I'm using AWS Crosswalk to deploy the image, I've raised a new issue over there to see if they are interested in supporting this feature: https://github.com/pulumi/pulumi-awsx/issues/1203