GoogleContainerTools / kaniko

Build Container Images In Kubernetes
Apache License 2.0

Private repos inside Dockerfile #2930

Closed. ustal closed this issue 9 months ago.

ustal commented 9 months ago

Hi, could you help me with the following question?

I have Jenkins running on k8s and I'm going to build images during CI. I have no problems except one:

In the Dockerfile (the target file) I have source code that needs to be built and that depends on private vendor code. So, in general, I have public dependencies like PHP packages, plus private source code from various git providers: GitHub, Bitbucket, etc. All of them can be downloaded via SSH and managed via the package manager; in the case of PHP, Composer downloads all the private and public dependencies.
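For example, a minimal composer.json pulling one private package over SSH could look like this (the org and package names are placeholders):

```json
{
  "repositories": [
    { "type": "vcs", "url": "git@github.com:example-org/private-lib.git" }
  ],
  "require": {
    "example-org/private-lib": "^1.0"
  }
}
```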

In general, Docker has mount types for "sensitive content like SSH keys" (BuildKit's `--mount=type=ssh` and `--mount=type=secret`), or we could use a multi-stage build (the first stage puts SSH keys in place, an insecure approach on its own, and the second stage copies only the code into the final artifact, without the SSH keys).
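With plain Docker/BuildKit that looks roughly like the sketch below; the package and repo names are illustrative, and it assumes the base image ships git and an SSH client:

```dockerfile
# syntax=docker/dockerfile:1
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
# Trust the Git host so SSH cloning does not prompt interactively.
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# The forwarded SSH agent socket exists only for this RUN step
# and never lands in an image layer.
RUN --mount=type=ssh composer install --no-dev

FROM php:8-fpm
WORKDIR /app
# Only the built artifacts are copied; no keys reach the final image.
COPY --from=vendor /app/vendor ./vendor
COPY . .
```

Built with `DOCKER_BUILDKIT=1 docker build --ssh default .`.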

How can I make this possible? I need to pass the SSH keys from Jenkins or k8s secrets into Kaniko, and then from Kaniko into the build steps somehow.

erikdao commented 1 week ago

@ustal Hi, I guess this is a bit late, but how did you solve this issue? I'm in a similar situation.

ustal commented 1 week ago

> @ustal Hi, I guess this is a bit late, but how did you solve this issue? I'm in a similar situation.

To understand how it works, imagine Kaniko as an archiver: any target Docker image (whatever you are trying to build in the Kaniko image/container/pod) is just a recipe for how to archive data into an archive called a Docker image. So any keys that you need inside the target image, you should put into Kaniko. The target image also has the same filesystem as Kaniko (not a similar one, not a copy, literally the same).

On a host machine (for example a laptop) we copy keys, or mount them from the host, into Docker, and all the data is stored in docker/volumes. In the case of Kaniko, if you mount an SSH key into /home/kaniko/.ssh, it will be accessible in the target image at the same path during the build, and it will not be published into the final image.
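For example, a deploy key can be packaged as a k8s secret first (the secret and file names here are mine, not fixed):

```bash
# Bundle a read-only deploy key plus known_hosts into one secret.
kubectl create secret generic git-ssh-key \
  --from-file=id_rsa=./deploy_key \
  --from-file=known_hosts=./known_hosts
```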

In my case I mount ConfigMaps/Secrets into Kaniko (following standard k8s practice) and use those keys during the final build, in the default places like /home/{thesameuser}/.ssh/, or, for a special user like jenkins (although using the same user worked best for me), /home/jenkins/.ssh/.
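A minimal pod sketch along those lines, assuming the `git-ssh-key` secret from above, Kaniko running as root (so ~/.ssh is /root/.ssh), and placeholder repo/registry names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:v1.9.2
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example-org/app.git
        - --destination=123456789012.dkr.ecr.eu-west-1.amazonaws.com/app:latest
      volumeMounts:
        # Lands at the building user's ~/.ssh, so git/composer inside
        # RUN steps can authenticate; nothing is COPY'd into the image.
        - name: ssh-key
          mountPath: /root/.ssh
          readOnly: true
  volumes:
    - name: ssh-key
      secret:
        secretName: git-ssh-key
        defaultMode: 0400
```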

So my EKS cluster pulls the Kaniko image from my private registry (ECR) using AWS IAM and so on (the basic configuration for cluster access), and uses separate credentials to push images into ECR in the same AWS account. All the keys (for GitHub, Bitbucket, AWS CodeCommit) were mounted only into Kaniko.
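On the push side, kaniko bundles the ECR credential helper, so it is mostly a matter of mounting a Docker config that points at it (typically at /kaniko/.docker/config.json) and granting IAM permissions to the node or service account; the account ID and region below are placeholders:

```json
{
  "credHelpers": {
    "123456789012.dkr.ecr.eu-west-1.amazonaws.com": "ecr-login"
  }
}
```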

There may be another way to do it, but every attempt to copy keys from Kaniko into the final Docker image (COPY, ADD, mount as secret) failed. As I remember it was due to user permissions: the Kaniko filesystem and the final image's filesystem share the same path (~/.ssh), and root/root is, unfortunately, the best setup from Kaniko's perspective but not from the k8s one. Kaniko running as root/root and a final Docker image running as jenkins/jenkins do not play together nicely :(

Put Kaniko into your own registry to avoid breaking changes when a new version comes out, or pin strictly to the version that you use, just in case what I described turns out to be a bug rather than a feature.
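Mirroring a pinned executor image into a registry you control could look like this (the version tag and registry URL are examples):

```bash
# Pin an explicit version and push it to your own registry.
docker pull gcr.io/kaniko-project/executor:v1.9.2
docker tag gcr.io/kaniko-project/executor:v1.9.2 \
  123456789012.dkr.ecr.eu-west-1.amazonaws.com/kaniko-executor:v1.9.2
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/kaniko-executor:v1.9.2
```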

Feel free to ask any questions, maybe I could help.