pditommaso opened 1 month ago
Adding @jordeu for visibility
@munishchouhan We should make a POC simulating a build process pulling the data from S3 via Fusion using a local container
I started working on this today with @pabloaledo; we found a couple of things:
build is still not working, but we can see in the builder container that the s3 content has been mounted inside the container
I will keep on updating here for discussion
```
wave -i moby/buildkit:v0.14.1-rootless --include <fusion scratch image>
```

😎 we created the fusion scratch image using:

```
wave --config-file <fusion config file URL> -i moby/buildkit:v0.15.0
```
Same 👍
Another point: we are passing `--entrypoint buildctl-daemonless.sh`, which is overriding the fusion entrypoint. I am able to fix the entrypoint issue by creating a custom image with `entrypoint=''`, which avoids the conflict between buildkitd and fusion.

That happens because you are using the `--config-file` approach.
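A minimal sketch of the custom-image workaround described above (the file name and image tags are illustrative, not from the thread): a thin wrapper over the buildkit image that clears the entrypoint so `buildctl-daemonless.sh` no longer overrides the fusion entrypoint.

```shell
# Write a wrapper Dockerfile that resets the entrypoint (sketch; names assumed)
cat > Dockerfile.noentrypoint <<'EOF'
FROM moby/buildkit:v0.15.0
# Clear the inherited entrypoint so the fusion entrypoint can run instead
ENTRYPOINT []
EOF

# Then build the wrapper image, e.g.:
#   docker build -f Dockerfile.noentrypoint -t buildkit-noentrypoint .
```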
build is working, but push is failing because of the lack of the config.json file. Working on how to add config.json to the container.
It is supposed to be in the bucket along with the Dockerfile
yes, but it needs to be mounted in the /root/.docker folder
Indeed, that's not simple to solve. Tagging @fntlnz, he may have some suggestions
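For reference, a sketch of the registry credentials file docker expects (the directory, registry name and auth value below are placeholders, not from the thread). Docker looks for it at `$DOCKER_CONFIG/config.json`, falling back to `/root/.docker/config.json` when running as root.

```shell
# Create a placeholder docker config directory with a credentials file
mkdir -p ./docker-config
cat > ./docker-config/config.json <<'EOF'
{
  "auths": {
    "registry.example.com": {
      "auth": "dXNlcjpwYXNz"
    }
  }
}
EOF
```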
About the problem of mounting /root/.docker: it's likely not possible, because Fusion uses its own opinionated path prefix. Maybe we should consider using Mountpoint instead. @jordeu what do you think?
Unfortunately, even if fusion can change the mount dir with the `-mount-point` flag, it still has the second-level directory, which is the name of the remote storage (e.g. s3).
However, it's easy to use a different directory for the docker config, so this works with fusion:

```
sudo DOCKER_CONFIG=/fusion/s3/fusion-develop/scratch docker build -t myimage .
```

Here is how it looks on s3.

So I would say: just mount fusion as it is and tell the docker cli to point to it.
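Putting the pieces together (a sketch; the bucket and prefix are the ones from the example above and would differ per deployment): fusion keeps its `/fusion/<scheme>/<bucket>/...` layout, and `DOCKER_CONFIG` points the docker CLI at the config directory inside the mount.

```shell
# Point the docker CLI at the config dir inside the fusion mount (path assumed)
export DOCKER_CONFIG=/fusion/s3/fusion-develop/scratch

# docker now resolves credentials from $DOCKER_CONFIG/config.json instead of
# /root/.docker/config.json, e.g.:
#   docker build -t myimage . && docker push myimage
echo "$DOCKER_CONFIG/config.json"
```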
`DOCKER_CONFIG` sounds like a good trick
thanks @fntlnz, `DOCKER_CONFIG` did the trick. Working on code changes now.
This change also requires changes to the Scan process and to Singularity builds. I have created a draft PR with the changes to the build process and tested it; it works with a Dockerfile.
Singularity build and push are working using docker. Now I will work on scan.
Scan, Singularity, and the build process with docker now work with s3. Next I will work on moving k8s to s3.
Context
Currently, the build process relies on a shared file system (AWS EFS). In a nutshell, the process is the following:
Deliverable
The goal of this issue is to replace the use of the shared file system with an object storage, e.g. S3, in order to:
Solution
This could be achieved:
`/tmp` as the work directory required by Buildkit
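To illustrate how this could fit together (a sketch only; the bucket, prefix, and image name are assumptions, not from the issue): a daemonless BuildKit invocation that reads the build context from the fusion S3 mount while BuildKit keeps `/tmp` as its local work directory.

```shell
# Sketch: daemonless build consuming the context from the fusion mount.
# Defined as a function, not invoked: it needs buildkit and a live fusion mount.
run_s3_build() {
  # Required for rootless/daemonless buildkit
  export BUILDKITD_FLAGS=--oci-worker-no-process-sandbox
  buildctl-daemonless.sh build \
    --frontend dockerfile.v0 \
    --local context=/fusion/s3/my-bucket/builds/my-build \
    --local dockerfile=/fusion/s3/my-bucket/builds/my-build \
    --output type=image,name=registry.example.com/myimage:latest,push=true
}
```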