Why you need it?
A high-performance computing environment uses Singularity container technology to execute workloads. Singularity is a daemon-less container solution. The Dragonfly P2P solution works well with Singularity, except that the cache is duplicated on the same machine each time an image is pulled: /root/.small-dragonfly/data ends up holding multiple copies of the same image, one per pull. This wastes both disk space and download time.
Command to pull the image:

```shell
singularity pull docker://harbor.domain.com/<repo>/<image>:<tag>
```

or

```shell
singularity pull oras://harbor.domain.com/<repo>/<image>:<tag>
```
How it could be?
The downloaded file should be shared among different users on the same host, provided that each user has successfully authenticated against the registry. This would save both disk space and download time.
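The sharing rule above could be expressed as an authentication gate in front of a single host-wide cache entry. The following is a minimal sketch of that control flow only: `authenticate`, `sharedPath`, and the in-memory credential map are hypothetical stand-ins for a real token exchange with the registry (e.g. harbor.domain.com), not an actual auth implementation.

```go
package main

import (
	"errors"
	"fmt"
)

// authenticate is a placeholder for a real registry auth check
// (e.g. requesting a pull token from the registry). Hypothetical.
func authenticate(user, password string, creds map[string]string) bool {
	return creds[user] == password
}

// sharedPath returns the host-wide cached file only to users who
// authenticated against the registry; everyone else is rejected,
// so the shared cache never leaks an image to an unauthorized user.
func sharedPath(user, password, cached string, creds map[string]string) (string, error) {
	if !authenticate(user, password, creds) {
		return "", errors.New("authentication failed: no access to shared cache")
	}
	return cached, nil
}

func main() {
	creds := map[string]string{"alice": "s3cret", "bob": "hunter2"}
	cached := "/root/.small-dragonfly/data/sha256-abc" // illustrative path

	// Both authenticated users resolve to the same on-disk file,
	// so the image is downloaded and stored only once per host.
	p1, _ := sharedPath("alice", "s3cret", cached, creds)
	p2, _ := sharedPath("bob", "hunter2", cached, creds)
	fmt.Println(p1 == p2) // true

	// An unauthenticated user gets an error, not the cached file.
	_, err := sharedPath("mallory", "guess", cached, creds)
	fmt.Println(err != nil) // true
}
```

The key property is that authentication is checked per user while the storage is shared per host: the second authenticated pull is a cache hit rather than a second download.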
Other related information
@jim3ma mentioned that he is already working on this and that it is part of the v2.0.0 roadmap; I have added it here for reference.