Open h-vetinari opened 5 days ago
What was the reason we were not using multiarch images? For easier pinnings? 🤔
edit: nvm, found it.
I wanted to see what this looked like in the pinning, so I opened a demo PR. However, while putting that together, I realized that we can do even better by using the same image tags as we use for our distro naming, so that we can just directly insert `DEFAULT_LINUX_VERSION`. This drastically shortens the specification in the pinning (see here), because we avoid having to respecify everything per image version:
```yaml
docker_image:    # [os.environ.get("BUILD_PLATFORM", "").startswith("linux-")]
  # images for non-CUDA-enabled builds
  - quay.io/condaforge/linux-anvil-x86_64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}     # [os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-aarch64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}    # [os.environ.get("BUILD_PLATFORM") == "linux-aarch64"]
  - quay.io/condaforge/linux-anvil-ppc64le:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}    # [os.environ.get("BUILD_PLATFORM") == "linux-ppc64le"]
  # images for CUDA 11.8 builds (no choice via DEFAULT_LINUX_VERSION available)
  - [omitted here]
  # images for CUDA 12 builds
  # case: native compilation (build == target)
  - quay.io/condaforge/linux-anvil-x86_64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}     # [linux64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-aarch64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}    # [aarch64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM") == "linux-aarch64"]
  - quay.io/condaforge/linux-anvil-ppc64le:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}    # [ppc64le and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM") == "linux-ppc64le"]
  # case: cross-compilation (build != target)
  - quay.io/condaforge/linux-anvil-x86_64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}     # [aarch64 and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-x86_64:{{ environ.get("DEFAULT_LINUX_VERSION", "alma9") }}     # [ppc64le and os.environ.get("CF_CUDA_ENABLED", "False") == "True" and os.environ.get("BUILD_PLATFORM") == "linux-64"]
```
I've updated the OP to reflect that proposal. This also has the advantage that if we ever do have different distros per generation, the image tag can absorb that distinction.
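For reference, the per-feedstock opt-in would then presumably keep working the way the cos7 opt-in does today, via `os_version` in `conda-forge.yml` (assuming conda-smithy continues to translate that key into the `DEFAULT_LINUX_VERSION` variable used in the selectors above); a sketch:

```yaml
# conda-forge.yml (sketch): opt a feedstock into a specific image generation
# per platform; values would match the tags above (cos7 / alma8 / alma9)
os_version:
  linux_64: alma9
  linux_aarch64: alma9
  linux_ppc64le: alma9
```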
I've tried rerendering an affected feedstock with the setup from https://github.com/conda-forge/conda-forge-pinning-feedstock/pull/6687, and I think this might not work. AFAICT, using jinja variables in `conda_build_config.yaml` is not supported by conda-build? (Or at least not by smithy, which hand-rolls some conda-build functionality in order to generate the variants.) In other words, the variables don't get resolved, but rather get inserted verbatim, as follows:
```diff
--- a/.azure-pipelines/azure-pipelines-linux.yml
+++ b/.azure-pipelines/azure-pipelines-linux.yml
@@ -11,39 +11,45 @@ jobs:
     linux_64_cuda_compilerNonecuda_compiler_versionNonecxx_compiler_version13:
       CONFIG: linux_64_cuda_compilerNonecuda_compiler_versionNonecxx_compiler_version13
       UPLOAD_PACKAGES: 'True'
-      DOCKER_IMAGE: quay.io/condaforge/linux-anvil-alma-x86_64:9
+      DOCKER_IMAGE: quay.io/condaforge/linux-anvil-x86_64:{{ environ.get("DEFAULT_LINUX_VERSION",
+        "alma9") }}
     linux_64_cuda_compilercuda-nvcccuda_compiler_version12.0cxx_compiler_version12:
       CONFIG: linux_64_cuda_compilercuda-nvcccuda_compiler_version12.0cxx_compiler_version12
       UPLOAD_PACKAGES: 'True'
```
which isn't going to work. 😑
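As a quick sanity check for this kind of breakage, one can scan the rerendered CI files for jinja markers that made it through unrendered; a minimal sketch (the file globs are just a guess at the relevant locations):

```python
# flag generated CI files that still contain a literal `{{ environ` marker,
# i.e. where the jinja expression from the pinning was inserted verbatim
# instead of being rendered (as in the diff above)
from pathlib import Path

for pattern in (".ci_support/*.yaml", ".azure-pipelines/*.yml"):
    for path in sorted(Path(".").glob(pattern)):
        if "{{ environ" in path.read_text():
            print(f"unrendered jinja in {path}")
```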
Following the core call today, we ended up with the following set:
- quay.io/condaforge/linux-anvil-x86_64:{cos7,alma8,alma9}
- quay.io/condaforge/linux-anvil-aarch64:{cos7,alma8,alma9}
- quay.io/condaforge/linux-anvil-ppc64le:{cos7,alma8,alma9}
- quay.io/condaforge/linux-anvil-x86_64-cuda11.8:{cos7,ubi8}
- quay.io/condaforge/linux-anvil-aarch64-cuda11.8:ubi8
- quay.io/condaforge/linux-anvil-ppc64le-cuda11.8:ubi8
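A rough way to check which of these image/tag combinations are actually published on quay.io, assuming `skopeo` is available locally (any tool that resolves remote manifests would do):

```python
# probe quay.io for each expected image:tag from the list above;
# `skopeo inspect` exits non-zero when the tag does not exist
import subprocess

IMAGES = {
    "linux-anvil-x86_64": ["cos7", "alma8", "alma9"],
    "linux-anvil-aarch64": ["cos7", "alma8", "alma9"],
    "linux-anvil-ppc64le": ["cos7", "alma8", "alma9"],
    "linux-anvil-x86_64-cuda11.8": ["cos7", "ubi8"],
    "linux-anvil-aarch64-cuda11.8": ["ubi8"],
    "linux-anvil-ppc64le-cuda11.8": ["ubi8"],
}

for repo, tags in IMAGES.items():
    for tag in tags:
        ref = f"docker://quay.io/condaforge/{repo}:{tag}"
        result = subprocess.run(["skopeo", "inspect", ref], capture_output=True)
        print(f"{repo}:{tag} -> {'found' if result.returncode == 0 else 'missing'}")
```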
As of #287 / #290 / #291, we'll have the following images
I think it would be nice to consolidate this to (updated!; v0 below):
This has the advantage that we don't have to duplicate the specification in the pinning across image versions, because we can just directly insert `DEFAULT_LINUX_VERSION` through jinja (see comment below).

**Old proposal**
```
- quay.io/condaforge/linux-anvil-x86_64:{7,8,9}      # cos7 / alma8 / alma9
- quay.io/condaforge/linux-anvil-aarch64:{7,8,9}     # cos7 / alma8 / alma9
- quay.io/condaforge/linux-anvil-ppc64le:{7,8,9}     # cos7 / alma8 / alma9
- quay.io/condaforge/linux-anvil-x86_64-cuda:11.8    # ubi8 + CUDA 11.8
- quay.io/condaforge/linux-anvil-aarch64-cuda:11.8   # ubi8 + CUDA 11.8
- quay.io/condaforge/linux-anvil-ppc64le-cuda:11.8   # ubi8 + CUDA 11.8
```

This is because the introduction of another distro for the same RHEL generation (e.g. rocky8 next to alma8) is completely unrealistic at the moment, so there's no need to encode the distro in the image name. And if that time ever comes, we could still rename the images again.

---
If we want to encode the distro version also in the CUDA images, we could do that as well, but I think this is not necessary, because the CUDA 11.8 images are on the way out. It would only become relevant IMO if some dependency starts requiring `__glibc>=2.34` while we're still supporting 11.8 (though we could also just move the CUDA 11.8 images to ubi9 universally in that case, c.f. https://github.com/conda-forge/conda-forge-pinning-feedstock/issues/6283). Note that there are no ubi9 images yet (and neither are there cos7 images for aarch/ppc).
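For the `__glibc>=2.34` scenario, the deciding factor is the glibc shipped in the respective base images; a quick local check (assuming docker is available and that the tags below exist under the new naming):

```python
# print the glibc version inside a given builder image via `ldd --version`
# (the first output line contains the release, e.g. "ldd (GNU libc) 2.28")
import subprocess

for image in (
    "quay.io/condaforge/linux-anvil-x86_64-cuda11.8:ubi8",  # expected: glibc 2.28
    "quay.io/condaforge/linux-anvil-x86_64:alma9",          # expected: glibc 2.34
):
    result = subprocess.run(
        ["docker", "run", "--rm", image, "ldd", "--version"],
        capture_output=True, text=True,
    )
    first_line = result.stdout.splitlines()[0] if result.stdout else result.stderr.strip()
    print(f"{image}: {first_line}")
```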