Closed · jaimergp closed this 3 years ago
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (`recipe`) and found it was in an excellent condition.
The migration file looks OK to me, though I don't know how to kick off a migration process, so I am not sure if this is the right place to do it.
I am bringing over the files submitted to https://github.com/conda-forge/conda-forge-pinning-feedstock/pull/1162. Docs drafting this process are available at https://conda-forge.org/docs/maintainer/knowledge_base.html#adding-support-for-a-new-cuda-version. It involves manual migrations to create the CUDA infrastructure first.
One thing I am wondering, though: given that we have to do a migration for CUDA 11.1/11.2 anyway, can't we just merge `.ci_support/migrations/windows_cuda111_112.yaml` into `.ci_support/migrations/cuda111_112.yaml` and use a selector like `# [linux64 or win]` to do the job?
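For illustration, a merged migration along those lines might look like the sketch below. The file name comes from this thread, but the contents are an assumption on my part (a real migration file also carries metadata such as a timestamp that is omitted here):

```yaml
# .ci_support/migrations/cuda111_112.yaml -- hypothetical merged sketch,
# covering both Linux and Windows via a combined selector instead of
# keeping a separate windows_cuda111_112.yaml file.
cuda_compiler_version:   # [linux64 or win]
  - 11.1                 # [linux64 or win]
  - 11.2                 # [linux64 or win]
```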
Yep, this was suggested in that other PR; I will bring the changes over here once that's accepted!
I merged the migrators and re-rendered, which made no changes, which is a good thing :)
> I am bringing over the files submitted to conda-forge/conda-forge-pinning-feedstock#1162. Docs drafting this process are available at https://conda-forge.org/docs/maintainer/knowledge_base.html#adding-support-for-a-new-cuda-version. It involves manual migrations to create the CUDA infrastructure first.
Ah right, thanks for reminding me Jaime! I was searching for documentation on a general migration procedure but missed this one (which is all I want to know) 😅
This is all passing now, given that we have the necessary bits in place (Docker images, Windows installer, cudatoolkits).
The remaining issue is that we are using the full version matrix from the migrator at https://github.com/conda-forge/conda-forge-pinning-feedstock/pull/1162. In that PR, we are discussing which versions to include, but letting users opt in to more versions via `conda_build_config.yaml`. Still, that means we need to build all the `nvcc` packages, so how do we approach this?
Regarding `conda_build_config.yaml`: I don't know how this works, but I guess we'll find out soon... Note that 10.2 is getting the Docker image for 9.2 for some reason. I need to expand the matrix again using a custom `conda_build_config.yaml` too, but my local efforts so far have not succeeded, so I haven't pushed anything yet.
```yaml
# When adding or removing cuda versions, make sure that the following entries
# are "zipped"; e.g. each entry in cuda_compiler_version must have a matching
# entry in the other keys, considering the effect of the selector:
#   cuda_compiler_version
#   cudnn
#   cdt_name
#   docker_image
cuda_compiler_version:
  - None   # this is for cpu-only builds
  - 9.2    # [linux64]
  - 10.0   # [linux64 or win]
  - 10.1   # [linux64 or win]
  - 10.2   # [linux64 or win]
  - 11.0   # [linux64 or win]
  - 11.1   # [linux64 or win]
  - 11.2   # [linux64 or win]
docker_image:  # [os.environ.get("BUILD_PLATFORM", "").startswith("linux-")]
  # start cuda_compiler_version == None
  - quay.io/condaforge/linux-anvil-comp7       # [os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-aarch64     # [os.environ.get("BUILD_PLATFORM") == "linux-aarch64"]
  - quay.io/condaforge/linux-anvil-ppc64le     # [os.environ.get("BUILD_PLATFORM") == "linux-ppc64le"]
  - quay.io/condaforge/linux-anvil-armv7l      # [os.environ.get("BUILD_PLATFORM") == "linux-armv7l"]
  # end of cuda_compiler_version == None
  - quay.io/condaforge/linux-anvil-cuda:9.2    # [linux64 and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-cuda:10.0   # [linux64 and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-cuda:10.1   # [linux64 and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-cuda:10.2   # [linux64 and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-cuda:11.0   # [linux64 and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-cuda:11.1   # [linux64 and os.environ.get("BUILD_PLATFORM") == "linux-64"]
  - quay.io/condaforge/linux-anvil-cuda:11.2   # [linux64 and os.environ.get("BUILD_PLATFORM") == "linux-64"]
cudnn:
  # start cuda_compiler_version == None
  - undefined
  # end cuda_compiler_version == None
  - 7  # [linux64]         # CUDA 9.2
  - 7  # [linux64 or win]  # CUDA 10.0
  - 7  # [linux64 or win]  # CUDA 10.1
  - 7  # [linux64 or win]  # CUDA 10.2
  - 8  # [linux64 or win]  # CUDA 11.0
  - 8  # [linux64 or win]  # CUDA 11.1
  - 8  # [linux64 or win]  # CUDA 11.2
cdt_name:  # [linux]
  # start cuda_compiler_version == None
  - cos6  # [linux64]
  - cos7  # [linux and aarch64]
  - cos7  # [linux and ppc64le]
  - cos7  # [linux and armv7l]
  # end cuda_compiler_version == None
  - cos6  # [linux64]  # CUDA 9.2
  - cos6  # [linux64]  # CUDA 10.0
  - cos6  # [linux64]  # CUDA 10.1
  - cos6  # [linux64]  # CUDA 10.2
  - cos7  # [linux64]  # CUDA 11.0
  - cos7  # [linux64]  # CUDA 11.1
  - cos7  # [linux64]  # CUDA 11.2
```
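The "zipped" constraint described in the comment at the top of that config can be checked mechanically. The sketch below hard-codes two of the keys and uses a deliberately simplified selector evaluator (real conda-build parses full Python-like selector expressions), just to show that the per-platform variant counts line up:

```python
# Sketch: verify that "zipped" keys in a conda_build_config.yaml stay
# aligned after selector evaluation. Selector handling is simplified to
# bare platform keywords joined by "or"; conda-build does much more.

entries = {
    "cuda_compiler_version": [
        ("None", None), ("9.2", "linux64"),
        ("10.0", "linux64 or win"), ("10.1", "linux64 or win"),
        ("10.2", "linux64 or win"), ("11.0", "linux64 or win"),
        ("11.1", "linux64 or win"), ("11.2", "linux64 or win"),
    ],
    "cudnn": [
        ("undefined", None), ("7", "linux64"),
        ("7", "linux64 or win"), ("7", "linux64 or win"),
        ("7", "linux64 or win"), ("8", "linux64 or win"),
        ("8", "linux64 or win"), ("8", "linux64 or win"),
    ],
}

def active(selector, platform):
    # An entry with no selector applies on every platform.
    if selector is None:
        return True
    return platform in selector.split(" or ")

def variants(key, platform):
    return [value for value, sel in entries[key] if active(sel, platform)]

for platform in ("linux64", "win"):
    lengths = {key: len(variants(key, platform)) for key in entries}
    # All zipped keys must produce the same number of variants per platform.
    assert len(set(lengths.values())) == 1, (platform, lengths)
    print(platform, lengths)
```

This also explains the "10.2 is getting the Docker image for 9.2" symptom mentioned earlier: if one key's list loses an entry on some platform while the others keep theirs, the zip shifts and variants pair up with the wrong images.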
@jaimergp CUDA 11.2.1 is out... https://docs.nvidia.com/cuda/cuda-toolkit-release-notes/index.html#cuda-whats-new
... yay? :D Now we have to add the new versions to `cudatoolkit-feedstock`, `docker-images`, and `conda-forge-ci-setup` too. I'll add that to the tracking issue.
I suppose we could skip 11.2.0 and just use 11.2.1, thoughts?
But the packages are already out...
Yeah well, let's wait for 11.3 then? Given the unlikelihood of expanding the build matrix for every patch release...
I think we can have `11.2.1` just fine. `conda` will get the latest one within `major.minor`. The only problem is that we might need to change how `cudatoolkit-feedstock` handles building patch versions, but that doesn't belong here. `nvcc` should work just fine for both `.0` and `.1`.
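To make the "conda will get the latest one within `major.minor`" point concrete, here is a toy sketch of the matching behaviour. The helper is hand-rolled for illustration only (conda's real logic lives in its version/MatchSpec machinery) and handles just the `major.minor.*` wildcard form used in this thread:

```python
# Toy model of conda version-spec matching: "11.2.*" accepts any patch
# release of 11.2, and the solver prefers the newest matching build.

def matches(spec, version):
    # Only supports specs of the form "X.Y.*" (illustrative, not conda's API).
    prefix = spec.rstrip("*").rstrip(".")
    parts = version.split(".")
    return ".".join(parts[: prefix.count(".") + 1]) == prefix

available = ["11.1.1", "11.2.0", "11.2.1"]
candidates = [v for v in available if matches("11.2.*", v)]
best = max(candidates, key=lambda v: tuple(map(int, v.split("."))))
print(best)  # 11.2.1 -- the newest patch release within 11.2
```

So a package that depends on `cudatoolkit 11.2.*` will pull in 11.2.1 automatically once it is published, without any rebuild.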
I think you're referring to the runtime dependency, but there is also a build-time dependency to address. For example, cuSPARSE was significantly expanded and improved in 11.2.1, which is otherwise invisible to applications compiled with CUDA <= 11.2.0, so those applications (I am unaware of any, so just hypothetically) would need to be built with CUDA 11.2.1 to take advantage of the new features.
Build-time deps would be provided through docker-images (Linux) and conda-forge-ci-setup (Windows). In that case, can we replace 11.2.0 with 11.2.1?
Also, we have pretty complete control over the build dependencies, right**? I mean, the migrator hasn't even started yet. So as long as 11.2.1 is available at the time of the migration, the improvements @leofang speaks of should already be picked up?
** E.g., is it possible to add a run-export for `cudatoolkit >=11.2.1,<11.3` to the 11.2 migrator? That way, the packages would be built with 11.2.1, and installing a package built with that would definitely upgrade whatever `cudatoolkit =11.2.0` builds are already floating around.
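For what that footnote could look like in practice, here is a hypothetical recipe fragment; whether the nvcc recipe or the migrator would actually wire it this way is an assumption on my part:

```yaml
# Hypothetical fragment of an nvcc 11.2 meta.yaml (sketch): anything built
# against this compiler would then require at least the 11.2.1 runtime.
build:
  run_exports:
    - cudatoolkit >=11.2.1,<11.3
```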
But @jaimergp @h-vetinari aren't we proposing effectively the same idea that we just use 11.2.1 throughout the CF infrastructure? What's the difference between your replies and my "skipping 11.2.0 and jumping to 11.2.1"?
Yes, I think we are in time to skip 11.2.0 being used for builds. We only need to update the build-time bits in the repos mentioned at https://github.com/conda-forge/cudatoolkit-feedstock/issues/47.
The only tricky part might happen at `cudatoolkit-feedstock`, where we might need to overwrite the dict entry for 11.2 so it points to 11.2.1, not 11.2.0, which implies that 11.2.0 would not receive further updates made to the recipe. The alternative is to refactor a bit so the dict key is not `major.minor` but optionally `major.minor.patch`. Not a big deal, just something to take into account in the PR.
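The two keying schemes being weighed can be sketched as below. The names and structure are assumptions for illustration, not the feedstock's actual code:

```python
# Option 1: keep one dict entry per major.minor and overwrite it in place.
# The old patch release then stops receiving recipe updates.
versions = {
    "11.1": {"version": "11.1.1"},
    "11.2": {"version": "11.2.0"},
}
versions["11.2"]["version"] = "11.2.1"  # 11.2.0 is no longer maintained

# Option 2: refactor so the key is the full major.minor.patch,
# letting both patch releases coexist and be updated independently.
versions_by_patch = {
    "11.2.0": {"version": "11.2.0"},
    "11.2.1": {"version": "11.2.1"},
}

print(versions["11.2"]["version"])  # 11.2.1
```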
> What's the difference between your replies and my "skipping 11.2.0 and jumping to 11.2.1"?
No difference worth mentioning, just a bit of care around what's already live.
> The only tricky part might happen at `cudatoolkit-feedstock` where we might need to overwrite the dict entry for 11.2 so it points to 11.2.1, not 11.2.0, which will imply that 11.2.0 would not receive further updates done in the recipe.
I think that would be completely fine.
@conda-forge-admin, rerender
Hi! This is the friendly automated conda-forge-webservice. I tried to rerender for you, but it looks like there was nothing to do.
We don't want to merge this without building all the variants, right? We have to expand the build matrix for all versions.
Hi! This is the friendly conda-forge automerge bot!
I considered the following status checks when analyzing this PR:
Thus the PR was passing and merged! Have a great day!
Thanks Jaime! 😀
Checklist

- Reset the build number to 0 (if the version changed)
- Re-rendered with the latest `conda-smithy` (Use the phrase `@conda-forge-admin, please rerender` in a comment in this PR for automated rerendering)

Supersedes #50