regro-cf-autotick-bot opened 3 months ago
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
@conda-forge-admin, please rerender
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe/meta.yaml) and found it was in an excellent condition.
@conda-forge-admin, please rerender
Haven't seen such a failure before:
[ 48%] Building NVCC (Device) object AmberTools/src/quick/src/libxc/maple2c_device/CMakeFiles/xc_cuda.dir/__/__/cuda/xc_cuda_generated_gpu_getxc.cu.o
sh: cicc: command not found
CMake Error at xc_cuda_generated_gpu_getxc.cu.o.Release.cmake:278 (message):
Error generating file
/home/conda/feedstock_root/build_artifacts/ambertools_1723188819308/work/build/AmberTools/src/quick/src/libxc/maple2c_device/CMakeFiles/xc_cuda.dir/__/__/cuda/./xc_cuda_generated_gpu_getxc.cu.o
Looks like this is a known issue and we need to point to $PREFIX/nvvm/bin/cicc
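For illustration, the kind of workaround this implies might be a one-line addition to the feedstock's build script (a hedged sketch, not the actual fix that landed; `PREFIX` is the conda-build environment prefix):

```shell
# Hypothetical workaround sketch for the feedstock's build.sh: nvcc invokes
# `cicc` as a subprocess, and conda's CUDA 12 packages install it under
# $PREFIX/nvvm/bin, which is not on PATH by default.
export PATH="${PREFIX}/nvvm/bin:${PATH}"
```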
I gather from the diff that the changes here cover Windows but Windows support isn't added?
> I gather from the diff that the changes here cover Windows but Windows support isn't added?
The migrator would be adding CUDA 12.0 builds on Windows, if Windows weren't skipped completely here. That's okay though; it's just the default title of PRs opened by this migrator. Actual Windows enablement should be done independently from this PR.
OK, moved past the cicc issue, now getting:
[ 58%] Building NVCC intermediate link file AmberTools/src/quick/src/libxc/maple2c_device/CMakeFiles/xc_cuda.dir/xc_cuda_intermediate_link.o
cc1plus: fatal error: $BUILD_PREFIX/targets/x86_64-linux/bin/crt/link.stub: No such file or directory
compilation terminated.
I cannot tell from the recipe where things would refer to link.stub; I'm presuming this should be part of the cuda-nvcc setup, but there, that stub is under $BUILD_PREFIX/bin/crt/link.stub. Could something still be configured incorrectly @conda-forge/cuda?
Sorry for the slow reply here Axel
Discussed this with my colleagues today
When we have seen similar issues before, they have tended to trace back to using the legacy CMake find_package(CUDA), which is deprecated. In these cases, libraries are recommended to move to adding the CUDA language where appropriate and start using find_package(CUDAToolkit REQUIRED) to pick up any CTK contents for linking into relevant artifacts. It's possible that other steps may need to be taken as well ( https://github.com/scopetools/cudadecon/pull/29#issuecomment-1832097363 )
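For illustration, the shape of that migration in CMake might look like the following. This is a hedged sketch with hypothetical target and file names (`xc_cuda`, `gpu_getxc.cu`), not AmberTools' actual build files:

```cmake
# Before (deprecated FindCUDA module):
#   find_package(CUDA REQUIRED)
#   cuda_add_library(xc_cuda gpu_getxc.cu)

# After: enable CUDA as a first-class language and use the modern
# imported targets from FindCUDAToolkit (CMake >= 3.17).
cmake_minimum_required(VERSION 3.18)
project(example LANGUAGES CXX CUDA)

find_package(CUDAToolkit REQUIRED)

# .cu sources now compile through the CUDA language, no cuda_add_library()
add_library(xc_cuda gpu_getxc.cu)
target_link_libraries(xc_cuda PRIVATE CUDA::cudart CUDA::cublas)
set_target_properties(xc_cuda PROPERTIES CUDA_ARCHITECTURES "70;80")
```

The imported `CUDA::` targets carry include paths and link flags, which is what makes this approach work under cross-compilation and with the Conda compilers.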
Am not entirely sure of the right place to look at the source code for ambertools, but was able to find the Amber-MD GitHub org, which references the webpage ( https://ambermd.org/ ) used in downloads here
Looking in that org, do see usage of find_package(CUDA). So think the first step would be for ambertools to complete this upgrade.
As an interesting note, did see this comment in that ambertools code:
# With CMake 3.7, FindCUDA.cmake crashes when crosscompiling.
if(CROSSCOMPILE)
message(STATUS "CUDA disabled when crosscompiling.")
set(CUDA FALSE)
else()
One of the things the CMake team solved by adding the CUDA language and find_package(CUDAToolkit) was cross-compilation support with CUDA. In fact, this was done with an eye toward working in Conda with the Conda compilers.
Think to move this forward, would recommend working with upstream to adopt these changes. Possibly the build here can be patched to use those upstream changes (though it may be simpler to update to a new release with the build fixes).
cc @robertmaynard @bdice (for awareness & in case revisions to the above are needed)
As far as I recall, the canonical source code is non-public and on GitLab. But it's several projects stapled together, including cpptraj, which is hosted here, so what you found there occurs at least once (probably several times).
cc: @dacase, who is likely the best person to coordinate making any needed changes away from deprecated calls
Thanks for the analysis John!
So I downloaded the tarball (man there's a lot of stuff in there; a cool 3GB when unpacked, and a mass of vendored bits), and searched for the occurrences of find_package(CUDA:
>findstr /L /S /N /C:"find_package(CUDA" *.*
AmberTools\src\cpptraj\cmake-cpptraj\CudaConfig.cmake:11: find_package(CUDA)
AmberTools\src\quick\cmake\CudaConfig.cmake:11: find_package(CUDA)
AmberTools\src\quick\quick-cmake\QUICKCudaConfig.cmake:11: find_package(CUDA REQUIRED)
cmake\CudaConfig.cmake:11: find_package(CUDA)
Given that there are only 4, this sounds quite patchable.
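For reference, the equivalent search on Linux/macOS (where the feedstock builds actually run) would be something along these lines; the exact flags are a sketch, not a command from this thread:

```shell
# Recursively search CMake files for the deprecated find_package(CUDA) call,
# printing file name and line number for each match (mirrors the findstr
# /L /S /N /C:"..." invocation above).
grep -RIn --include='*.cmake' 'find_package(CUDA' .
```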
The "canonical source code" is actually public, and available here: https://ambermd.org/downloads/AmberTools24_rc5.tar.bz2
[It has an odd file name just because we encourage most folks to download via a web link, so that we can monitor total usage. But it's all open source, and the link above was actually made to be used by conda builds, among other purposes.]
I'm not sure how @jakirkham (above) found the corresponding link to AmberTools23, but that is outdated.
If it is indeed the case that find_package(CUDAToolkit REQUIRED) helps, we can easily make that change to the source code. I'm completely out of bounds in thinking about what is required for Windows support.
Thanks David! 🙏
Does this live in a source controlled repo somewhere (GitHub, GitLab, Bitbucket, etc.)?
The version of the file is just coming from the recipe. It looks like Matt started a PR to update ( https://github.com/conda-forge/ambertools-feedstock/pull/141 ). Though it appears that ran into build issues.
AIUI this feedstock doesn't build Windows ( https://github.com/conda-forge/ambertools-feedstock/pull/148#issuecomment-2278637337 ). So that is not relevant.
In any event, yes, migrating to find_package(CUDAToolkit REQUIRED) would be quite helpful for getting the CUDA 12+ builds here working.
It would also help when building via cross-compilation for other Linux architectures (like ARM).
Am renaming the PR to avoid further confusion. Hope that is ok.
Well, I'm several patches deep into trying to make this work, and I think I'm hitting a CMake bug.
Surely it would be better to make less hacky changes in AmberTools upstream; I was mainly trying to see what would be necessary to unblock the build, and tried to keep the patching ~minimal, at least conceptually. Feel free to pick up anything, though these patches were not really written with upstreaming in mind - not least because there's no public repo to contribute to - but rather as the most immediately necessary fixes to overcome the failures here.
Just to avoid any confusion - this PR would have to be for AmberTools 23 until #141 or a similar build is complete, so using the AmberTools23_rc6.tar.bz2 blob is the only option. Since building AmberTools 24 is stalled, updating it for these CUDA changes can't happen with that version.
Am deeply impressed by the amount of effort you spent patching here Axel! 🙏
Subscribed to that issue. Though it looks like my colleague Rob already replied to you over there. Agree with him that we likely need enable_language.
That all being said, agree this is work probably best taken on upstream. Think the other pieces you included here are a good starting point for anyone wanting to push this forward
Agreed Matt. Was trying to capture that in my comments above. Apologies if that was too muddled with other details
> Just to avoid any confusion - this PR would have to be for AmberTools 23 until #141 or a similar build is complete, so using the AmberTools23_rc6.tar.bz2 blob is the only option.
This is what I've been doing; the sources are unchanged in this PR.
👍 Yep, just wanted to be sure we were all on the same page; that comment was mostly to explain to David why this is being applied to 23, not 24.
Well, I got things to build, but then run into:
$SRC_DIR/AmberTools/src/quick/src/cuda/gpu_getxc.h: error: no instance of overloaded function "atomicAdd" matches the argument list
Does the header in question have #include <cooperative_groups.h>?
That seems like the kind of thing we would need. It is also covered in this blogpost
Also worth noting this header lives in cuda-cudart-dev_{{ target_platform }}, which {{ compiler("cuda") }} pulls in as a dependency. So it should be available.
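For context, a sketch of how this error commonly arises (an illustrative guess, not AmberTools' actual code): the double-precision overload of atomicAdd only exists when compiling for compute capability 6.0 or newer, so either missing headers or too-old architecture flags can produce the "no instance of overloaded function" diagnostic.

```cuda
// Illustrative sketch (hypothetical kernel, not from AmberTools):
// atomicAdd(double*, double) is a built-in only for sm_60 and newer;
// compiled for an older architecture, nvcc reports
// "no instance of overloaded function 'atomicAdd' matches the argument list".
#include <cooperative_groups.h>  // the include suggested above

__global__ void accumulate(double *sum, const double *vals, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Uses the double overload; requires e.g. nvcc -arch=sm_60 (or newer).
        atomicAdd(sum, vals[i]);
    }
}
```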
This PR has been triggered in an effort to update cuda120.
Notes and instructions for merging this PR:
Please note that if you close this PR we presume that the feedstock has been rebuilt, so if you are going to perform the rebuild yourself don't close this PR until your rebuild has been merged.
Here are some more details about this specific migrator:
If this PR was opened in error or needs to be updated please add the bot-rerun label to this PR. The bot will close this PR and schedule another one. If you do not have permissions to add this label, you can use the phrase @conda-forge-admin, please rerun bot in a PR comment to have the conda-forge-admin add it for you.
This PR was created by the regro-cf-autotick-bot. The regro-cf-autotick-bot is a service to automatically track the dependency graph, migrate packages, and propose package version updates for conda-forge. Feel free to drop us a line if there are any issues! This PR was generated by - please use this URL for debugging.