Closed regro-cf-autotick-bot closed 4 years ago
Hi! This is the friendly automated conda-forge-linting service.
I just wanted to let you know that I linted all conda-recipes in your PR (recipe) and found it was in an excellent condition.
Looks like the requirements are the same. So we don't need to update those.
Note #12. I'd like to build v6.6.0 first before v7.
Though we may need to do something extra to support CUB.
Note #12.
Good catch. Thanks for the reminder!
Iβd like to build v6.6.0 first before v7.
Yeah of course. Sorry I missed this one. Would you like to submit a PR?
I assume that this needs something like:
skip: True # [py27 or (not linux64) or cuda_compiler_version == "None"]
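For context, a sketch of where such a selector might sit in the recipe (a hypothetical meta.yaml fragment, not this feedstock's actual build section):

```yaml
# Hypothetical meta.yaml fragment -- illustrative only.
# The bracketed selector tells conda-build to skip this build
# on Python 2.7, on anything other than linux64, or when no
# CUDA compiler is configured for the variant.
build:
  number: 0
  skip: True  # [py27 or (not linux64) or cuda_compiler_version == "None"]
```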
There probably should be a version 6 branch since more releases are expected in the 6 line. Will Python 2 automatically select the highest version it has binaries for? I think it does.
No, the selector will need to be updated to exclude Python 2. However, this is easy enough to do.
True. Though it's fine if we want to get 6.6.0 in before merging this I think. Or do you see an issue with that?
Let me work on 6.6.0 later today.
See #17.
Oops, I meant to add py27 to the selector example, not win...
Cool! One step ahead of me! Let us see if CI is happy there.
Or do you see an issue with that?
I think it would be better to accept #17 as a new branch, named "cupy6" or similar, so that we save one CI build, then accept this PR into master. All branches get published, so that will keep a v6 line open. Not very important, though - you could accept 6.6 into master, then branch, and it just runs the CI one extra time. Might be easier to do.
It'd be nice to keep a branch for v6 and master for v7 onward. I am just wondering if any PR merged to the v6 branch would trigger the package build. @jakirkham Do you know?
Oops, I meant to add Python 2 to the selector example, not "win"...
Ah ok. Yeah that makes sense.
I think it would be better to accept #17 as a new branch, named "cupy6" or similar, so that we save one CI build, then accept this PR into master.
We can do either. I'm somewhat partial to merging that PR and then creating the branch, but I don't have strong feelings about it.
It'd be nice to keep a branch for v6 and master for v7 onward. I am just wondering if any PR merged to the v6 branch would trigger the package build.
Yep, that's actually how we manage multiple versions on a feedstock.
I'm somewhat partial to merging that PR and then creating the branch, but I don't have strong feelings about it.
Go ahead, it's probably simpler and doesn't waste that much. As long as you don't bump the build number, it won't store multiple versions of version 6 in anaconda.
(Technically two builds, since you'll need to rebase this PR since they both change the same values)
I just created a v6 branch. Now working on the skip for this PR...
@conda-forge-admin, please rerender.
Yep, that's actually how we manage multiple versions on a feedstock.
Glad to know, thanks for confirming @jakirkham!
Done, PTAL @jakirkham (and @henryiii if you have time).
I'm happy if it passes. I assume we are really not testing very much on the GPU side, so if CUB is missing, will that show up, for example?
I'm happy if it passes. I assume we are really not testing very much on the GPU side, so if CUB is missing, will that show up, for example?
Darn it, I forgot CUB, thanks for reminding me.
No, if CUB is missing, CuPy will simply ignore it and not build that module. I will add the CUB support later today, but for the record CUB will be built in soon (cupy/cupy#2584), so what I'm going to do is just a temporary workaround and will be reverted once that PR is merged...
@conda-forge-admin, please rerender
Hi! This is the friendly automated conda-forge-webservice.
I tried to re-render for you, but it looks like there was nothing to do.
CI failure seems irrelevant. Let me make the recipe a little more robust.
Including CUB temporarily here SGTM.
Some CI tests seem to fail due to network issues, re-kicking it.
@conda-forge-admin, please restart ci
@jakirkham All checks passed! It's in your hands now.
Maybe we should let @kmaehashi do the honors
Maybe I should wait to ask till after this is merged, but what about cuTENSOR? Should that be added too?
Maybe I should wait to ask till after this is merged, but what about cuTENSOR? Should that be added too?
Someone will have to create a cuTENSOR-feedstock for us to depend on, like NCCL and cuDNN here. CUB can be supported without problem because it's header-only.
@jakirkham Speaking of NCCL, why isn't NCCL a runtime dependency in the recipe?
Speaking of NCCL, why isn't NCCL a runtime dependency in the recipe?
It has a run_exports, so it knows that it is needed at runtime. (Someone correct me if I'm wrong there!)
I had someone point out that it was present in the installation earlier today; they were asking where it was used (hoping for automatic multi-GPU support, I think).
It has a run_exports, so it knows that it is needed at runtime. (Someone correct me if I'm wrong there!)
That's correct.
Basically this means that when it is added as a host dependency in a recipe, it will automatically be added as a run dependency.
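As a sketch of the mechanism (hypothetical recipe fragments, not the actual nccl or cupy recipes): the dependency declares run_exports once, and every consumer that lists it under host inherits a pinned run requirement automatically.

```yaml
# Hypothetical fragment of the *dependency's* recipe (e.g. nccl) --
# illustrative only. run_exports tells conda-build what runtime
# pin to inject into any package that builds against this one.
build:
  run_exports:
    - {{ pin_subpackage('nccl', max_pin='x') }}

# Hypothetical *consumer* recipe: nccl appears only under host,
# yet the built package still ends up with a pinned nccl run
# dependency because of the run_exports above.
requirements:
  host:
    - nccl
  run:
    - python  # no explicit nccl entry needed
```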
I had someone point out that it was present in the installation earlier today; they were asking where it was used (hoping for automatic multi-GPU support, I think).
Kind of a side note, but it would be useful to collect this info about how other people are using CuPy in the wild.
Kind of a side note, but it would be useful to collect this info about how other people are using CuPy in the wild.
Yesterday I taught a Python & GPU mini course, and it featured CuPy pretty heavily. I included a "7.0 coming soon" note, and it came out that night. That's partially why there have been several CuPy discussions here recently.
Very cool! <shameless plug>
Now all it needs is a lesson on RAPIDS.</shameless plug>
@jakirkham @henryiii Thanks for the explanation on run_exports. I didn't know nccl has it.
But it is still confusing for this feedstock: why doesn't cudnn have run_exports but nccl does? @jakirkham Do you mind if I leave a short comment on this point in the recipe to remind my future self?
hoping for automatic multiGPU support, I think
nccl in cupy is just a thin wrapper over the C API. None of the cupy functions use it, afaik.
it would be useful to collect this info about how other people are using CuPy in the wild.
I recently gave a short talk on the Python GPU ecosystem, in case you need more data points
@jakirkham @henryiii Thanks for the explanation on run_exports. I didn't know nccl has it.
Of course. Please let me know if you have more questions.
But it is still confusing for this feedstock: why doesn't cudnn have run_exports but nccl does?
Actually I think run_exports was added to cudnn recently (https://github.com/AnacondaRecipes/aggregate/issues/173). Though we should check and see whether the versions we are using have it. Probably best saved as a follow-up. Maybe we can add an issue?
@jakirkham Do you mind if I leave a short comment on this point in the recipe to remind my future self?
Not at all. Please feel free.
hoping for automatic multiGPU support, I think
nccl in cupy is just a thin wrapper over the C API. None of the cupy functions use it, afaik.
That said, this is actually very useful for people that want to tap into NCCL from Python.
it would be useful to collect this info about how other people are using CuPy in the wild.
I recently gave a short talk on the Python GPU ecosystem, in case you need more data points
Awesome! Thanks for the additional data point. More data is always welcome.
Actually I think run_exports was added to cudnn recently (AnacondaRecipes/aggregate#173). Though we should check and see whether the versions we are using have it. Probably best saved as a follow-up. Maybe we can add an issue?
Oh that's nice, glad I asked! Based on the CI outputs I think we're already using cudnn 7.6.4, which has run_exports. Should I just make the change here, and create an issue for v6?
That said, this is actually very useful for people that want to tap into NCCL from Python.
Yes, I think that's the purpose. That wrapper is used in Chainer (although they just announced the plan to discontinue Chainer...). We also use it extensively in our projects.
Overall, I think CuPy has such good coverage of the common CUDA APIs that PyCUDA and the alternatives can no longer compete.
Of course. Please let me know if you have more questions.
@jakirkham How about going through the issues I created when you have time?
and create an issue for v6
Can't you just make a PR to v6? I think it would be fine to drop those here - just rerun the conda-forge rerender and make sure it's still listing the pinned exports in the generated files (.ci_support/*).
Checked locally, dropping cudnn does not change the CI files, so I think it's fine. Shouldn't even need to change the build number.
@conda-forge-admin, please rerender
Hi! This is the friendly automated conda-forge-webservice.
I tried to re-render for you, but it looks like there was nothing to do.
@henryiii Thanks a lot for your help! You know conda-forge much better than I do. I think you're right. Just made the changes here and in #18. Let's see if CI passes.
Linking the run_exports discussion (https://github.com/conda-forge/cupy-feedstock/pull/18#discussion_r354670446) back to here. @jakirkham suggested we need to revert 67cba03 and wait for the change in upstream pinnings. This would work: https://github.com/conda-forge/cupy-feedstock/pull/18#discussion_r354677782.
@conda-forge-admin, please rerender
Weird, now pin_run_as_build is gone? I thought we had run_exports for cudnn=7.6.4?
Yep, that's expected, as run_exports is taking over, which does the same thing. The only difference is the info is now encoded in the cudnn package as opposed to the variant files here.
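To illustrate the difference (hypothetical config fragments): pin_run_as_build lives in the consuming side's variant config, whereas run_exports is declared once in the dependency's own recipe, so rerendering can drop the pin from the feedstock's variant files.

```yaml
# Before -- hypothetical pin carried in the feedstock's variant
# config (conda_build_config.yaml); illustrative only.
pin_run_as_build:
  cudnn:
    max_pin: x

# After -- the equivalent pin declared once in the cudnn recipe
# itself, so every consumer inherits it without a variant entry.
build:
  run_exports:
    - {{ pin_subpackage('cudnn', max_pin='x') }}
```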
All CI tests passed! @conda-forge/cupy PTAL.
Close #12.
It is very likely that the current package version for this feedstock is out of date. Notes for merging this PR:
- Ensure the license_file is packaged.

Note that the bot will stop issuing PRs if more than 3 version bump PRs generated by the bot are open. If you don't want to package a particular version, please close the PR.

If this PR was opened in error or needs to be updated, please add the bot-rerun label to this PR. The bot will close this PR and schedule another one.

This PR was created by the cf-regro-autotick-bot. The cf-regro-autotick-bot is a service to automatically track the dependency graph, migrate packages, and propose package version updates for conda-forge. If you would like a local version of this bot, you might consider using rever. Rever is a tool for automating software releases and forms the backbone of the bot's conda-forge PRing capability. Rever is both conda (conda install -c conda-forge rever) and pip (pip install rever) installable. Finally, feel free to drop us a line if there are any issues! This PR was generated by https://circleci.com/gh/regro/circle_worker/12630; please use this URL for debugging.