This is already done indirectly. How we handle this is: `compiler("cuda")` constrains `cudatoolkit`, and `cudatoolkit` constrains `__cuda`.
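For concreteness, here is a minimal sketch of that chain in recipe terms; the file names and the `>=11.2` spec are illustrative, not the actual feedstock contents:

```yaml
# cudatoolkit/meta.yaml (fragment; spec illustrative)
requirements:
  run_constrained:
    - __cuda >=11.2            # enforced only if conda detects the __cuda virtual package

---
# downstream/meta.yaml (fragment)
requirements:
  build:
    - {{ compiler('cuda') }}   # run-exports a matching cudatoolkit pin
  run:
    - cudatoolkit              # inherits the __cuda constraint transitively
```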
Yes, but it is not forceful. Either way, I think it is reasonable to expect only downstream packages to implement the `__cuda` constraint. As I said elsewhere, I personally don't like the `__cuda` constraint AT ALL. I just intuitively think it is best applied at the source (e.g. cudatoolkit, cudnn, nccl) and not in tensorflow/pytorch. Others disagree, and I am quite happy to support the majority in this case, enthusiastically so.
By "not forceful" do you mean `run_constrained` instead of `run`? If so, please read https://github.com/conda/conda/issues/9115 for context.
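For anyone skimming, the practical difference between the two, sketched with an illustrative spec:

```yaml
# run: a hard requirement. The solver must satisfy __cuda at install
# time, so the package cannot be installed where conda does not report
# a sufficient __cuda virtual package (spec illustrative):
requirements:
  run:
    - __cuda >=11.2

---
# run_constrained: a soft constraint. Nothing is pulled in, but IF
# conda reports __cuda, its version must satisfy the spec:
requirements:
  run_constrained:
    - __cuda >=11.2
```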
Yes, that's exactly what I mean.
Also, I saw your comment directing me to read that issue in the other thread. Sorry, but that issue is really not informative; it is a mess, and I am not sure what you are seeing in it as a valuable resource or context. If you have something to say, please say it explicitly here. There is no reason to go read that outdated issue.
Either way, I really don't care about this one way or another, hence I closed this issue. The purpose of this issue wasn't to litigate the whole of `__cuda` all over again; all I was trying to say is: we add `__cuda` under `run` for tensorflow and, soon, pytorch. At face value, that's a weird decision. Tensorflow is not a "cuda" package by itself; it's only a "cuda" package because it depends on "cuda" packages (namely nccl, cudnn, and cudatoolkit). In my understanding, it would make much more sense to add the `run` condition here, at the source, not downstream. But the effective result is the same, so it really doesn't matter. However, it may help streamline the addition of more "cuda" packages in the future so they all behave similarly ecosystem-wide.
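Sketched side by side with an illustrative spec, the two placements look like this:

```yaml
# Placement described above for tensorflow/pytorch: the consumer itself
# declares the hard dependency (spec illustrative):
requirements:
  run:
    - __cuda >=11.2

---
# Placement argued for here: the "cuda" packages carry the requirement,
# and tensorflow picks it up transitively through its normal run deps:
requirements:
  run:
    - cudatoolkit   # cudatoolkit's own recipe would declare __cuda
    - cudnn
    - nccl
```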
Is mine a practical concern? Idk. Do I care about backward compatibility with ancient conda packages? Nope. Has my idea gained any traction in this community? Not. At. All. 😄