conan-io / conan-center-index

Recipes for the ConanCenter repository
https://conan.io/center
MIT License
969 stars 1.78k forks

[request] cuda/11.2 #4844

Open SpaceIm opened 3 years ago

SpaceIm commented 3 years ago

Package Details

Description Of The Library / Tool

CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs).

More information

There was a discussion about a cuda recipe here: https://github.com/conan-io/wishlist/issues/235

I think it should not be like the current "system" recipes (I mean that the version should be tracked). The recipe should ensure that the proper CUDA version is already installed on the system, and raise if it is not. It should very likely not try to emulate/override FindCUDA.cmake, which is complex: https://cmake.org/cmake/help/latest/module/FindCUDA.html. With machine learning libs being packaged, it's just not acceptable to unconditionally disable CUDA in those recipes (I'm not a data scientist, but I have worked with several people in this field, and I can't remember anyone not using CUDA). Obviously, these cuda options should be disabled by default, since it's a non-portable feature, but at least consumers with a CUDA-capable GPU could enable them.

blackliner commented 3 years ago

The runfiles can technically be downloaded and extracted, though I don't know how to use the resulting "rootfs" from there:

wget https://developer.download.nvidia.com/compute/cuda/11.4.0/local_installers/cuda_11.4.0_470.42.01_linux.run
sh cuda_11.4.0_470.42.01_linux.run --noexec --target /some/absolute/path

One teeny tiny issue I have with downloading a 2 GB runfile (6 GB extracted) though: our devs regularly purge the content of ~/.conan to recover from various conan issues, which would lead to a lot of bandwidth consumption on their end (some have really slow downlinks). Same for CI, which would need proper caching for such big files (we currently use a shared conan_download_cache). So I am not 100% convinced about CUDA being useful as a conan package :-/
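
For what it's worth, the shared download cache mentioned here can be configured per client in Conan 1.x; this is a conan.conf fragment, and the path is illustrative:

```ini
# conan.conf (Conan 1.x): reuse downloads across cache purges and CI jobs
[storage]
download_cache = /shared/conan_download_cache
```

Since the cache is keyed by checksum, purging ~/.conan/data would no longer force a re-download of the big runfiles.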

Tumb1eweed commented 2 years ago

Package Details

Description Of The Library / Tool

CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphical processing units (GPUs).

More information

There was a discussion about a cuda recipe here: conan-io/wishlist#235

I think it should not be like the current "system" recipes (I mean that the version should be tracked). The recipe should ensure that the proper CUDA version is already installed on the system, and raise if it is not. It should very likely not try to emulate/override FindCUDA.cmake, which is complex: https://cmake.org/cmake/help/latest/module/FindCUDA.html. With machine learning libs being packaged, it's just not acceptable to unconditionally disable CUDA in those recipes (I'm not a data scientist, but I have worked with several people in this field, and I can't remember anyone not using CUDA). Obviously, these cuda options should be disabled by default, since it's a non-portable feature, but at least consumers with a CUDA-capable GPU could enable them.

Can we use an approach like Anaconda's, splitting CUDA into smaller pieces? I don't know whether that is a good method or not. Link: https://anaconda.org/nvidia/repo

Tumb1eweed commented 2 years ago

The runfiles can technically be downloaded and extracted, though I don't know how to use the resulting "rootfs" from there:

wget https://developer.download.nvidia.com/compute/cuda/11.4.0/local_installers/cuda_11.4.0_470.42.01_linux.run
sh cuda_11.4.0_470.42.01_linux.run --noexec --target /some/absolute/path

One teeny tiny issue I have with downloading a 2 GB runfile (6 GB extracted) though: our devs regularly purge the content of ~/.conan to recover from various conan issues, which would lead to a lot of bandwidth consumption on their end (some have really slow downlinks). Same for CI, which would need proper caching for such big files (we currently use a shared conan_download_cache). So I am not 100% convinced about CUDA being useful as a conan package :-/

Can the runfile be used in a purely non-interactive way? Otherwise, how could cudatoolkit work as a conan package? Never mind, I found it: https://docs.nvidia.com/cuda/cuda-installation-guide-linux/index.html#runfile-advanced

Tumb1eweed commented 2 years ago

(image: conda dependency graph) I drew this dependency graph from conda; could conan operate like this?

nenomius commented 2 years ago

Can we at least have cuda/system in CCI, and then you can think about how to package it properly for a couple more years?

jgsogo commented 2 years ago

A cuda/system-like recipe could be the way to go. It can behave as a proxy instead of installing anything: it can check that the proper CUDA version is available and move on; if it is not available, it can raise a meaningful message telling the user to install it, maybe providing a link to the install instructions.

Tumb1eweed commented 2 years ago

A cuda/system-like recipe could be the way to go. It can behave as a proxy instead of installing anything: it can check that the proper CUDA version is available and move on; if it is not available, it can raise a meaningful message telling the user to install it, maybe providing a link to the install instructions.

Here is my cuda recipe. It does not include the driver component, just the toolkit, and it works fine for building projects like libtorch and tensorrt:

from conans import ConanFile, CMake, tools
import os

class CudaConan(ConanFile):
    name = "cuda"
    license = "NVIDIA EULA"
    description = "CUDA runtime libraries and header files"
    url = "https://developer.nvidia.com/cuda-downloads"
    settings = "os", "arch"
    no_copy_source = True
    options = {"shared": [True, False], "fPIC": [True, False]}
    default_options = {"shared": False, "fPIC": True}

    def source(self):
        runfile_path = os.path.join(self.source_folder, "install.run")
        tools.download(
            self.conan_data["sources"][self.version]["url"], filename=runfile_path
        )
        self.run(
            "sh {args} --extract={tmp}".format(
                args=runfile_path, tmp=self.source_folder
            )
        )

    def package(self):
        # Component folders produced by extracting the runfile
        include_components = [
            "cuda_cudart", "cuda_nvcc", "cuda_nvtx", "cuda_nvrtc", "cuda_thrust",
            "libcublas", "libnvjpeg", "libcufft", "libcurand", "libcusolver",
            "libcusparse", "libnpp",
        ]
        # cuda_thrust is header-only, so it has no lib64 folder
        lib_components = [c for c in include_components if c != "cuda_thrust"]
        self.copy("*", dst="bin", src=os.path.join(self.source_folder, "cuda_nvcc", "bin"))
        self.copy("*", dst="bin", src=os.path.join(self.source_folder, "cuda_nvcc", "nvvm", "bin"))
        for component in include_components:
            self.copy("*", dst="include", src=os.path.join(self.source_folder, component, "include"))
        for component in lib_components:
            self.copy("*", dst="lib64", src=os.path.join(self.source_folder, component, "lib64"))

    def package_info(self):
        self.cpp_info.libdirs = ['lib64']
        self.cpp_info.libs = tools.collect_libs(self)
        self.env_info.PATH.append(os.path.join(self.package_folder, "bin"))

jgsogo commented 2 years ago

We have a policy of not packaging binaries that haven't been built on our servers... it basically blocks this kind of recipe.

We need something like the following:


class Recipe:

    @property
    def package_path(self):
        try:
            self.run('some CUDA command to check it is installed')
            # Maybe check output to validate version?
            # Depending on 'self.settings' the libraries to use will be different
            self.output.info("CUDA found. Installation path: ...")
            return "..."  # the detected installation path
        except Exception:
            self.output.error("CUDA is not installed on your system. Follow instructions in this link: ....")
            raise

    def validate(self):
        self.output.info("CUDA found. Installation path: {}".format(self.package_path))

    def package_info(self):
        pkg_path = self.package_path
        self.output.info("CUDA found. Installation path: {}".format(pkg_path))

        # Populate 'cpp_info' with the proper information
We might consider an option to run the download and installation script (instead of raising in validate()). Then the package_path property can behave differently depending on that option. But download+install cannot be activated by default in CCI.
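
To make the check concrete, here is a standalone sketch of what the detection step could do. The helper name `find_cuda_home`, its lookup order, and the version parsing are assumptions of mine, not an existing Conan API; a recipe's validate() could call something like this and raise when it returns None:

```python
import os
import re
import shutil
import subprocess

def find_cuda_home(expected_version=None):
    """Locate a system CUDA toolkit and return its root path, or None.

    Hypothetical helper: checks CUDA_HOME/CUDA_PATH, then nvcc on PATH;
    'expected_version' (e.g. "11.2") is compared against `nvcc --version`.
    """
    # 1. Environment variables commonly set by CUDA installers
    for var in ("CUDA_HOME", "CUDA_PATH"):
        root = os.environ.get(var)
        if root and os.path.isdir(root):
            return root
    # 2. nvcc on PATH: the toolkit root is two levels up (<root>/bin/nvcc)
    nvcc = shutil.which("nvcc")
    if not nvcc:
        return None
    root = os.path.dirname(os.path.dirname(nvcc))
    if expected_version is not None:
        out = subprocess.run([nvcc, "--version"],
                             capture_output=True, text=True).stdout
        match = re.search(r"release (\d+\.\d+)", out)
        if match is None or match.group(1) != expected_version:
            return None  # wrong or unparseable toolkit version
    return root
```

Depending on self.settings, a real recipe would then pick the right subfolders (lib64 vs lib, Windows layout, etc.) when populating cpp_info.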

Tumb1eweed commented 2 years ago

We have a policy of not packaging binaries that haven't been built on our servers... it basically blocks this kind of recipe.

We need something like the following:

class Recipe:

    @property
    def package_path(self):
        try:
            self.run('some CUDA command to check it is installed')
            # Maybe check output to validate version?
            # Depending on 'self.settings' the libraries to use will be different
            self.output.info("CUDA found. Installation path: ...")
            return "..."  # the detected installation path
        except Exception:
            self.output.error("CUDA is not installed on your system. Follow instructions in this link: ....")
            raise

    def validate(self):
        self.output.info("CUDA found. Installation path: {}".format(self.package_path))

    def package_info(self):
        pkg_path = self.package_path
        self.output.info("CUDA found. Installation path: {}".format(pkg_path))

        # Populate 'cpp_info' with the proper information

We might consider an option to run the download and installation script (instead of raising in validate()). Then the package_path property can behave differently depending on that option. But download+install cannot be activated by default in CCI.

Oh, so it is like find_package().

jgsogo commented 2 years ago

Yes, something like that. Basically like the other ****/system recipes that are already in CCI. This one would be different in the sense that the installation/build part won't use a system package manager, but a regular download (and it could be packaged).

Tumb1eweed commented 2 years ago

Yes, something like that. Basically like the other ****/system recipes that are already in CCI. This one would be different in the sense that the installation/build part won't use a system package manager, but a regular download (and it could be packaged).

Are there some examples of system recipes we can learn from?

jgsogo commented 2 years ago

You can search for version = "system", or for recipes defining def system_requirements(self):; all of those are typically system recipes.
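
For reference, this is roughly the shape such recipes take; a minimal sketch in the Conan 1.x API used elsewhere in this thread, where the package and library names are illustrative, not a real CCI recipe:

```python
from conans import ConanFile, tools

class ExampleSystemConan(ConanFile):
    name = "example"
    version = "system"  # no sources are packaged; the system provides the bits
    settings = "os", "arch"

    def package_id(self):
        # One package id for every configuration: contents come from the system
        self.info.header_only()

    def system_requirements(self):
        # Delegate installation to the platform's package manager
        if tools.os_info.is_linux and tools.os_info.with_apt:
            package_tool = tools.SystemPackageTool(conanfile=self)
            package_tool.install(update=True, packages=["libexample-dev"])

    def package_info(self):
        # Nothing is packaged; just point consumers at the system library
        self.cpp_info.includedirs = []
        self.cpp_info.libdirs = []
        self.cpp_info.system_libs = ["example"]
```

A cuda recipe would differ mainly in skipping the package-manager call and instead verifying (or downloading) a specific toolkit version.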

SpaceIm commented 2 years ago

Why a system version? We can write a similar recipe but with proper versioning.

jgsogo commented 2 years ago

Yes, I was mentioning system recipes because they don't package anything; they just check that something is available and populate cpp_info. But here, this recipe should have versions and check whether the headers/libraries found match that version (or download the proper ones if requested).

Tumb1eweed commented 2 years ago

Yes, I was mentioning system recipes because they don't package anything; they just check that something is available and populate cpp_info. But here, this recipe should have versions and check whether the headers/libraries found match that version (or download the proper ones if requested).

In that case, a system recipe has to assume that the user installs CUDA at the system level; it can't support a custom installation location, am I right?

unitedtimur commented 5 months ago

Any updates for cuda?