microsoft / LightGBM

A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
https://lightgbm.readthedocs.io/en/latest/
MIT License

[GPU] [LightGBM] [Fatal] Cannot build GPU program: Build Program Failure #5914

Closed Carloszone closed 1 year ago

Carloszone commented 1 year ago

Description

I followed the LightGBM GPU Tutorial to install the GPU version of LightGBM on my server. Everything was fine until I tested it with a simple Python script, which failed with:

[LightGBM] [Fatal] Cannot build GPU program: Build Program Failure
terminate called after throwing an instance of 'std::runtime_error'
  what():  Cannot build GPU program: Build Program Failure
Aborted (core dumped)

Reproducible example

sudo apt update && sudo apt upgrade
sudo apt install software-properties-common
sudo add-apt-repository universe
sudo apt install libboost-all-dev
sudo apt-get install cmake
sudo apt install clinfo
sudo apt install ocl-icd-opencl-dev
sudo mkdir -p /etc/OpenCL/vendors/
echo "libnvidia-opencl.so.1" | sudo tee /etc/OpenCL/vendors/nvidia.icd

git clone --recursive https://github.com/microsoft/LightGBM
cd LightGBM
mkdir build
cd build
cmake -DUSE_GPU=1 -DOpenCL_LIBRARY=/usr/local/cuda/targets/x86_64-linux/lib/libOpenCL.so -DOpenCL_INCLUDE_DIR=/usr/local/cuda/include/ ..
make -j$(nproc)
cd ..

sudo apt-get -y install python3-pip
pip3 install setuptools numpy scipy scikit-learn -U
sudo sh ./build-python.sh install --precompile

python3 lgbtest.py
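Before pointing LightGBM at the GPU, it can help to confirm that the OpenCL runtime actually exposes the device. A minimal diagnostic sketch, assuming `pyopencl` is available (`pip3 install pyopencl`); the helper itself only needs objects with the same attributes:

```python
# Diagnostic sketch: list every OpenCL platform/device pair the runtime exposes.
# The helper is library-agnostic: it only needs objects with .name and .get_devices().
def summarize_platforms(platforms):
    """Return (platform_name, device_name) pairs for all visible devices."""
    return [(p.name, d.name) for p in platforms for d in p.get_devices()]

# With pyopencl installed (pip3 install pyopencl), run it against the real runtime:
#   import pyopencl as cl
#   print(summarize_platforms(cl.get_platforms()))
```

On the setup above this should report a single `NVIDIA CUDA` / `Tesla T4` pair; if it reports nothing, the ICD file in `/etc/OpenCL/vendors/` is the first thing to re-check.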

Environment info

OS: Ubuntu 20.04
Python: 3.8
LightGBM: 3.3.5

nvidia-smi information:

NVIDIA-SMI / Driver Version: 525.105.17
CUDA Version: 12.0
GPU: Tesla T4


Command(s) you used to install LightGBM

git clone --recursive https://github.com/microsoft/LightGBM
cd LightGBM
mkdir build
cd build
cmake -DUSE_GPU=1 -DOpenCL_LIBRARY=/usr/local/cuda/targets/x86_64-linux/lib/libOpenCL.so -DOpenCL_INCLUDE_DIR=/usr/local/cuda/include/ ..
make -j$(nproc)
cd ..

sudo apt-get -y install python3-pip
pip3 install setuptools numpy scipy scikit-learn -U
sudo sh ./build-python.sh install --precompile
ldconfig -p | grep OpenCL result
    libOpenCL.so.1 (libc6,x86-64) => /usr/local/cuda/targets/x86_64-linux/lib/libOpenCL.so.1
    libOpenCL.so.1 (libc6,x86-64) => /lib/x86_64-linux-gnu/libOpenCL.so.1
    libOpenCL.so (libc6,x86-64) => /usr/local/cuda/targets/x86_64-linux/lib/libOpenCL.so
    libOpenCL.so (libc6,x86-64) => /lib/x86_64-linux-gnu/libOpenCL.so
clinfo result
Number of platforms                               1
  Platform Name                                   NVIDIA CUDA
  Platform Vendor                                 NVIDIA Corporation
  Platform Version                                OpenCL 3.0 CUDA 12.0.151
  Platform Profile                                FULL_PROFILE
  Platform Extensions                             cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_icd cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_nv_kernel_attribute cl_khr_device_uuid cl_khr_pci_bus_info cl_khr_external_semaphore cl_khr_external_memory cl_khr_external_semaphore_opaque_fd cl_khr_external_memory_opaque_fd
  Platform Host timer resolution                  0ns
  Platform Extensions function suffix             NV

  Platform Name                                   NVIDIA CUDA
Number of devices                                 1
  Device Name                                     Tesla T4
  Device Vendor                                   NVIDIA Corporation
  Device Vendor ID                                0x10de
  Device Version                                  OpenCL 3.0 CUDA
  Driver Version                                  525.105.17
  Device OpenCL C Version                         OpenCL C 1.2 
  Device Type                                     GPU
  Device Topology (NV)                            PCI-E, 00:00.6
  Device Profile                                  FULL_PROFILE
  Device Available                                Yes
  Compiler Available                              Yes
  Linker Available                                Yes
  Max compute units                               40
  Max clock frequency                             1590MHz
  Compute Capability (NV)                         7.5
  Device Partition                                (core)
    Max number of sub-devices                     1
    Supported partition types                     None
    Supported affinity domains                    (n/a)
  Max work item dimensions                        3
  Max work item sizes                             1024x1024x64
  Max work group size                             1024
=== CL_PROGRAM_BUILD_LOG ===
  Preferred work group size multiple              <getWGsizes:1200: create kernel : error -45>
  Warp size (NV)                                  32
  Max sub-groups per work group                   0
  Preferred / native vector sizes                 
    char                                                 1 / 1       
    short                                                1 / 1       
    int                                                  1 / 1       
    long                                                 1 / 1       
    half                                                 0 / 0        (n/a)
    float                                                1 / 1       
    double                                               1 / 1        (cl_khr_fp64)
  Half-precision Floating-point support           (n/a)
  Single-precision Floating-point support         (core)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
    Correctly-rounded divide and sqrt operations  Yes
  Double-precision Floating-point support         (cl_khr_fp64)
    Denormals                                     Yes
    Infinity and NANs                             Yes
    Round to nearest                              Yes
    Round to zero                                 Yes
    Round to infinity                             Yes
    IEEE754-2008 fused multiply-add               Yes
    Support is emulated in software               No
  Address bits                                    64, Little-Endian
  Global memory size                              16902717440 (15.74GiB)
  Error Correction support                        No
  Max memory allocation                           4225679360 (3.935GiB)
  Unified memory for Host and Device              No
  Integrated memory (NV)                          No
  Shared Virtual Memory (SVM) capabilities        (core)
    Coarse-grained buffer sharing                 Yes
    Fine-grained buffer sharing                   No
    Fine-grained system sharing                   No
    Atomics                                       No
  Minimum alignment for any data type             128 bytes
  Alignment of base address                       4096 bits (512 bytes)
  Preferred alignment for atomics                 
    SVM                                           0 bytes
    Global                                        0 bytes
    Local                                         0 bytes
  Max size for global variable                    0
  Preferred total size of global vars             0
  Global Memory cache type                        Read/Write
  Global Memory cache size                        1310720 (1.25MiB)
  Global Memory cache line size                   128 bytes
  Image support                                   Yes
    Max number of samplers per kernel             32
    Max size for 1D images from buffer            268435456 pixels
    Max 1D or 2D image array size                 2048 images
    Max 2D image size                             32768x32768 pixels
    Max 3D image size                             16384x16384x16384 pixels
    Max number of read image args                 256
    Max number of write image args                32
    Max number of read/write image args           0
  Max number of pipe args                         0
  Max active pipe reservations                    0
  Max pipe packet size                            0
  Local memory type                               Local
  Local memory size                               49152 (48KiB)
  Registers per block (NV)                        65536
  Max number of constant args                     9
  Max constant buffer size                        65536 (64KiB)
  Max size of kernel argument                     4352 (4.25KiB)
  Queue properties (on host)                      
    Out-of-order execution                        Yes
    Profiling                                     Yes
  Queue properties (on device)                    
    Out-of-order execution                        No
    Profiling                                     No
    Preferred size                                0
    Max size                                      0
  Max queues on device                            0
  Max events on device                            0
  Prefer user sync for interop                    No
  Profiling timer resolution                      1000ns
  Execution capabilities                          
    Run OpenCL kernels                            Yes
    Run native kernels                            No
    Sub-group independent forward progress        No
    Kernel execution timeout (NV)                 No
  Concurrent copy and kernel execution (NV)       Yes
    Number of async copy engines                  3
    IL version                                    (n/a)
  printf() buffer size                            1048576 (1024KiB)
  Built-in kernels                                (n/a)
  Device Extensions                               cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_icd cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_nv_kernel_attribute cl_khr_device_uuid cl_khr_pci_bus_info cl_khr_external_semaphore cl_khr_external_memory cl_khr_external_semaphore_opaque_fd cl_khr_external_memory_opaque_fd

NULL platform behavior
  clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...)  No platform
  clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...)   No platform
  clCreateContext(NULL, ...) [default]            No platform
  clCreateContext(NULL, ...) [other]              Success [NV]
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU)  No platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR)  No devices found in platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM)  Invalid device type for platform
  clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL)  No platform
    NOTE:   your OpenCL library only supports OpenCL 2.2,
        but some installed platforms support OpenCL 3.0.
        Programs using 3.0 features may crash
        or behave unexpectedly

Additional Comments

my lgbtest.py code:

import lightgbm
import numpy as np

def check_gpu_support():
    data = np.random.rand(50, 2)
    label = np.random.randint(2, size=50)
    print(label)
    train_data = lightgbm.Dataset(data, label=label)
    params = {'num_iterations': 1, 'device': 'gpu'}
    try:
        gbm = lightgbm.train(params, train_set=train_data)
        print("GPU True !!!")
    except Exception as e:
        # print the exception so the actual failure reason is visible
        print("GPU False !!!", e)

if __name__ == '__main__':
    check_gpu_support()
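If more than one OpenCL platform or device is visible, LightGBM's `gpu_platform_id` and `gpu_device_id` parameters (both default to -1, meaning "first found") let you pin the trainer to a specific one. A hedged variant of the params above; the helper function is my own, only the parameter names come from LightGBM:

```python
# Sketch: build LightGBM GPU params pinned to an explicit OpenCL platform/device.
# gpu_platform_id / gpu_device_id are real LightGBM parameters (default -1);
# the wrapper function itself is illustrative.
def gpu_params(platform_id=-1, device_id=-1, num_iterations=1):
    return {
        "device": "gpu",
        "gpu_platform_id": platform_id,  # index into clinfo's platform list
        "gpu_device_id": device_id,      # index into that platform's device list
        "num_iterations": num_iterations,
    }

# e.g. lightgbm.train(gpu_params(0, 0), train_set=train_data)
```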

RUN INFO

root@autodl-container-25d911a1fa-f3969ef1:/# python lgbtest.py
[1 1 1 1 0 0 1 1 0 0 1 0 0 1 0 1 0 1 0 1 0 0 1 0 0 0 1 0 0 1 1 1 0 0 1 1 1
 0 0 1 0 1 0 0 1 1 1 1 0 0]
/root/miniconda3/lib/python3.8/site-packages/lightgbm/engine.py:172: UserWarning: Found `num_iterations` in params. Will use it instead of argument
  _log_warning(f"Found `{alias}` in params. Will use it instead of argument")
[LightGBM] [Info] This is the GPU trainer!!
[LightGBM] [Info] Total Bins 36
[LightGBM] [Info] Number of data points in the train set: 50, number of used features: 2
[LightGBM] [Info] Using GPU Device: Tesla T4, Vendor: NVIDIA Corporation
[LightGBM] [Info] Compiling OpenCL Kernel with 64 bins...
Build Options: -D POWER_FEATURE_WORKGROUPS=9 -D USE_CONSTANT_BUF=0 -D USE_DP_FLOAT=0 -D CONST_HESSIAN=1 -cl-mad-enable -cl-no-signed-zeros -cl-fast-relaxed-math
Build Log:

[LightGBM] [Fatal] Cannot build GPU program: Build Program Failure
terminate called after throwing an instance of 'std::runtime_error'
  what():  Cannot build GPU program: Build Program Failure
Aborted (core dumped)
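Since the `Build Log:` section above came back empty, one low-cost diagnostic is raising LightGBM's `verbosity` parameter (a real parameter; values above 1 enable debug-level logging). Whether it surfaces more context for this particular OpenCL failure is an assumption worth testing:

```python
# Sketch: same training call with debug-level logging requested.
# 'verbosity' > 1 means debug in LightGBM; extra detail here is not guaranteed.
params = {"num_iterations": 1, "device": "gpu", "verbosity": 2}
# gbm = lightgbm.train(params, train_set=train_data)
```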
jameslamb commented 1 year ago

Thanks for using LightGBM and for the report. Someone here will get to this when we can. If you do your own additional investigation in the interim, please post what you learn.

Note that I've also reformatted some of your report. You might want to see https://docs.github.com/en/get-started/writing-on-github/getting-started-with-writing-and-formatting-on-github/basic-writing-and-formatting-syntax if you're new to authoring text in GitHub-flavored markdown.

Carloszone commented 1 year ago

Update: I tried to build the LightGBM environment from the GPU-version image, but still hit the same issue. I suspect a package or dependency conflict is causing it; unfortunately, I don't yet know how to fix it.

SalmanGafarov commented 1 year ago

I tried this:

!sudo sh -c 'cd /kaggle/working/LightGBM/python-package && python3 setup.cfg install --precompile'

and got this error:

 File "/kaggle/working/LightGBM/python-package/setup.cfg", line 8
    ignore =
            ^
SyntaxError: invalid syntax

The line it points to is the `ignore` entry in the `[flake8]` section of setup.cfg.

jmoralez commented 1 year ago

Hey @SalmanGafarov, you need to use `./build-python.sh install --precompile`

SalmanGafarov commented 1 year ago

I am getting this error:

building lightgbm
Requirement already satisfied: build>=0.10.0 in /usr/local/lib/python3.10/dist-packages (0.10.0)
Requirement already satisfied: tomli>=1.1.0 in /usr/local/lib/python3.10/dist-packages (from build>=0.10.0) (2.0.1)
Requirement already satisfied: packaging>=19.0 in /usr/local/lib/python3.10/dist-packages (from build>=0.10.0) (23.1)
Requirement already satisfied: pyproject_hooks in /usr/local/lib/python3.10/dist-packages (from build>=0.10.0) (1.0.0)
cp: cannot stat './python-package': No such file or directory 

for this command: `!sudo sh /kaggle/working/LightGBM/build-python.sh install --precompile`

jameslamb commented 1 year ago

It's expected that you build from the root of the LightGBM repo, since that script uses some relative paths. It looks like you're trying to run it from another location.

Do this:

cd /kaggle/working/LightGBM
sh build-python.sh install --precompile
SalmanGafarov commented 1 year ago

I tried to install the CUDA version manually and got dependency errors in Kaggle. Apparently multiprocess, apache-beam, and pathos require different dill versions:


cudf 23.6.1 requires cupy-cuda11x>=12.0.0, which is not installed.
cuml 23.6.0 requires cupy-cuda11x>=12.0.0, which is not installed.
dask-cudf 23.6.1 requires cupy-cuda11x>=12.0.0, which is not installed.
apache-beam 2.46.0 requires dill<0.3.2,>=0.3.1.1, but you have dill 0.3.6 which is incompatible.
apache-beam 2.46.0 requires numpy<1.25.0,>=1.14.3, but you have numpy 1.25.1 which is incompatible.
apache-beam 2.46.0 requires pyarrow<10.0.0,>=3.0.0, but you have pyarrow 11.0.0 which is incompatible.
cudf 23.6.1 requires protobuf<4.22,>=4.21.6, but you have protobuf 3.20.3 which is incompatible.
cuml 23.6.0 requires dask==2023.3.2, but you have dask 2023.7.0 which is incompatible.
dask-cuda 23.6.0 requires dask==2023.3.2, but you have dask 2023.7.0 which is incompatible.
dask-cudf 23.6.1 requires dask==2023.3.2, but you have dask 2023.7.0 which is incompatible.
momepy 0.6.0 requires shapely>=2, but you have shapely 1.8.5.post1 which is incompatible.
numba 0.57.1 requires numpy<1.25,>=1.21, but you have numpy 1.25.1 which is incompatible.
opentelemetry-api 1.18.0 requires importlib-metadata~=6.0.0, but you have importlib-metadata 6.7.0 which is incompatible.
pymc3 3.11.5 requires numpy<1.22.2,>=1.15.0, but you have numpy 1.25.1 which is incompatible.
pymc3 3.11.5 requires scipy<1.8.0,>=1.7.3, but you have scipy 1.11.1 which is incompatible.
raft-dask 23.6.2 requires dask==2023.3.2, but you have dask 2023.7.0 which is incompatible.
tensorflow 2.12.0 requires numpy<1.24,>=1.22, but you have numpy 1.25.1 which is incompatible.
ydata-profiling 4.3.1 requires numpy<1.24,>=1.16.0, but you have numpy 1.25.1 which is incompatible.
ydata-profiling 4.3.1 requires scipy<1.11,>=1.4.1, but you have scipy 1.11.1 which is incompatible.
jameslamb commented 1 year ago

@SalmanGafarov none of those dependency conflicts look related to lightgbm.

SalmanGafarov commented 1 year ago

Actually, the problem is that the CUDA build of lightgbm is outdated. I tried to find mutually compatible versions, but eventually it required Python 3.9, which doesn't work for me. It needs to be updated.

jijo7 commented 1 year ago

Hi, I would greatly appreciate it if you could let me know how to solve this issue (I am using Colab):

Code:

! git clone --recursive https://github.com/Microsoft/LightGBM
! cd LightGBM && rm -rf build && mkdir build && cd build && cmake -DUSE_GPU=1 ../../LightGBM && make -j4 && cd ../python-package && sh ./build-python.sh install --precompile --gpu;

Error:

CMake Deprecation Warning at CMakeLists.txt:35 (cmake_minimum_required):
  Compatibility with CMake < 3.5 will be removed from a future version of
  CMake.

  Update the VERSION argument <min> value or use a ...<max> suffix to tell
  CMake that the project does not need compatibility with older versions.

-- The C compiler identification is GNU 11.4.0
-- The CXX compiler identification is GNU 11.4.0
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/bin/cc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/c++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found OpenMP_C: -fopenmp (found version "4.5") 
-- Found OpenMP_CXX: -fopenmp (found version "4.5") 
-- Found OpenMP: TRUE (found version "4.5")  
-- Looking for CL_VERSION_3_0
-- Looking for CL_VERSION_3_0 - found
-- Found OpenCL: /usr/lib/x86_64-linux-gnu/libOpenCL.so (found version "3.0") 
-- OpenCL include directory: /usr/include
-- Found Boost: /usr/lib/x86_64-linux-gnu/cmake/Boost-1.74.0/BoostConfig.cmake (found suitable version "1.74.0", minimum required is "1.56.0") found components: filesystem system 
-- Performing Test MM_PREFETCH
-- Performing Test MM_PREFETCH - Success
-- Using _mm_prefetch
-- Performing Test MM_MALLOC
-- Performing Test MM_MALLOC - Success
-- Using _mm_malloc
-- Configuring done (0.9s)
-- Generating done (0.0s)
-- Build files have been written to: /content/LightGBM/python-package/LightGBM/build
[  1%] Building CXX object CMakeFiles/lightgbm_capi_objs.dir/src/c_api.cpp.o
[  3%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/boosting/boosting.cpp.o
[  5%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/boosting/cuda/cuda_score_updater.cpp.o
[  7%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/boosting/gbdt.cpp.o
[  9%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/boosting/gbdt_model_text.cpp.o
[ 10%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/boosting/gbdt_prediction.cpp.o
[ 12%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/boosting/prediction_early_stop.cpp.o
[ 14%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/boosting/sample_strategy.cpp.o
[ 16%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/cuda/cuda_utils.cpp.o
[ 18%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/bin.cpp.o
[ 20%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/config.cpp.o
[ 21%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/config_auto.cpp.o
[ 23%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/cuda/cuda_column_data.cpp.o
[ 25%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/cuda/cuda_metadata.cpp.o
[ 27%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/cuda/cuda_row_data.cpp.o
[ 29%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/cuda/cuda_tree.cpp.o
[ 30%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/dataset.cpp.o
[ 30%] Built target lightgbm_capi_objs
[ 32%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/dataset_loader.cpp.o
[ 34%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/file_io.cpp.o
[ 36%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/json11.cpp.o
[ 38%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/metadata.cpp.o
[ 40%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/parser.cpp.o
[ 41%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/train_share_states.cpp.o
[ 43%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/io/tree.cpp.o
[ 45%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/metric/cuda/cuda_binary_metric.cpp.o
[ 47%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/metric/cuda/cuda_pointwise_metric.cpp.o
[ 49%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/metric/cuda/cuda_regression_metric.cpp.o
[ 50%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/metric/dcg_calculator.cpp.o
[ 52%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/metric/metric.cpp.o
[ 54%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/network/linker_topo.cpp.o
[ 56%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/network/linkers_mpi.cpp.o
[ 58%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/network/linkers_socket.cpp.o
[ 60%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/network/network.cpp.o
[ 61%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/objective/cuda/cuda_binary_objective.cpp.o
[ 63%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/objective/cuda/cuda_multiclass_objective.cpp.o
[ 65%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/objective/cuda/cuda_rank_objective.cpp.o
[ 67%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/objective/cuda/cuda_regression_objective.cpp.o
[ 69%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/objective/objective_function.cpp.o
[ 70%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/cuda/cuda_best_split_finder.cpp.o
[ 72%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/cuda/cuda_data_partition.cpp.o
[ 74%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/cuda/cuda_histogram_constructor.cpp.o
[ 76%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/cuda/cuda_leaf_splits.cpp.o
[ 78%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/cuda/cuda_single_gpu_tree_learner.cpp.o
[ 80%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/data_parallel_tree_learner.cpp.o
[ 81%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/feature_parallel_tree_learner.cpp.o
[ 83%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/gpu_tree_learner.cpp.o
In file included from /usr/include/CL/cl.h:20,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/cl.hpp:19,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/config.hpp:16,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/buffer.hpp:14,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/core.hpp:18,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.h:33,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/parallel_tree_learner.h:15,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/data_parallel_tree_learner.cpp:9:
/usr/include/CL/cl_version.h:22:104: note: ‘#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)’
   22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
      |                                                                                                        ^
In file included from /usr/include/CL/cl.h:20,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/cl.hpp:19,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/config.hpp:16,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/buffer.hpp:14,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/core.hpp:18,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.h:33,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/parallel_tree_learner.h:15,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/feature_parallel_tree_learner.cpp:8:
/usr/include/CL/cl_version.h:22:104: note: ‘#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)’
   22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
      |                                                                                                        ^
In file included from /usr/include/CL/cl.h:20,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/cl.hpp:19,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/config.hpp:16,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/buffer.hpp:14,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/core.hpp:18,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.h:33,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.cpp:7:
/usr/include/CL/cl_version.h:22:104: note: ‘#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)’
   22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
      |                                                                                                        ^
In file included from /usr/include/boost/smart_ptr/detail/sp_thread_sleep.hpp:22,
                 from /usr/include/boost/smart_ptr/detail/yield_k.hpp:23,
                 from /usr/include/boost/smart_ptr/detail/spinlock_gcc_atomic.hpp:14,
                 from /usr/include/boost/smart_ptr/detail/spinlock.hpp:42,
                 from /usr/include/boost/smart_ptr/detail/spinlock_pool.hpp:25,
                 from /usr/include/boost/smart_ptr/shared_ptr.hpp:29,
                 from /usr/include/boost/shared_ptr.hpp:17,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/utility/program_cache.hpp:17,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/detail/meta_kernel.hpp:40,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/types/complex.hpp:20,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/types.hpp:18,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/core.hpp:25,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.h:33,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/parallel_tree_learner.h:15,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/data_parallel_tree_learner.cpp:9:
/usr/include/boost/bind.hpp:36:1: note: ‘#pragma message: The practice of declaring the Bind placeholders (_1, _2, ...) in the global namespace is deprecated. Please use <boost/bind/bind.hpp> + using namespace boost::placeholders, or define BOOST_BIND_GLOBAL_PLACEHOLDERS to retain the current behavior.’
   36 | BOOST_PRAGMA_MESSAGE(
      | ^~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/boost/smart_ptr/detail/sp_thread_sleep.hpp:22,
                 from /usr/include/boost/smart_ptr/detail/yield_k.hpp:23,
                 from /usr/include/boost/smart_ptr/detail/spinlock_gcc_atomic.hpp:14,
                 from /usr/include/boost/smart_ptr/detail/spinlock.hpp:42,
                 from /usr/include/boost/smart_ptr/detail/spinlock_pool.hpp:25,
                 from /usr/include/boost/smart_ptr/shared_ptr.hpp:29,
                 from /usr/include/boost/shared_ptr.hpp:17,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/utility/program_cache.hpp:17,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/detail/meta_kernel.hpp:40,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/types/complex.hpp:20,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/types.hpp:18,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/core.hpp:25,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.h:33,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/parallel_tree_learner.h:15,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/feature_parallel_tree_learner.cpp:8:
/usr/include/boost/bind.hpp:36:1: note: ‘#pragma message: The practice of declaring the Bind placeholders (_1, _2, ...) in the global namespace is deprecated. Please use <boost/bind/bind.hpp> + using namespace boost::placeholders, or define BOOST_BIND_GLOBAL_PLACEHOLDERS to retain the current behavior.’
   36 | BOOST_PRAGMA_MESSAGE(
      | ^~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/boost/smart_ptr/detail/sp_thread_sleep.hpp:22,
                 from /usr/include/boost/smart_ptr/detail/yield_k.hpp:23,
                 from /usr/include/boost/smart_ptr/detail/spinlock_gcc_atomic.hpp:14,
                 from /usr/include/boost/smart_ptr/detail/spinlock.hpp:42,
                 from /usr/include/boost/smart_ptr/detail/spinlock_pool.hpp:25,
                 from /usr/include/boost/smart_ptr/shared_ptr.hpp:29,
                 from /usr/include/boost/shared_ptr.hpp:17,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/utility/program_cache.hpp:17,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/detail/meta_kernel.hpp:40,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/types/complex.hpp:20,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/types.hpp:18,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/core.hpp:25,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.h:33,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.cpp:7:
/usr/include/boost/bind.hpp:36:1: note: ‘#pragma message: The practice of declaring the Bind placeholders (_1, _2, ...) in the global namespace is deprecated. Please use <boost/bind/bind.hpp> + using namespace boost::placeholders, or define BOOST_BIND_GLOBAL_PLACEHOLDERS to retain the current behavior.’
   36 | BOOST_PRAGMA_MESSAGE(
      | ^~~~~~~~~~~~~~~~~~~~
[ 85%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/gradient_discretizer.cpp.o
[ 87%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/linear_tree_learner.cpp.o
[ 89%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/serial_tree_learner.cpp.o
[ 90%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/tree_learner.cpp.o
In file included from /usr/include/CL/cl.h:20,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/cl.hpp:19,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/config.hpp:16,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/buffer.hpp:14,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/core.hpp:18,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.h:33,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/tree_learner.cpp:7:
/usr/include/CL/cl_version.h:22:104: note: ‘#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)’
   22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
      |                                                                                                        ^
In file included from /usr/include/boost/smart_ptr/detail/sp_thread_sleep.hpp:22,
                 from /usr/include/boost/smart_ptr/detail/yield_k.hpp:23,
                 from /usr/include/boost/smart_ptr/detail/spinlock_gcc_atomic.hpp:14,
                 from /usr/include/boost/smart_ptr/detail/spinlock.hpp:42,
                 from /usr/include/boost/smart_ptr/detail/spinlock_pool.hpp:25,
                 from /usr/include/boost/smart_ptr/shared_ptr.hpp:29,
                 from /usr/include/boost/shared_ptr.hpp:17,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/utility/program_cache.hpp:17,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/detail/meta_kernel.hpp:40,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/types/complex.hpp:20,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/types.hpp:18,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/core.hpp:25,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.h:33,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/tree_learner.cpp:7:
/usr/include/boost/bind.hpp:36:1: note: ‘#pragma message: The practice of declaring the Bind placeholders (_1, _2, ...) in the global namespace is deprecated. Please use <boost/bind/bind.hpp> + using namespace boost::placeholders, or define BOOST_BIND_GLOBAL_PLACEHOLDERS to retain the current behavior.’
   36 | BOOST_PRAGMA_MESSAGE(
      | ^~~~~~~~~~~~~~~~~~~~
[ 92%] Building CXX object CMakeFiles/lightgbm_objs.dir/src/treelearner/voting_parallel_tree_learner.cpp.o
In file included from /usr/include/CL/cl.h:20,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/cl.hpp:19,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/config.hpp:16,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/buffer.hpp:14,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/core.hpp:18,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.h:33,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/parallel_tree_learner.h:15,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/voting_parallel_tree_learner.cpp:11:
/usr/include/CL/cl_version.h:22:104: note: ‘#pragma message: cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)’
   22 | #pragma message("cl_version.h: CL_TARGET_OPENCL_VERSION is not defined. Defaulting to 300 (OpenCL 3.0)")
      |                                                                                                        ^
In file included from /usr/include/boost/smart_ptr/detail/sp_thread_sleep.hpp:22,
                 from /usr/include/boost/smart_ptr/detail/yield_k.hpp:23,
                 from /usr/include/boost/smart_ptr/detail/spinlock_gcc_atomic.hpp:14,
                 from /usr/include/boost/smart_ptr/detail/spinlock.hpp:42,
                 from /usr/include/boost/smart_ptr/detail/spinlock_pool.hpp:25,
                 from /usr/include/boost/smart_ptr/shared_ptr.hpp:29,
                 from /usr/include/boost/shared_ptr.hpp:17,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/utility/program_cache.hpp:17,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/detail/meta_kernel.hpp:40,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/types/complex.hpp:20,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/types.hpp:18,
                 from /content/LightGBM/python-package/LightGBM/external_libs/compute/include/boost/compute/core.hpp:25,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/gpu_tree_learner.h:33,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/parallel_tree_learner.h:15,
                 from /content/LightGBM/python-package/LightGBM/src/treelearner/voting_parallel_tree_learner.cpp:11:
/usr/include/boost/bind.hpp:36:1: note: ‘#pragma message: The practice of declaring the Bind placeholders (_1, _2, ...) in the global namespace is deprecated. Please use <boost/bind/bind.hpp> + using namespace boost::placeholders, or define BOOST_BIND_GLOBAL_PLACEHOLDERS to retain the current behavior.’
   36 | BOOST_PRAGMA_MESSAGE(
      | ^~~~~~~~~~~~~~~~~~~~
[ 92%] Built target lightgbm_objs
[ 94%] Linking CXX shared library /content/LightGBM/python-package/LightGBM/lib_lightgbm.so
[ 96%] Building CXX object CMakeFiles/lightgbm.dir/src/main.cpp.o
[ 98%] Building CXX object CMakeFiles/lightgbm.dir/src/application/application.cpp.o
[ 98%] Built target _lightgbm
[100%] Linking CXX executable /content/LightGBM/python-package/LightGBM/lightgbm
[100%] Built target lightgbm
sh: 0: cannot open ./build-python.sh: No such file

Thanks in advance.

jameslamb commented 1 year ago

As of this writing, lightgbm>=4.0.0 comes already installed on Google Colab machine learning notebook environments. To use lightgbm there:

  1. select Edit -> Notebook Settings and choose T4 GPU (or whatever other NVIDIA GPU is available)
  2. run the following in a notebook cell to ensure LightGBM can utilize the NVIDIA GPU (https://github.com/microsoft/LightGBM/issues/4497#issuecomment-1181435844)
!mkdir -p /etc/OpenCL/vendors && echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd
  3. pass `{"device": "gpu"}` in your training parameters to use the GPU-enabled version of LightGBM.
import lightgbm as lgb
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=10_000)
dtrain = lgb.Dataset(X, label=y)
bst = lgb.train(
    params={
        "objective": "regression",
        "device": "gpu",
        "verbose": 1
    },
    train_set=dtrain,
    num_boost_round=5
)

You should see something like the following:

[LightGBM] [Info] This is the GPU trainer!!
[LightGBM] [Info] Total Bins 25500
[LightGBM] [Info] Number of data points in the train set: 10000, number of used features: 100
[LightGBM] [Info] Using GPU Device: Tesla T4, Vendor: NVIDIA Corporation
[LightGBM] [Info] Compiling OpenCL Kernel with 256 bins...
[LightGBM] [Info] GPU programs have been built
[LightGBM] [Info] Size of histogram bin entry: 8
[LightGBM] [Info] 100 dense feature groups (0.95 MB) transferred to GPU in 0.001878 secs. 0 sparse feature groups
[LightGBM] [Info] Start training from score 1.025020

I've also answered this on Stack Overflow: https://stackoverflow.com/a/77078844/3986677. If you found this answer useful, please upvote it there so that others reaching Stack Overflow from search engines can get past this issue.

github-actions[bot] commented 1 year ago

This issue has been automatically closed because it has been awaiting a response for too long. When you have time to work with the maintainers to resolve this issue, please post a new comment and it will be re-opened. If the issue has been locked for editing by the time you return to it, please open a new issue and reference this one. Thank you for taking the time to improve LightGBM!

github-actions[bot] commented 2 weeks ago

This issue has been automatically locked since there has not been any recent activity since it was closed. To start a new related discussion, open a new issue at https://github.com/microsoft/LightGBM/issues including a reference to this one.