Trying to import gputools in a Docker container and receiving the following error:

Dockerfile:

Any idea why the build is failing?
Mhm, can you post the output of clinfo here?
Sure thing!
Number of platforms 1
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 3.0 CUDA 11.8.87
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_device_uuid cl_khr_pci_bus_info cl_khr_external_semaphore cl_khr_external_memory cl_khr_external_semaphore_opaque_fd cl_khr_external_memory_opaque_fd
Platform Host timer resolution 0ns
Platform Extensions function suffix NV
Platform Name NVIDIA CUDA
Number of devices 1
Device Name NVIDIA GeForce RTX 3050 Ti Laptop GPU
Device Vendor NVIDIA Corporation
Device Vendor ID 0x10de
Device Version OpenCL 3.0 CUDA
Driver Version 520.56.06
Device OpenCL C Version OpenCL C 1.2
Device Type GPU
Device Topology (NV) PCI-E, 01:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 20
Max clock frequency 1035MHz
Compute Capability (NV) 8.6
Device Partition (core)
Max number of sub-devices 1
Supported partition types None
Supported affinity domains (n/a)
Max work item dimensions 3
Max work item sizes 1024x1024x64
Max work group size 1024
=== CL_PROGRAM_BUILD_LOG ===
Preferred work group size multiple <getWGsizes:1200: create kernel : error -45>
Warp size (NV) 32
Max sub-groups per work group 0
Preferred / native vector sizes
char 1 / 1
short 1 / 1
int 1 / 1
long 1 / 1
half 0 / 0 (n/a)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 4103208960 (3.821GiB)
Error Correction support No
Max memory allocation 1025802240 (978.3MiB)
Unified memory for Host and Device No
Integrated memory (NV) No
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing No
Fine-grained system sharing No
Atomics No
Minimum alignment for any data type 128 bytes
Alignment of base address 4096 bits (512 bytes)
Preferred alignment for atomics
SVM 0 bytes
Global 0 bytes
Local 0 bytes
Max size for global variable 0
Preferred total size of global vars 0
Global Memory cache type Read/Write
Global Memory cache size 573440 (560KiB)
Global Memory cache line size 128 bytes
Image support Yes
Max number of samplers per kernel 32
Max size for 1D images from buffer 268435456 pixels
Max 1D or 2D image array size 2048 images
Max 2D image size 32768x32768 pixels
Max 3D image size 16384x16384x16384 pixels
Max number of read image args 256
Max number of write image args 32
Max number of read/write image args 0
Max number of pipe args 0
Max active pipe reservations 0
Max pipe packet size 0
Local memory type Local
Local memory size 49152 (48KiB)
Registers per block (NV) 65536
Max number of constant args 9
Max constant buffer size 65536 (64KiB)
Max size of kernel argument 4352 (4.25KiB)
Queue properties (on host)
Out-of-order execution Yes
Profiling Yes
Queue properties (on device)
Out-of-order execution No
Profiling No
Preferred size 0
Max size 0
Max queues on device 0
Max events on device 0
Prefer user sync for interop No
Profiling timer resolution 1000ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Sub-group independent forward progress No
Kernel execution timeout (NV) Yes
Concurrent copy and kernel execution (NV) Yes
Number of async copy engines 2
IL version (n/a)
printf() buffer size 1048576 (1024KiB)
Built-in kernels (n/a)
Device Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_3d_image_writes cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_device_uuid cl_khr_pci_bus_info cl_khr_external_semaphore cl_khr_external_memory cl_khr_external_semaphore_opaque_fd cl_khr_external_memory_opaque_fd
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) No platform
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No platform
clCreateContext(NULL, ...) [default] No platform
clCreateContext(NULL, ...) [other] Success [NV]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) Invalid device type for platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No platform
NOTE: your OpenCL library only supports OpenCL 2.0,
but some installed platforms support OpenCL 3.0.
Programs using 3.0 features may crash
or behave unexpectedly
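(As a quick sanity check of what the ICD loader actually exposes to Python, a pyopencl one-liner works too; this is just a sketch and assumes pyopencl is installed in the environment you care about:)

clinfo -l
python3 -c "import pyopencl as cl; print([(p.name, [d.name for d in p.get_devices()]) for p in cl.get_platforms()])"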
I did not figure out the original issue using a tensorflow/gpu base image, but I was able to get it to work with the following Dockerfile, in case anyone else has similar issues:
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES all
ARG NVIDIA_DRIVER_VERSION=515
# libnvidia-compute provides the NVIDIA OpenCL library and its ICD file
RUN apt-get update && apt-get install -y --no-install-recommends \
clinfo \
libnvidia-compute-${NVIDIA_DRIVER_VERSION} \
git \
wget \
ca-certificates \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
ENV PATH="/root/miniconda3/bin:${PATH}"
ARG PATH="/root/miniconda3/bin:${PATH}"
ARG C_ENV="csb"
# install Miniconda and create the env; cudatoolkit 11.2 / cuDNN 8.1 are the versions TensorFlow 2.10 expects
RUN wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh \
&& mkdir /root/.conda \
&& bash Miniconda3-latest-Linux-x86_64.sh -b \
&& rm -f Miniconda3-latest-Linux-x86_64.sh \
&& echo "Running $(conda --version)" && \
conda init bash && \
. /root/.bashrc && \
conda update conda && \
conda create -n ${C_ENV} python=3.8.13 && \
conda activate ${C_ENV} && \
conda install pip && \
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
RUN conda init bash && \
. /root/.bashrc && \
conda activate ${C_ENV} && \
conda install -c conda-forge scikit-image scikit-learn \
matplotlib libstdcxx-ng && \
pip install tensorflow==2.10.1 stardist gputools edt csbdeep
# make the NVIDIA ICD file visible to the OpenCL loader inside the conda env
RUN mkdir -p /root/miniconda3/envs/${C_ENV}/etc/OpenCL/vendors && \
    cp /etc/OpenCL/vendors/* /root/miniconda3/envs/${C_ENV}/etc/OpenCL/vendors/
RUN echo "conda activate ${C_ENV}" >> ~/.bashrc
ENTRYPOINT ["/bin/bash"]
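Build and run it along these lines (the image tag is arbitrary, and --gpus all assumes the NVIDIA Container Toolkit is set up on the host):

docker build -t csb-gputools .
docker run --rm -it --gpus all csb-gputools
# inside the container:
clinfo -l
python -c "import gputools"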
For me, libstdcxx-ng was extremely important for solving the ImportError: /lib64/libstdc++.so.6: version GLIBCXX_3.4.21 not found error. And of course copying the .icd files so the NVIDIA OpenCL platform is visible inside the conda environment.
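If the platform still doesn't show up, checking the ICD registry inside the running container is a quick test (nvidia.icd normally just names the loader library):

ls /etc/OpenCL/vendors
cat /etc/OpenCL/vendors/nvidia.icd
# typically prints: libnvidia-opencl.so.1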
EDIT: Today's (11/18/22) TensorFlow update broke this container (for me). I have edited the Dockerfile to pin the latest version of TensorFlow that works (again, for me).
Great, thanks for looking into this!