DDMAL / Rodan

:dragon_face: A web-based workflow engine.
https://rodan2.simssa.ca/

current prod CUDA version too new for our GPU container #1227

Open · homework36 opened this issue 1 week ago

homework36 commented 1 week ago

Our code runs on TensorFlow 2.5.1, which does not support the latest CUDA version, 12.4 (I believe Arbutus upgraded it from 11.4 to 12.4). As a result, the current production deployment cannot use the GPU. According to the official compatibility guide, there is now a mismatch in package versions; a quick way to confirm this from inside the container is sketched below.

[Screenshot, 2024-11-15: tested TensorFlow/CUDA/cuDNN version combinations from the official guide]
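For reference, the CUDA/cuDNN versions a given TensorFlow wheel was built against can be printed from inside the container (a minimal sketch; `tf.sysconfig.get_build_info()` has been available since TF 2.3):

    import tensorflow as tf

    # Runtime TF version, plus the CUDA/cuDNN versions this wheel was compiled against.
    print(f"TensorFlow version: {tf.__version__}")
    build = tf.sysconfig.get_build_info()
    print(f"Built against CUDA:  {build.get('cuda_version')}")   # TF 2.5.1 reports 11.2
    print(f"Built against cuDNN: {build.get('cudnn_version')}")

TF 2.5.1 was built against CUDA 11.2, so a host stack on 12.4 is well outside its tested configurations.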

We will need to spend a decent amount of time updating the relevant code to support the latest GPU stack, and Python and other packages will also need to be updated, which will be a huge pain. There are ongoing online discussions about failures to use TensorFlow (many versions) with CUDA 12.4, so it looks like we cannot do much about this on our end. We have to blame Arbutus (a lot, as always).

homework36 commented 1 week ago

Also, it seems that every time Arbutus updates the GPU driver, we lose GPU access and have to reinstall and reconfigure everything. I don't want to say much, but I don't think any of us like this.
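For what it's worth, here is a quick sanity check we could run after each driver update before digging deeper (a hypothetical helper script, not part of the repo; it only shells out to nvidia-smi and asks TensorFlow whether it can see a GPU):

    import shutil
    import subprocess

    def gpu_sanity_check():
        """Check driver visibility (nvidia-smi) and TensorFlow GPU visibility."""
        # 1. Is the NVIDIA driver userland reachable at all?
        if shutil.which("nvidia-smi") is None:
            print("nvidia-smi not found: the driver (or its PATH entry) is gone")
            return
        result = subprocess.run(["nvidia-smi"], capture_output=True, text=True)
        print(f"nvidia-smi exit code: {result.returncode}")

        # 2. Can TensorFlow actually see the GPU? (Imported late: TF is slow to load.)
        import tensorflow as tf
        print(f"TensorFlow sees GPUs: {tf.config.list_physical_devices('GPU') or 'none'}")

    if __name__ == "__main__":
        gpu_sanity_check()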

homework36 commented 5 days ago

We had CUDA 11.4 in prod before and it was working. But I'm pretty sure we also upgraded to 12.x at some point and were still able to run with the GPU.

homework36 commented 4 days ago
  1. Test the latest TensorFlow with the official Docker image:

    docker run --gpus all -it tensorflow/tensorflow:latest-gpu bash

    Inside the container, in a Python shell:

    import tensorflow as tf
    print(f"TensorFlow version: {tf.__version__}")
    # Check if a GPU is available
    tf.config.list_physical_devices('GPU')

    An empty list was returned, and the same CUDA error message appeared:

    2024-11-19 17:15:57.152690: E external/local_xla/xla/stream_executor/cuda/cuda_driver.cc:152] failed call to cuInit: INTERNAL: CUDA error: Failed call to cuInit: CUDA_ERROR_UNKNOWN: unknown error

    When importing TensorFlow, the following messages came up:

    2024-11-19 17:15:23.690475: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
    2024-11-19 17:15:23.715532: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
    WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
    E0000 00:00:1732036523.737948      12 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
    E0000 00:00:1732036523.744927      12 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
    2024-11-19 17:15:23.772129: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
    To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.

    I was able to call nvidia-smi successfully inside this container, and the TensorFlow version there is 2.18.0.

  2. Test the NVIDIA Container Toolkit installation (v1.17.2):

    sudo docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi

    This also failed with the default CUDA toolkit and needs further investigation, so it may not be a compatibility issue alone; something may also be wrong at the driver or hardware level (see the probe sketched below).
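One way to separate a TensorFlow/CUDA version mismatch from a broken driver or container-toolkit setup is to call the CUDA driver API directly, bypassing TensorFlow entirely. A minimal sketch using ctypes against libcuda.so.1 (the driver library, which ships with the NVIDIA driver rather than with any CUDA toolkit):

    import ctypes

    # Load the CUDA *driver* library; it is installed with the NVIDIA driver,
    # independent of the CUDA toolkit or the TensorFlow build.
    cuda = ctypes.CDLL("libcuda.so.1")

    # cuInit(0) is the exact call TensorFlow reports as failing above.
    status = cuda.cuInit(0)
    print(f"cuInit returned {status} (0 == CUDA_SUCCESS)")

    # Driver API version, e.g. 12040 for CUDA 12.4.
    version = ctypes.c_int()
    cuda.cuDriverGetVersion(ctypes.byref(version))
    print(f"Driver API version: {version.value}")

    # Device count; an error or 0 here points at the driver / container
    # runtime rather than at TensorFlow.
    count = ctypes.c_int()
    status = cuda.cuDeviceGetCount(ctypes.byref(count))
    print(f"cuDeviceGetCount returned {status}, devices: {count.value}")

If cuInit fails here as well, the problem sits below TensorFlow (driver, container toolkit, or hardware); if it succeeds, the TF/CUDA pairing is the more likely culprit.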