Closed · gfardell closed this 4 days ago
Heya,
Admittedly, I don't know why this was changed; it always worked for me. I wonder if it was added for the cases where users install CUDA but don't add it to PATH?
Is there maybe an alternative we can add that joins the two, something like `if built_via_conda: ... elif hasattr(os, "add_dll_directory"): ...`, so that everyone is happy? All of this is admittedly not my strength. A rough sketch of what I mean is below.
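Something along these lines could work as a minimal sketch (not TIGRE's actual code; treating "built via conda" as simply "no CUDA SDK variables set" is an assumption on my part): register the CUDA bin directory only when `CUDA_PATH`/`CUDA_HOME` actually exists, and otherwise leave DLL resolution to whatever the environment already provides.

```python
# Minimal sketch, assuming the explicit registration is only needed when a
# system CUDA SDK install is present; "built via conda" is inferred here
# simply from CUDA_PATH/CUDA_HOME being unset, which is an assumption.
import os
import sys

if sys.platform == "win32" and hasattr(os, "add_dll_directory"):
    cuda_root = os.environ.get("CUDA_PATH") or os.environ.get("CUDA_HOME")
    if cuda_root:
        cuda_bin = os.path.join(cuda_root, "bin")
        if os.path.isdir(cuda_bin):
            # Python 3.8+ on Windows no longer searches PATH for the
            # dependencies of extension modules, so an SDK installed outside
            # the environment has to be registered explicitly.
            os.add_dll_directory(cuda_bin)
    # If neither variable is set (conda or bundled builds), do nothing and
    # let the environment's own cudart be found, as reported above for the
    # conda case.
```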
Hi, I'm running into this issue with distributable builds of WebCT: the embedded cudatoolkit isn't being picked up by TIGRE, which results in a crash on startup on systems that don't already have the CUDA libraries installed.
2024-11-21 11:16:33,520 [INFO] root: Welcome to WebCT 0.1.3
Traceback (most recent call last):
File "app.py", line 24, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "webct\__init__.py", line 223, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "webct\blueprints\app\__init__.py", line 7, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "webct\blueprints\app\routes.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
File "PyInstaller\loader\pyimod02_importers.py", line 419, in exec_module
File "tigre\__init__.py", line 18, in <module>
File "os.py", line 680, in __getitem__
KeyError: 'CUDA_PATH'
This is an issue since, although `cudart` does exist in the distributed package, users are unable to use tigre unless they have also installed the CUDA SDK.
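A minimal sketch of the failure mode, assuming (per the traceback) that the import effectively does an unconditional `os.environ["CUDA_PATH"]` lookup:

```python
# Minimal sketch of the crash above: os.environ["CUDA_PATH"] raises KeyError
# on machines where the CUDA SDK was never installed, whereas
# os.environ.get() returns None and would let the import continue.
import os

os.environ.pop("CUDA_PATH", None)       # simulate a machine without the SDK
try:
    os.environ["CUDA_PATH"]             # what the failing import effectively does
except KeyError as exc:
    print(f"KeyError: {exc}")           # matches the traceback above
print(os.environ.get("CUDA_PATH"))      # -> None, no exception
```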
@gfardell @WYVERN2742 I removed the code as suggested; have a test to see if this fixes the issue. I'll reopen it if it doesn't.
Thanks for the fast response! I'll test when I get more time later 👍
As you know, for use with CIL we build tigre with conda and host it under the ccpi channel.
The environment we build in contains the right version of the cudart redistributable shared library, and this should be found and linked automatically when running within the virtual environment.
However, we have an issue where users need to have the CUDA SDK installed in order to run tigre.
If we comment out these lines: https://github.com/CERN/TIGRE/blob/729f146316c0d214b2f00cfbdc1111490032e35e/Python/tigre/__init__.py#L9-L19 the correct version of cudart is found automatically. With conda it's somewhere like
C:\Users\[USER]\miniforge3\envs\[ENV]\Library\bin
(see the sketch below for how this directory can be derived from the environment itself).
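A small sketch of that point, assuming a Windows conda environment where the redistributable DLLs sit under `Library\bin` of the environment prefix:

```python
# Minimal sketch, assuming a Windows conda environment: the redistributable
# cudart lives under <env>\Library\bin, which can be derived from sys.prefix
# instead of relying on CUDA_PATH/CUDA_HOME pointing at a system SDK.
import os
import sys

conda_bin = os.path.join(sys.prefix, "Library", "bin")
if os.path.isdir(conda_bin):
    cudart_dlls = [f for f in os.listdir(conda_bin) if f.lower().startswith("cudart")]
    print("conda DLL directory:", conda_bin)
    print("cudart DLLs found:", cudart_dlls)
```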
With the lines in place it forces tigre to use the system installation, which isn't an issue if you're building and running on the same system, but our aim is easy redistribution, and `CUDA_HOME` is therefore often `None`. Even if it's not `None`, it may not point to the right CUDA version.
I've read through the issues linked in the code, and I still struggle to see why it would be necessary if PATH is set correctly to something like:
C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin
which contains `cudart`. A quick way to check what can actually be found via PATH is sketched below.
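A minimal diagnostic sketch along those lines (the DLL names are assumptions; they are version-specific, e.g. cudart64_110.dll for CUDA 11.x):

```python
# Minimal sketch: check whether a cudart DLL is reachable through the
# directories on PATH alone (e.g. a conda env's Library\bin or the CUDA
# SDK's bin directory), without consulting CUDA_PATH/CUDA_HOME.
# The candidate names are assumptions and vary by CUDA version.
import ctypes
import ctypes.util

for name in ("cudart64_12", "cudart64_110", "cudart64_102"):
    location = ctypes.util.find_library(name)  # searches PATH directories on Windows
    if location:
        print(f"Found {name} at {location}")
        # Load by full path: Python 3.8+ uses a restricted default DLL
        # search order, so the explicit path avoids surprises.
        ctypes.WinDLL(location)
        break
else:
    print("No cudart DLL found on PATH")
```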
Specifications