-
### Your current environment
The output of `python collect_env.py`
```text
$ python collect_env.py
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
…
```
-
**Describe the bug**
1. GPU device **not** found in running RapidsAI Docker container in WSL
2. `nvidia-smi` **can** see the device
2.1. in Windows
2.2. in WSL
2.3. from within the runnin…
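A minimal in-container check helps pin down which side is broken; this is a sketch only, assuming the RAPIDS image ships `numba` (standard RAPIDS images do):

```python
# Sketch: run inside the RAPIDS container to see what the CUDA runtime exposes,
# independent of what nvidia-smi reports on the Windows host or in WSL.
from numba import cuda

print("CUDA available:", cuda.is_available())  # False reproduces the report
if cuda.is_available():
    cuda.detect()  # prints each visible device and its compute capability
```

If this prints `False` while `nvidia-smi` works on the host, the container was most likely started without `--gpus all`, or the NVIDIA container runtime is not configured for Docker inside WSL.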
-
### Check for existing issues
- [X] Completed
### Describe the bug / provide steps to reproduce it
Whatever I write, if I try to click on the run button, it will run but it doesn't show any outp…
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch…
```
-
```text
Traceback (most recent call last):
  File "E:\ComfyUI_3D\python_embeded\Lib\site-packages\torch\utils\cpp_extension.py", line 2105, in _run_ninja_build
    subprocess.run(
  File "subprocess.py", l…
```
-
### System Info
- `transformers` version: 4.45.2
- Platform: Linux-5.4.0-132-generic-x86_64-with-glibc2.31
- Python version: 3.12.7
- Huggingface_hub version: 0.25.1
- Safetensors version: 0.4.…
-
I need to install nvidia-container-toolkit offline on a CentOS 7 server, due to constraints outside my control. The server already has Docker 20 installed. How can I get all the dependencies and install the nvidia-container-toolkit …
-
### Your current environment
The output of `python collect_env.py`
```text
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTor…
```
-
A freshly installed FaH client (latest standard release) initially finds and successfully runs WUs for the 4070 Super, then hours later, after 1-3 WUs, loses the OpenCL info, the CUDA info, or both, and claims plat…
-
I have this error: `Error processing prompt (see logs with DEBUG>=2): module load failed with status code 222: CUDA_ERROR_UNSUPPORTED_PTX_VERSION`, trying Llama 3.1 70b on a server with CUDA 12.6 / Nvi…
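`CUDA_ERROR_UNSUPPORTED_PTX_VERSION` generally means the PTX in the module being loaded was produced by a newer CUDA toolkit than the installed driver understands. A quick way to compare the two sides, sketched here on the assumption that `pynvml` (nvidia-ml-py) and PyTorch are available in the same environment:

```python
# Sketch: compare the installed driver against the CUDA toolkit the framework
# was built with. A toolkit newer than the driver supports typically triggers
# CUDA_ERROR_UNSUPPORTED_PTX_VERSION when a PTX module is loaded.
import pynvml
import torch

pynvml.nvmlInit()
print("NVIDIA driver version:", pynvml.nvmlSystemGetDriverVersion())
print("CUDA toolkit torch was built with:", torch.version.cuda)
pynvml.nvmlShutdown()
```

Upgrading the driver, or using a build compiled against a toolkit version the driver supports, are the usual fixes.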