JethroChow opened this issue 4 months ago
The solution is to downgrade the torch version to v2.1.2.
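A minimal sketch of that downgrade with pip; the `cu118` index URL and the paired torchvision version are assumptions here and should be adjusted to match your local CUDA toolkit:

```shell
# Pin torch to 2.1.2 (torchvision 0.16.2 is the matching release);
# adjust the cu11x/cu12x index URL to your CUDA version
pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu118
```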
On Google Colab's T4, diffusers works with PyTorch 2.3. Could you verify that PyTorch 2.3 and its CUDA components are installed correctly in your environment? You could also try installing into a fresh environment. Lastly, could you share your GPU's model name and its VRAM?
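One quick way to check that the CUDA build of PyTorch is usable is a small sanity script like the sketch below (output values shown in the comments are examples, not guarantees):

```python
import torch

# Wheel version and the CUDA version it was built against
print(torch.__version__)          # e.g. "2.3.0+cu118"
print(torch.version.cuda)         # e.g. "11.8"; None for a CPU-only build
print(torch.cuda.is_available())  # should be True on a working GPU setup

# Enumerate visible GPUs with their names and total memory
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(i, props.name, f"{props.total_memory / 1024**3:.1f} GiB")
```

If `torch.cuda.is_available()` is False, or the reported CUDA version does not match the driver shown by `nvidia-smi`, the install itself is the likely culprit.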
(base) tides@VM-112-2-ubuntu:~/StableDiffusionNotebooks$ nvidia-smi
Fri May 17 13:54:15 2024
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.105.17 Driver Version: 525.105.17 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA A10 On | 00000000:0B:01.0 Off | 0 |
| 0% 30C P8 14W / 150W | 0MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 NVIDIA A10 On | 00000000:0B:02.0 Off | 0 |
| 0% 31C P8 14W / 150W | 2MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 NVIDIA A10 On | 00000000:0B:03.0 Off | 0 |
| 0% 30C P8 14W / 150W | 0MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 NVIDIA A10 On | 00000000:0B:04.0 Off | 0 |
| 0% 29C P8 13W / 150W | 0MiB / 23028MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
You also tried installing torch in a fresh environment, right?
yes
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Describe the bug
When running the StableDiffusionXLPipeline with a specific model file (Clay_SDXL.safetensors), inference works correctly on CPU but results in a segmentation fault when run on GPU. Below are the specific steps and configurations that lead to this issue.
Reproduction
import torch
from diffusers import StableDiffusionXLPipeline

pipeline = StableDiffusionXLPipeline.from_single_file(
    '/data1/tides/sd/SDXL.safetensors', torch_dtype=torch.float16
)
pipeline.to('cuda:2')
result_image = pipeline(prompt='girl')
Logs
System Info
Python Version: 3.9.19
PyTorch Version: 2.3.0+cu118
CUDA Version: 11.8
Diffusers Version: 0.27.2
Operating System: Ubuntu
Who can help?
No response