Closed. HallettVisual closed this issue 1 week ago.
What happened?
I received this error after updating on 11-06-24.
What did you expect would happen?
I expected it to load without any errors.
Relevant log output
The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.3.1+cu118 with CUDA 1108 (you have 2.5.1+cpu)
    Python 3.10.11 (you have 3.10.11)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
D:\SD MATRIX\Data\Packages\OneTrainer\venv\lib\site-packages\xformers\ops\fmha\flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_fwd")
D:\SD MATRIX\Data\Packages\OneTrainer\venv\lib\site-packages\xformers\ops\fmha\flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
  @torch.library.impl_abstract("xformers_flash::flash_bwd")
D:\SD MATRIX\Data\Packages\OneTrainer\venv\lib\site-packages\xformers\ops\swiglu_op.py:128: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
  def forward(cls, ctx, x, w1, b1, w2, b2, w3, b3):
D:\SD MATRIX\Data\Packages\OneTrainer\venv\lib\site-packages\xformers\ops\swiglu_op.py:149: FutureWarning: `torch.cuda.amp.custom_bwd(args...)` is deprecated. Please use `torch.amp.custom_bwd(args..., device_type='cuda')` instead.
  def backward(cls, ctx, dx5):

Exception in thread Thread-1 (__training_thread_function):
Traceback (most recent call last):
  File "threading.py", line 1016, in _bootstrap_inner
  File "threading.py", line 953, in run
  File "D:\SD MATRIX\Data\Packages\OneTrainer\modules\ui\TrainUI.py", line 552, in __training_thread_function
    ZLUDA.initialize_devices(self.train_config)
  File "D:\SD MATRIX\Data\Packages\OneTrainer\modules\zluda\ZLUDA.py", line 37, in initialize_devices
    if not is_zluda(config.train_device) and not is_zluda(config.temp_device):
  File "D:\SD MATRIX\Data\Packages\OneTrainer\modules\zluda\ZLUDA.py", line 12, in is_zluda
    return torch.cuda.get_device_name(device).endswith("[ZLUDA]")
  File "D:\SD MATRIX\Data\Packages\OneTrainer\venv\lib\site-packages\torch\cuda\__init__.py", line 493, in get_device_name
    return get_device_properties(device).name
  File "D:\SD MATRIX\Data\Packages\OneTrainer\venv\lib\site-packages\torch\cuda\__init__.py", line 523, in get_device_properties
    _lazy_init()  # will define _get_device_properties
  File "D:\SD MATRIX\Data\Packages\OneTrainer\venv\lib\site-packages\torch\cuda\__init__.py", line 310, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
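For anyone hitting the same AssertionError: the log shows a CPU-only PyTorch wheel installed (2.5.1+cpu) where xformers was built against a CUDA wheel (2.3.1+cu118). A quick way to tell the two builds apart from the version string alone is a sketch like the following (the helper name is mine, not from OneTrainer; it relies on the convention that CUDA wheels carry a "+cuXYZ" local version suffix):

```python
def looks_like_cuda_build(version_string: str) -> bool:
    """Heuristic: CUDA-enabled PyTorch wheels carry a "+cuXYZ" local version
    suffix (e.g. "2.3.1+cu118"), while CPU-only wheels carry "+cpu"."""
    return "+cu" in version_string and not version_string.endswith("+cpu")

# The two builds from the log above:
print(looks_like_cuda_build("2.3.1+cu118"))  # True  - the CUDA wheel xformers expects
print(looks_like_cuda_build("2.5.1+cpu"))    # False - the installed CPU-only wheel
```

With torch importable, `torch.cuda.is_available()` is the authoritative check; reinstalling a CUDA-enabled wheel (as the reporter did) resolves the mismatch.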
Output of
pip freeze
No response
@HallettVisual
Please edit in your pip freeze output, then export and upload your config.
My update error. Fixed with a reinstall. Ignore this bug.