Try installing cuda 11.8
https://developer.nvidia.com/cuda-11-8-0-download-archive
Make sure to start a fresh command window after that, activate, then try again.
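If you want to double-check before relaunching training, something like this from the activated venv should confirm the environment is picking things up (a minimal sketch; `CUDA_PATH` is the variable the Windows CUDA installer normally sets, and the exact value on your machine may differ):

```python
import os
import torch

# PyTorch's own CUDA build and whether the GPU/driver are usable:
print("torch:", torch.__version__, "built for CUDA", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())

# The CUDA 11.8 installer normally sets CUDA_PATH; a fresh command window
# is needed before the venv sees the updated environment variables.
print("CUDA_PATH:", os.environ.get("CUDA_PATH"))
```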
> Try installing cuda 11.8
I was encountering the same issue and this resolved it for me. Thanks Victor.
@dsienra is this still an issue? Did you try installing cuda 11.8 from the link?
Pretty sure this will correct the error. Going to close. If this is still a problem please open a new issue.
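For reference, the NameError at the bottom of the log below usually means bitsandbytes could not load its CUDA binary, so the table of 8-bit optimizer kernels in bitsandbytes/functional.py never gets defined; installing CUDA 11.8 gives it a runtime it can find. A quick probe from the venv (a rough sketch, not an official bitsandbytes check):

```python
# Importing bitsandbytes usually prints its CUDA setup messages,
# which already hint at whether the binary loaded.
import bitsandbytes.functional as F

# str2optimizer8bit_blockwise is only defined when the CUDA binary loads;
# if it is missing, the 8-bit optimizer raises the NameError shown in the log.
print("8-bit optimizer kernels loaded:", hasattr(F, "str2optimizer8bit_blockwise"))
```

If this still prints False after installing CUDA 11.8 and opening a fresh window, the problem is in the bitsandbytes setup rather than the trainer itself.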
grad_accum: 1
batch_size: 1
epoch_len: 176
Epochs:   0%| | 0/200 [00:00<?, ?it/s, vram=5509/12288 MB gs:0]
C:\EveryDream2trainer\venv\lib\site-packages\xformers\ops\fmha\flash.py:339: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  and inp.query.storage().data_ptr() == inp.key.storage().data_ptr()
Something went wrong, attempting to save model
No model to save, something likely blew up on startup, not saving
Traceback (most recent call last):
  File "C:\EveryDream2trainer\train.py", line 916, in <module>
    main(args)
  File "C:\EveryDream2trainer\train.py", line 844, in main
    raise ex
  File "C:\EveryDream2trainer\train.py", line 756, in main
    ed_optimizer.step(loss, step, global_step)
  File "C:\EveryDream2trainer\optimizer\optimizers.py", line 98, in step
    self.scaler.step(optimizer)
  File "C:\EveryDream2trainer\venv\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 374, in step
    retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs)
  File "C:\EveryDream2trainer\venv\lib\site-packages\torch\cuda\amp\grad_scaler.py", line 290, in _maybe_opt_step
    retval = optimizer.step(*args, **kwargs)
  File "C:\EveryDream2trainer\venv\lib\site-packages\torch\optim\lr_scheduler.py", line 69, in wrapper
    return wrapped(*args, **kwargs)
  File "C:\EveryDream2trainer\venv\lib\site-packages\torch\optim\optimizer.py", line 280, in wrapper
    out = func(*args, **kwargs)
  File "C:\EveryDream2trainer\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\EveryDream2trainer\venv\lib\site-packages\bitsandbytes\optim\optimizer.py", line 263, in step
    self.update_step(group, p, gindex, pindex)
  File "C:\EveryDream2trainer\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\EveryDream2trainer\venv\lib\site-packages\bitsandbytes\optim\optimizer.py", line 504, in update_step
    F.optimizer_update_8bit_blockwise(
  File "C:\EveryDream2trainer\venv\lib\site-packages\bitsandbytes\functional.py", line 981, in optimizer_update_8bit_blockwise
    str2optimizer8bit_blockwise[optimizer_name][0](
NameError: name 'str2optimizer8bit_blockwise' is not defined
Epochs: 0%| | 0/200 [00:11<?, ?it/s, vram=5509/12288 MB gs:0]
(venv) C:\EveryDream2trainer>