Moltennn opened this issue 2 years ago
Your ldm model should not be using pytorch-lightning to load... try uninstalling pytorch-lightning, maybe.
That didn't work; it just said something like "missing module pytorch-lightning". Anyway, I tried purging the whole container (or whatever those are called) and reinstalling. No success there either. This whole shenanigan was done on WSL Ubuntu.
So I decided to install everything on Windows, and got this error. Guess my poor old GTX 970 isn't fit for this :D
python sample.py --model_path finetune.pt --batch_size 1 --num_batches 1 --text "a cyberpunk girl with a scifi neuralink device on her head"
Using device: cuda:0
Traceback (most recent call last):
File "sample.py", line 284, in <module>
ldm.to(device)
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\pytorch_lightning\core\mixins\device_dtype_mixin.py", line 111, in to
return super().to(*args, **kwargs)
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 927, in to
return self._apply(convert)
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 579, in _apply
module._apply(fn)
[Previous line repeated 3 more times]
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 602, in _apply
param_applied = fn(param)
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 925, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
RuntimeError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.47 GiB already allocated; 0 bytes free; 3.55 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
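The error message itself points at `max_split_size_mb`. A minimal sketch of what trying that looks like (the env var is a standard PyTorch allocator setting; the value 128 is only an illustration, not a recommendation from this thread):

```shell
# PYTORCH_CUDA_ALLOC_CONF configures PyTorch's CUDA caching allocator;
# max_split_size_mb limits block splitting to reduce fragmentation.
# 128 MiB here is an arbitrary example value.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# On Windows cmd the equivalent is:
#   set PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
# Then rerun the failing command:
# python sample.py --model_path finetune.pt --batch_size 1 --num_batches 1 --text "a cyberpunk girl with a scifi neuralink device on her head"
```

Note this only helps when reserved memory greatly exceeds allocated memory; with 3.47 GiB already allocated on a 4 GiB card there may simply be no headroom left.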
Then I tried with the --cpu parameter to see how it'd go:
python sample.py --cpu --model_path finetune.pt --batch_size 1 --num_batches 1 --text "a cyberpunk girl with a scifi neuralink device on her head"
Using device: cpu
Traceback (most recent call last):
File "sample.py", line 522, in <module>
do_run()
File "sample.py", line 307, in do_run
text_emb = bert.encode([args.text]*args.batch_size).to(device).float()
File "C:\Users\Administrator\txt2img\glid-3-xl\encoders\modules.py", line 99, in encode
return self(text)
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Administrator\txt2img\glid-3-xl\encoders\modules.py", line 94, in forward
z = self.transformer(tokens, return_embeddings=True)
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\Users\Administrator\txt2img\glid-3-xl\encoders\x_transformer.py", line 609, in forward
x = self.token_emb(x)
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torch\nn\modules\sparse.py", line 158, in forward
return F.embedding(
File "C:\ProgramData\Anaconda3\envs\ldm\lib\site-packages\torch\nn\functional.py", line 2199, in embedding
return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)
It's not your GPU; I have the same issue with a 3090.
These versions resolved the issue:
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 -f https://download.pytorch.org/whl/torch_stable.html
I can't figure out why I'm getting this error.
Trying to run with CUDA_LAUNCH_BLOCKING enabled
pip freeze
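For reference, a sketch of those two debugging steps (both are standard tools, nothing repo-specific; `environment.txt` is just a hypothetical file name):

```shell
# CUDA_LAUNCH_BLOCKING=1 forces synchronous kernel launches, so a CUDA error
# surfaces in the traceback at the call that actually caused it, rather than
# at some later, unrelated operation.
export CUDA_LAUNCH_BLOCKING=1

# Capture the exact installed package versions to include in the bug report.
pip freeze > environment.txt
```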