nerdyrodent / VQGAN-CLIP

Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.

RuntimeError: cusolver error: CUSOLVER_STATUS_INTERNAL_ERROR, when calling `cusolverDnCreate(handle)` #6

Closed vexersa closed 3 years ago

vexersa commented 3 years ago

Hey!

Thanks for this, I am so ready to create bizarreness.

Hardware: Ryzen 7 3700X, 32 GB RAM, RTX 2070 Super

OS: Windows 10 Pro

I'm getting the below error when running generate.py:

```
python generate.py -p "Yee"
```

Output:

```
(vqgan) PS C:\Users\andre\anaconda3\envs\vqgan\VQGAN-CLIP> python generate.py -p "Yee"
Working with z of shape (1, 256, 16, 16) = 65536 dimensions.
loaded pretrained LPIPS loss from taming/modules/autoencoder/lpips\vgg.pth
VQLPIPSWithDiscriminator running with hinge loss.
Restored from checkpoints/vqgan_imagenet_f16_16384.ckpt
C:\Users\andre\anaconda3\envs\vqgan\lib\site-packages\torchvision\transforms\transforms.py:280: UserWarning: Argument interpolation should be of type InterpolationMode instead of int. Please, use InterpolationMode enum.
  warnings.warn(
Using device: cuda:0
Optimising using: Adam
Using text prompts: ['Yee']
Using seed: 329366907029900
0it [00:01, ?it/s]
Traceback (most recent call last):
  File "C:\Users\andre\anaconda3\envs\vqgan\VQGAN-CLIP\generate.py", line 461, in <module>
    train(i)
  File "C:\Users\andre\anaconda3\envs\vqgan\VQGAN-CLIP\generate.py", line 444, in train
    lossAll = ascend_txt()
  File "C:\Users\andre\anaconda3\envs\vqgan\VQGAN-CLIP\generate.py", line 423, in ascend_txt
    iii = perceptor.encode_image(normalize(make_cutouts(out))).float()
  File "C:\Users\andre\anaconda3\envs\vqgan\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\andre\anaconda3\envs\vqgan\VQGAN-CLIP\generate.py", line 241, in forward
    batch = self.augs(torch.cat(cutouts, dim=0))
  File "C:\Users\andre\anaconda3\envs\vqgan\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\andre\anaconda3\envs\vqgan\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "C:\Users\andre\anaconda3\envs\vqgan\lib\site-packages\torch\nn\modules\module.py", line 1051, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\andre\anaconda3\envs\vqgan\lib\site-packages\kornia\augmentation\base.py", line 245, in forward
    output = self.apply_func(in_tensor, in_transform, self._params, return_transform)
  File "C:\Users\andre\anaconda3\envs\vqgan\lib\site-packages\kornia\augmentation\base.py", line 210, in apply_func
    output[to_apply] = self.apply_transform(in_tensor[to_apply], params, trans_matrix[to_apply])
  File "C:\Users\andre\anaconda3\envs\vqgan\lib\site-packages\kornia\augmentation\augmentation.py", line 684, in apply_transform
    return warp_affine(
  File "C:\Users\andre\anaconda3\envs\vqgan\lib\site-packages\kornia\geometry\transform\imgwarp.py", line 192, in warp_affine
    dst_norm_trans_src_norm: torch.Tensor = normalize_homography(M_3x3, (H, W), dsize)
  File "C:\Users\andre\anaconda3\envs\vqgan\lib\site-packages\kornia\geometry\transform\homography_warper.py", line 380, in normalize_homography
    src_pix_trans_src_norm = _torch_inverse_cast(src_norm_trans_src_pix)
  File "C:\Users\andre\anaconda3\envs\vqgan\lib\site-packages\kornia\utils\helpers.py", line 48, in _torch_inverse_cast
    return torch.inverse(input.to(dtype)).to(input.dtype)
RuntimeError: cusolver error: CUSOLVER_STATUS_INTERNAL_ERROR, when calling `cusolverDnCreate(handle)`
```

vexersa commented 3 years ago

I think I've solved this; it looks to be related to the available video memory on my GPU.

Solved by passing `-s 380 380` when running the script.
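For context, a smaller output size shrinks every image tensor (and the activations and cutouts derived from it) roughly quadratically, which is why `-s 380 380` can fit where a larger size runs out of VRAM. A rough back-of-envelope sketch (the per-pixel cost below is a simplifying assumption; the real footprint is dominated by model activations, and 512 is just an illustrative larger size, not necessarily the script's default):

```python
def image_tensor_bytes(width, height, channels=3, dtype_bytes=4):
    """Memory for one fp32 RGB image tensor (illustrative only;
    the actual VQGAN+CLIP usage is many multiples of this)."""
    return width * height * channels * dtype_bytes

# Halving-ish the side length cuts the per-image footprint almost in half here,
# and the savings compound across cutouts and intermediate activations.
print(f"512x512 tensor: {image_tensor_bytes(512, 512) / 2**20:.2f} MiB")
print(f"380x380 tensor: {image_tensor_bytes(380, 380) / 2**20:.2f} MiB")
```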

thehappydinoa commented 3 years ago

I just ran into this as well; it would be great to have this in the README. I will make a PR.