nerdyrodent / VQGAN-CLIP

Just playing with getting VQGAN+CLIP running locally, rather than having to use colab.

Tensor is not a torch image #20

Closed. tnoya001 closed this issue 3 years ago.

tnoya001 commented 3 years ago

Hi. Thanks for the repo. I was just trying to test it, but I keep running into this:

Traceback (most recent call last):
  File "/home/paperspace/vqgan-clip/generate.py", line 552, in <module>
    train(i)
  File "/home/paperspace/vqgan-clip/generate.py", line 535, in train
    lossAll = ascend_txt()
  File "/home/paperspace/vqgan-clip/generate.py", line 514, in ascend_txt
    iii = perceptor.encode_image(normalize(make_cutouts(out))).float()
  File "/home/paperspace/anaconda3/envs/vqgan/lib/python3.9/site-packages/torchvision/transforms/transforms.py", line 163, in __call__
    return F.normalize(tensor, self.mean, self.std, self.inplace)
  File "/home/paperspace/anaconda3/envs/vqgan/lib/python3.9/site-packages/torchvision/transforms/functional.py", line 201, in normalize
    raise TypeError('tensor is not a torch image.')
TypeError: tensor is not a torch image.

Any idea how to fix it? Really appreciate any help.
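
A likely culprit, though this is an assumption rather than something confirmed in the thread: torchvision releases older than roughly 0.8 restrict transforms.Normalize to a single 3-D (C, H, W) image and raise exactly this TypeError when handed the 4-D (N, C, H, W) batch that make_cutouts returns. A minimal sketch that reproduces the message on such an old install, using the standard CLIP preprocessing statistics (which generate.py's normalize transform is assumed to share):

import torch
from torchvision import transforms

# Standard CLIP preprocessing statistics.
normalize = transforms.Normalize(mean=[0.48145466, 0.4578275, 0.40821073],
                                 std=[0.26862954, 0.26130258, 0.27577711])

# make_cutouts(out) returns a batch of cutouts shaped (cutn, C, H, W).
batch = torch.rand(32, 3, 224, 224)

# torchvision >= ~0.8 normalizes the whole batch; older releases accept only a
# single (C, H, W) image here and raise "tensor is not a torch image."
normalized = normalize(batch)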

nerdyrodent commented 3 years ago

Not seen that one before! What options are you using when you run the script?

tnoya001 commented 3 years ago

It happened with the basic: python generate.py -p "A painting of an apple in a fruit bowl"

But I think I've narrowed it down to either my Paperspace GPU (V100) or the CUDA Toolkit version, as I also ran the script on a Lambda Labs server (2x A6000) and it worked straight out of the box.
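
If the difference does turn out to be the software environment rather than the GPU, one quick thing to compare is the torchvision version on each machine, since only releases from around 0.8 onward normalize batched tensors. A hypothetical workaround for an older install, sketched below, is to bypass transforms.Normalize and normalize the cutout batch with broadcasting; normalize_batch is an illustrative name, not something defined in generate.py.

import torch
import torchvision

print(torchvision.__version__)  # compare between the Paperspace and Lambda Labs machines

# CLIP preprocessing statistics, reshaped for broadcasting over (N, C, H, W).
MEAN = torch.tensor([0.48145466, 0.4578275, 0.40821073]).view(1, 3, 1, 1)
STD = torch.tensor([0.26862954, 0.26130258, 0.27577711]).view(1, 3, 1, 1)

def normalize_batch(batch):
    # batch: float tensor shaped (N, C, H, W), e.g. the output of make_cutouts(out)
    return (batch - MEAN.to(batch.device)) / STD.to(batch.device)

The call in ascend_txt would then read perceptor.encode_image(normalize_batch(make_cutouts(out))).float(), although simply upgrading torchvision to the build that matches the installed torch is probably the cleaner fix.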