Open ywc1026 opened 5 years ago
Maybe your picture is a grayscale image; you should use a color picture.
Maybe your picture is a grayscale image; you should use a color picture. But this all goes through the MNIST loader interface; where can I download a color version?
I already changed it to transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize(mean=0.5, std=0.5)]), but then got another error: too many indices for tensor of dimension 0. I also tried changing for i, (images, _) in enumerate(data_loader): to for i, images in enumerate(data_loader):, which didn't work either.
This change fixed it: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
Thanks 😂
This change fixed it: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
Changing it as above does indeed work.
If you are still having the problem, use this code in place of the transform above: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
@VinayMatcha transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
This does not work for me; I still get an error on this line. I think the parentheses are wrong.
@ladycatusa The snippet provided by @VinayMatcha does indeed produce the correct output shape. I recommend looking at this issue: https://github.com/fungtion/DANN/issues/8 for an explanation of why this occurs.
I just ran into the same error message in a completely unrelated context, and changing the version of torchvision to 0.2.1 fixed it for me. Maybe this helps :)
This change fixed it: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
It works! Thanks! By the way, is everyone here doing the Udacity course? Is there a study group or anything like that?
Let me clarify: if the image has three channels, you need three numbers for the mean. For example, if the image is RGB, the mean should be [0.5, 0.5, 0.5], so R, G, and B are each normalized with 0.5. If the image is grayscale, it has only one channel, so the mean should be [0.5], and that single channel is normalized with 0.5.
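To make the two cases concrete, here is a minimal sketch (not from the original repo; the dataset root and batch size are placeholders):

```python
import torchvision.transforms as transforms
from torchvision import datasets
from torch.utils.data import DataLoader

# MNIST is grayscale: ToTensor() yields [1, 28, 28], so mean/std need one entry each.
mnist_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5], [0.5]),
])

# An RGB dataset yields [3, H, W] tensors, so mean/std need three entries each.
rgb_transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])

mnist = datasets.MNIST(root='./data', train=True, download=True, transform=mnist_transform)
data_loader = DataLoader(mnist, batch_size=128, shuffle=True)
```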
This change fixed it: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
After changing it like that, I got another error: IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number. It points to the line d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.data[0]*(1./(i+1.)). How can I fix it? Thanks.
This change fixed it: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
After changing it like that, I got another error: IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number. It points to the line d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.data[0]*(1./(i+1.)). How can I fix it? Thanks.
Are you using pytorch >= 1.0 or not? In pytorch 1.0, loss.item() replaces loss.data[0], but that alone should only give a user warning, so it's odd that you got an error; maybe there is a further issue.
Anyway, please first change your code to d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.item()*(1./(i+1.))
If any issue happens, please show us a more detailed log.
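For anyone else hitting the same IndexError, a minimal sketch of the corrected running-average update, assuming the names d_losses and d_loss from the snippet above (the loop bounds and loss value are placeholders):

```python
import torch

num_epochs, steps_per_epoch = 2, 10
d_losses = torch.zeros(num_epochs)   # running average of the discriminator loss per epoch

for epoch in range(num_epochs):
    for i in range(steps_per_epoch):
        # ... compute d_loss from the discriminator here ...
        d_loss = torch.tensor(0.7)   # placeholder 0-dim loss tensor

        # On PyTorch >= 0.4 a loss is a 0-dim tensor, so use .item()
        # instead of .data[0] to pull out the Python float.
        d_losses[epoch] = d_losses[epoch] * (i / (i + 1.)) + d_loss.item() * (1. / (i + 1.))
```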
This change fixed it: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
After changing it like that, I got another error: IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number. It points to the line d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.data[0]*(1./(i+1.)). How can I fix it? Thanks.
Are you using pytorch >= 1.0 or not? In pytorch 1.0, loss.item() replaces loss.data[0], but that alone should only give a user warning, so it's odd that you got an error; maybe there is a further issue. Anyway, please first change your code to d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.item()*(1./(i+1.)). If any issue happens, please show us a more detailed log.
Yes, I'm using pytorch 1.0. You mean that I should change the code d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.data[0]*(1./(i+1.)) to the code below: d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.item()*(1./(i+1.))? Thank you very much! I will try it.
This change fixed it: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
After changing it like that, I got another error: IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number. It points to the line d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.data[0]*(1./(i+1.)). How can I fix it? Thanks.
Are you using pytorch >= 1.0 or not? In pytorch 1.0, loss.item() replaces loss.data[0], but that alone should only give a user warning, so it's odd that you got an error; maybe there is a further issue. Anyway, please first change your code to d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.item()*(1./(i+1.)). If any issue happens, please show us a more detailed log.
Nice, it works! I replaced ".data[0]" with ".item()" and the code now works.
This change fixed it: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
After changing it like that, I got another error: IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number. It points to the line d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.data[0]*(1./(i+1.)). How can I fix it? Thanks.
Are you using pytorch >= 1.0 or not? In pytorch 1.0, loss.item() replaces loss.data[0], but that alone should only give a user warning, so it's odd that you got an error; maybe there is a further issue.
Anyway, please first change your code to d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.item()*(1./(i+1.))
If any issue happens, please show us a more detailed log.
I got the following error: RuntimeError: Given groups=1, weight of size 16 3 3 3, expected input[128, 1, 28, 28] to have 3 channels, but got 1 channels instead
@ShuuTsubaki I also ran into this problem. Did you find a way to fix it?
Downgrading torch and torchvision to 0.2.0 and 0.2.1 solved this issue for me.
This change fixed it: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
After changing it like that, I got another error: IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number. It points to the line d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.data[0]*(1./(i+1.)). How can I fix it? Thanks.
Just remove d_loss.data[0] and write d_loss.data instead; it works for me.
This fixed the error for me:
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)) ])
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]) Why is this still not working for me?
I just ran into the same error message in a completely unrelated context, and changing the version of torchvision to 0.2.1 fixed it for me. Maybe this helps :)
it is useful!
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) Try this; I was also facing the same problem, but now it's fixed.
It's working now. I had written Normalize((0.5), (0.5)) with plain scalars rather than lists/tuples. Both transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])]) and transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))]) work.
This change fixed it: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
worked for me. Thanks.
This change fixed it: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize([0.5], [0.5])])
After changing it like that, I got another error: IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number. It points to the line d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.data[0]*(1./(i+1.)). How can I fix it? Thanks.
Are you using pytorch >= 1.0 or not? In pytorch 1.0, loss.item() replaces loss.data[0], but that alone should only give a user warning, so it's odd that you got an error; maybe there is a further issue. Anyway, please first change your code to d_losses[epoch] = d_losses[epoch]*(i/(i+1.)) + d_loss.item()*(1./(i+1.)). If any issue happens, please show us a more detailed log.
I got the following error: RuntimeError: Given groups=1, weight of size 16 3 3 3, expected input[128, 1, 28, 28] to have 3 channels, but got 1 channels instead
Has anyone run into the problem that d_loss gradually approaches 0, which is not what we expect? Does anyone know how to fix it?
If you are still having the problem, use this code in place of the transform above: transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])
Can you please explain briefly? What went wrong, and why does this work?
The model has three input channels, so only changing the data loading is not enough. How can I convert the images to three channels?
I just transformed all the images to grayscale and boom, it worked like a charm.
This is my code: transform = transforms.Compose([transforms.Grayscale(), transforms.Resize((28,28)), transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)), ])
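If you would rather keep a model that expects three input channels (as asked above) instead of switching it to one channel, one possible approach, just a sketch and not taken from this thread, is to let torchvision replicate the single MNIST channel three times:

```python
import torchvision.transforms as transforms
from torchvision import datasets

# Grayscale(num_output_channels=3) repeats the single channel, so each image
# becomes [3, 28, 28] and matches a conv layer that expects 3 input channels.
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),
    transforms.ToTensor(),
    transforms.Normalize([0.5, 0.5, 0.5], [0.5, 0.5, 0.5]),
])

mnist = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
```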
When I run the GAN code, I get a runtime error: output with shape [1, 28, 28] doesn't match the broadcast shape [3, 28, 28]. How do I fix it?