Open sw5park opened 3 years ago
The size of input and label must be the same
I also hit this problem: the input image and label are the same size after 'scale_width' preprocessing, so it seems to be a bug in the Discriminator.
Hello, how did you solve this problem in the end? I ran into the same issue.
Resizing the images to a consistent size fixed it for me:
b = cv2.imread(j)
h = b.shape[0]
w = b.shape[1]
# scale height to the 1024-wide aspect ratio, rounded down to a multiple of 32
h = int(1024 / w * h // 32 * 32)
b = cv2.resize(b, (1024, h))
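The rounding in the snippet above can be factored into a small helper (a hypothetical `target_size` function, not part of the repo): it scales the height to preserve the aspect ratio at width 1024, then snaps it down to a multiple of 32, which appears to be the factor the pix2pixHD generator downsamples by.

```python
def target_size(h, w, target_w=1024, multiple=32):
    # Scale height to preserve the aspect ratio at the target width,
    # then round down to the nearest multiple so the generator's
    # down/upsampling path reproduces the spatial size exactly.
    new_h = int(target_w / w * h) // multiple * multiple
    return target_w, new_h
```

For example, an 852×1024 image would be resized to 832×1024, matching the dimensions discussed later in this thread.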
I encountered the same problem.
My train script is
python train.py --name mydata --dataroot ./datasets/mydata --batchSize 2 --gpu_ids 4,5 --label_nc 3 --no_instance
The input and label are the same size; both are 512×512.
I also hit this problem: the input image and label are the same size after 'scale_width' preprocessing, so it seems to be a bug in the Discriminator.
In pix2pixHD_model.py, line 163:
fake_image = self.netG.forward(input_concat)
# Fake Detection and Loss
pred_fake_pool = self.discriminate(input_label, fake_image, use_pool=True)
My input_label is [1, 3, 852, 1024], but fake_image is [1, 3, 864, 1024]. I used the following to make the code run:
fake_image = fake_image[:, :, :input_label.shape[2], :]
However, I'm not sure whether it affects the model's performance.
Dropping rows directly will affect results; interpolating to the required size has less impact: fake_image = F.interpolate(fake_image, size=(, ), ............)
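The interpolation workaround suggested here can be sketched as follows (a minimal sketch: `match_size` is a hypothetical helper name, and `mode='bilinear'` with `align_corners=False` is my assumption, not something specified in the thread):

```python
import torch
import torch.nn.functional as F

def match_size(fake_image, input_label):
    # Resize the generator output to the label's spatial size via
    # bilinear interpolation instead of cropping rows off the end.
    if fake_image.shape[2:] != input_label.shape[2:]:
        fake_image = F.interpolate(
            fake_image, size=tuple(input_label.shape[2:]),
            mode='bilinear', align_corners=False)
    return fake_image
```

With the shapes from the comment above, an 864×1024 fake_image would be interpolated down to 852×1024 so that torch.cat in discriminate() succeeds.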
I am training on my own dataset: a set of images and their corresponding label maps.
I ran python train.py --name clayseam --batchSize 4 --label_nc 2 --no_instance --dataroot /pub2/sw5park/p2phd, where the dataroot folder contains four folders: train_label, train_img, test_label, test_img.
This gives the following error:
Traceback (most recent call last):
  File "train.py", line 71, in <module>
    Variable(data['image']), Variable(data['feat']), infer=save_fake)
  File "/home/sw5park/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/sw5park/venv/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 153, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/sw5park/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/home/sw5park/generative/pix2pixHD/models/pix2pixHD_model.py", line 166, in forward
    pred_fake_pool = self.discriminate(input_label, fake_image, use_pool=True)
  File "/home/sw5park/generative/pix2pixHD/models/pix2pixHD_model.py", line 145, in discriminate
    input_concat = torch.cat((input_label, test_image.detach()), dim=1)
RuntimeError: Sizes of tensors must match except in dimension 2. Got 560 and 546 (The offending index is 0)
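The error occurs because the generator's down/upsampling path only reproduces the input size exactly when both spatial dimensions are divisible by 32, so the 546-row label comes back as a differently sized fake image and torch.cat fails. An alternative to cropping or interpolating after the fact is to pad the inputs up front. A minimal sketch (`pad_to_multiple` is a hypothetical helper, not part of the repo; reflect padding is my choice):

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(x, multiple=32):
    # Pad H and W up to the next multiple so the generator's
    # down/upsampling path returns a tensor of the same size.
    h, w = x.shape[2], x.shape[3]
    ph = (multiple - h % multiple) % multiple
    pw = (multiple - w % multiple) % multiple
    # pad order is (left, right, top, bottom) over the last two dims
    return F.pad(x, (0, pw, 0, ph), mode='reflect'), (h, w)
```

After the forward pass, the output can be cropped back with fake_image[:, :, :h, :w], so the label and fake image stay aligned pixel for pixel.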