ygjwd12345 / GLANet


Dimensional problems of the test #2

Closed · oopsboy closed this issue 2 years ago

oopsboy commented 2 years ago

I want to output large-size images.

train:

```
python train.py \
  --dataroot ./datasets/orange2tomato \
  --name orange2tomato \
  --model sc \
  --gpu_ids 0 \
  --lambda_spatial 10 \
  --lambda_gradient 0 \
  --attn_layers 4,7,9 \
  --loss_mode cos \
  --gan_mode lsgan \
  --display_port 8093 \
  --patch_size 64
```

test:

```
python test_fid.py \
  --dataroot ./datasets/orange2tomato \
  --checkpoints_dir ./checkpoints \
  --name orange2tomato \
  --gpu_ids 0 \
  --model sc \
  --num_test 1000 \
  --epoch 400 \
  --load_size 1024 \
  --crop_size 1024
```

Error:

```
Traceback (most recent call last):
  File "/home/jupyter-zhangziyi/zhangziyi/GLANet/test_fid.py", line 71, in <module>
    model.data_dependent_initialize(data)
  File "/home/jupyter-zhangziyi/zhangziyi/GLANet/models/sc_model.py", line 123, in data_dependent_initialize
    self.forward()
  File "/home/jupyter-zhangziyi/zhangziyi/GLANet/models/sc_model.py", line 152, in forward
    self.fake, self.unguided_mean, self.unguided_sigma, self.posterior_mean, self.posterior_sigma, self.posterior_sample = self.netG(self.real, self.real_B, True)
  File "/opt/tljh/user/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jupyter-zhangziyi/zhangziyi/GLANet/models/glanet.py", line 731, in forward
    source_style = self.style_encoder(source)
  File "/opt/tljh/user/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/tljh/user/lib/python3.9/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/opt/tljh/user/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/tljh/user/lib/python3.9/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/opt/tljh/user/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/jupyter-zhangziyi/zhangziyi/GLANet/models/glanet.py", line 673, in forward
    return self.fn(self.norm(x)) + x
  File "/opt/tljh/user/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/tljh/user/lib/python3.9/site-packages/torch/nn/modules/container.py", line 141, in forward
    input = module(input)
  File "/opt/tljh/user/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/tljh/user/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 301, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/opt/tljh/user/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 297, in _conv_forward
    return F.conv1d(input, weight, bias, self.stride,
RuntimeError: Given groups=1, weight of size [1024, 256, 1], expected input[1, 4096, 512] to have 256 channels, but got 4096 channels instead
```
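For context on the shapes in the error: the Conv1d weight [1024, 256, 1] expects 256 input channels, and 256 is exactly 16 × 16, i.e. the number of spatial positions of a 256×256 image after 16× downsampling, while a 1024×1024 image gives 64 × 64 = 4096 positions, which is the channel count the error reports. The snippet below is only a minimal reproduction of that mismatch under those assumptions (the 16× downsampling factor and the token layout are guesses, not GLANet code):

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the token-mixing Conv1d inside the style encoder:
# built for 256 tokens (a 16x16 grid from a 256x256 input downsampled 16x).
token_mixer = nn.Conv1d(in_channels=256, out_channels=1024, kernel_size=1)

def flatten_tokens(feat):
    # (B, C, H, W) -> (B, H*W, C): the tokens become the "channel" dim seen by the Conv1d
    return feat.flatten(2).transpose(1, 2)

# 256x256 input -> 16x16 feature map -> 256 tokens: works
ok = flatten_tokens(torch.randn(1, 512, 16, 16))
print(token_mixer(ok).shape)   # torch.Size([1, 1024, 512])

# 1024x1024 input -> 64x64 feature map -> 4096 tokens: same error as in the issue
bad = flatten_tokens(torch.randn(1, 512, 64, 64))
print(token_mixer(bad).shape)  # RuntimeError: expected input[1, 4096, 512] to have 256 channels
```

In other words, the style encoder appears to bake the number of tokens, and therefore the training image size, into its weights.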

ygjwd12345 commented 2 years ago

The problem should be in File "/home/jupyter-zhangziyi/zhangziyi/GLANet/models/glanet.py", line 673. Please check the variable dimensions and the self.norm setting.
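One possible direction, sketched below, is to pool the flattened tokens back down to the count the pretrained weights expect before the token-mixing layer. This is not a patch against the actual glanet.py; the module name, the (B, N, C) token layout and the fixed count of 256 are assumptions:

```python
import torch
import torch.nn as nn

class TokenPoolToFixedLength(nn.Module):
    """Hypothetical wrapper: pools a variable number of tokens down to the
    count the pretrained token-mixing Conv1d expects (e.g. 256), so the
    style encoder no longer hard-codes the input resolution."""
    def __init__(self, num_tokens=256):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(num_tokens)

    def forward(self, x):          # x: (B, N_tokens, C)
        x = x.transpose(1, 2)      # (B, C, N_tokens)
        x = self.pool(x)           # (B, C, num_tokens)
        return x.transpose(1, 2)   # (B, num_tokens, C)

# 4096 tokens from a 1024x1024 input are pooled back to the 256 the weights expect
tokens = torch.randn(1, 4096, 512)
print(TokenPoolToFixedLength(256)(tokens).shape)  # torch.Size([1, 256, 512])
```

Pooling like this keeps the checkpoint usable at larger test sizes but throws away spatial detail; retraining at the target resolution (if train.py accepts the same --load_size/--crop_size options as test_fid.py) would be the cleaner option.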

oopsboy commented 2 years ago

> The problem should be in File "/home/jupyter-zhangziyi/zhangziyi/GLANet/models/glanet.py", line 673. Please check the variable dimensions and the self.norm setting.

How can I solve this problem? I want to feed in large-size images and get large-size outputs.
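For now, a workaround I could live with would be to keep --load_size/--crop_size at the values used for training and upscale the generated results afterwards, e.g. with Pillow (the paths below are placeholders for wherever test_fid.py writes its images):

```python
from pathlib import Path
from PIL import Image

# Placeholder paths: point src at the results folder test_fid.py produced,
# and dst at a new folder for the upscaled copies.
src = Path("./results/orange2tomato/test_400/images")
dst = Path("./results/orange2tomato/test_400/images_1024")
dst.mkdir(parents=True, exist_ok=True)

for img_path in src.glob("*.png"):
    img = Image.open(img_path).convert("RGB")
    img = img.resize((1024, 1024), Image.BICUBIC)  # upscale to the desired output size
    img.save(dst / img_path.name)
```

This only interpolates the low-resolution outputs, so fine detail at 1024×1024 would still require retraining or a resolution-agnostic style encoder.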