happsky opened this issue 6 years ago
Are you using pytorch 0.4?
import torch
print(torch.__version__)
My version is 0.3.0.post4, should I update to 0.4?
Yes
After updating:
export CUDA_VISIBLE_DEVICES=0; python main.py -a inception_v3 ./cat2dog --batch-size 16 --print-freq 1 --pretrained;
=> using pre-trained model 'inception_v3'
Traceback (most recent call last):
File "main.py", line 313, in
export CUDA_VISIBLE_DEVICES=0; python main.py -a inception_v3 ./cat2dog --batch-size 16 --print-freq 1 --pretrained;
=> using pre-trained model 'inception_v3'
torch.Size([16, 3, 299, 299])
Traceback (most recent call last):
File "main.py", line 324, in
@happsky see issue 4884. Increasing the input size or adapting the network architecture seems to fix this, as the kernel size in later layers can become larger than the corresponding feature map.
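To make the shrinking-feature-map point concrete, here is a rough, torch-free sketch that traces the spatial size through torchvision's inception_v3 stem down to the auxiliary classifier, using the standard conv/pool output-size formula. The layer list (kernels, strides, paddings) is my reading of the torchvision source and is an approximation, not the library's API:

```python
def conv_out(size, kernel, stride=1, padding=0):
    # Standard conv/pool output size with floor division,
    # as documented for torch.nn.Conv2d.
    return (size + 2 * padding - kernel) // stride + 1

def trace_to_aux(size):
    # Stem of torchvision's inception_v3 (approximate layer list):
    size = conv_out(size, 3, stride=2)   # Conv2d_1a_3x3
    size = conv_out(size, 3)             # Conv2d_2a_3x3
    size = conv_out(size, 3, padding=1)  # Conv2d_2b_3x3
    size = conv_out(size, 3, stride=2)   # max pool 1
    size = conv_out(size, 1)             # Conv2d_3b_1x1
    size = conv_out(size, 3)             # Conv2d_4a_3x3
    size = conv_out(size, 3, stride=2)   # max pool 2
    # Mixed_5b..5d preserve size (padded branches); Mixed_6a halves it:
    size = conv_out(size, 3, stride=2)
    # Mixed_6b..6e preserve size; the aux head then applies a 5x5 avg
    # pool with stride 3 before its 5x5 conv:
    size = conv_out(size, 5, stride=3)
    return size  # the next aux-head layer is a 5x5 conv

print(trace_to_aux(299))  # 5 -> the 5x5 conv fits
print(trace_to_aux(224))  # 3 -> "Kernel size can't be greater than actual input size"
```

With a 299x299 input the aux branch sees a 5x5 map, so its 5x5 conv works; with 224x224 it sees only 3x3, which matches the error message in this thread.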
I came across the same problem, too.
Can't I train the inception_v3 model with 224x224 inputs?
Is increasing the input size the only solution?
I observed the following error while executing the inception v3 sample:
RuntimeError: Calculated padded input size per channel: (3 x 3). Kernel size: (5 x 5). Kernel size can't be greater than actual input size
Then I came across PR https://github.com/pytorch/examples/pull/268 and tried to use the fix, but it doesn't work for me.
I get the following error:
return type(out)(map(gather_map, zip(*outputs)))
TypeError: __new__() missing 1 required positional argument: 'aux_logits'
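The `aux_logits` TypeError can be reproduced without torch. When `aux_logits=True`, inception_v3 returns a namedtuple-like pair of outputs, and DataParallel's gather rebuilds the output via roughly `type(out)(map(...))`; a namedtuple's `__new__` needs each field as a separate positional argument, so the single map object is not enough. A minimal sketch (the gather lambda here is a stand-in, not torch's actual code):

```python
from collections import namedtuple

# Stand-in for torchvision's (logits, aux_logits) inception output.
InceptionOutputs = namedtuple('InceptionOutputs', ['logits', 'aux_logits'])

out = InceptionOutputs(logits=[1.0], aux_logits=[0.5])
outputs = [out, out]  # per-GPU results that DataParallel would gather

try:
    # Roughly what the failing gather does: rebuild the output type
    # from a single map object. namedtuple.__new__ then sees only one
    # positional argument and complains about the missing field.
    type(out)(map(lambda group: sum(group, []), zip(*outputs)))
except TypeError as exc:
    print(exc)  # e.g. "__new__() missing 1 required positional argument: 'aux_logits'"

# Unpacking the map gives each field as its own argument and works:
fixed = type(out)(*map(lambda group: sum(group, []), zip(*outputs)))
print(fixed.logits)      # [1.0, 1.0]
print(fixed.aux_logits)  # [0.5, 0.5]
```

Alternatively, constructing the model with `aux_logits=False` sidesteps the tuple output entirely (at the cost of the auxiliary loss).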
python main.py -a inception_v3 ./imagenet/cat2dog --batch-size 16 --print-freq 1 --pretrained;
=> using pre-trained model 'inception_v3'
Traceback (most recent call last):
File "main.py", line 314, in