Hi, thanks for sharing such a cool codebase!
I am trying to evaluate your approach on WebVision 1.0, and there are two different versions of the dataset available (the original version and the resized version).
Which version did you use in your paper?
Thanks a lot!
Hi, I used the resized version. Thanks!
Thanks a lot, that's very helpful!
Hello, I have another question.
For Inception-ResNet v2, the input size is 299x299. However, in dataloader_webvision.py all input images are cropped to 227x227, and when I run Train_webvision_parallel.py I get the error below. How can I deal with it? Thanks a lot!
```
Traceback (most recent call last):
  File "/home/zixiao/.conda/envs/torch12py37/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/home/zixiao/.conda/envs/torch12py37/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/home/zixiao/work/DivideMix/Train_webvision_parallel.py", line 156, in test
    outputs1 = net1(inputs)
  File "/home/zixiao/.conda/envs/torch12py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zixiao/work/DivideMix/InceptionResNetV2.py", line 304, in forward
    x = self.logits(x)
  File "/home/zixiao/work/DivideMix/InceptionResNetV2.py", line 297, in logits
    x = self.avgpool_1a(features)
  File "/home/zixiao/.conda/envs/torch12py37/lib/python3.7/site-packages/torch/nn/modules/module.py", line 547, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/zixiao/.conda/envs/torch12py37/lib/python3.7/site-packages/torch/nn/modules/pooling.py", line 551, in forward
    self.padding, self.ceil_mode, self.count_include_pad, self.divisor_override)
RuntimeError: Given input size: (1536x5x5). Calculated output size: (1536x0x0). Output size is too small
```
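For context, the failure can be reproduced in isolation: avgpool_1a is an 8x8 average pool, and a 299x299 input reaches it as an 8x8 feature map, while a 227x227 input leaves only 5x5, so the pool computes a 0x0 output. A minimal sketch (the kernel size and count_include_pad flag are assumed from the widely used pretrained-models-style InceptionResNetV2, not read from the repo's InceptionResNetV2.py):

```python
import torch
import torch.nn as nn

# avgpool_1a as in the common InceptionResNetV2 implementation (assumed):
# an 8x8 average pool over the final 1536-channel feature map.
pool = nn.AvgPool2d(8, count_include_pad=False)

print(pool(torch.randn(1, 1536, 8, 8)).shape)  # torch.Size([1, 1536, 1, 1]) -- from a 299x299 input
pool(torch.randn(1, 1536, 5, 5))  # RuntimeError: ... Output size is too small -- from a 227x227 input
```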
I have updated the dataloader file; it should now run on the latest PyTorch. Thanks for pointing out the error.
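For anyone still on an older checkout, the essential change is the crop size in dataloader_webvision.py: the test transform has to produce 299x299 tensors so that an 8x8 feature map reaches avgpool_1a. A minimal sketch using standard torchvision transforms (the 320 resize value and the normalization statistics are illustrative placeholders, not copied from the updated file):

```python
from torchvision import transforms

# Crop to 299x299 (instead of 227x227) so InceptionResNetV2's final
# average pool sees an 8x8 feature map. The 320 resize and the
# normalization statistics are illustrative, not the repo's exact values.
transform_test = transforms.Compose([
    transforms.Resize(320),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize((0.485, 0.456, 0.406),
                         (0.229, 0.224, 0.225)),
])
```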
ShiningSord closed this issue 4 years ago.