conv1.0
conv: blob1
conv1 was added to layers
545911448128:conv_blob1 was added to blobs
add1 was added to layers
545911448416:add_blob1 was added to blobs
WARNING: CANNOT FOUND blob 8775320
8775320:extra_blob1 was added to blobs
conv1.1
batch_norm1 was added to layers
545911448272:batch_norm_blob1 was added to blobs
bn_scale1 was added to layers
conv1.2
relu1 was added to layers
545913981760:relu_blob1 was added to blobs
maxpool
max_pool1 was added to layers
545911447624:max_pool_blob1 was added to blobs
WARNING: the output shape miss match at max_pool1: input torch.Size([1, 24, 112, 112]) output---Pytorch:torch.Size([1, 24, 56, 56])---Caffe:torch.Size([1, 24, 57, 57])
This is caused by the different implementations: ceil mode in Caffe versus floor mode in PyTorch.
You can add a clip layer to the Caffe prototxt manually if a shape-mismatch error occurs in Caffe.
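The 56-vs-57 discrepancy above follows directly from the pooling output-size formula. A minimal stdlib-only sketch, assuming the common 3x3 / stride-2 / pad-1 max pool used by ShuffleNet-style stems (the exact kernel parameters are an assumption, not taken from the log):

```python
import math

def pool_out(size, kernel=3, stride=2, pad=1, ceil_mode=False):
    """Spatial output length of a pooling layer, per the standard formula."""
    rnd = math.ceil if ceil_mode else math.floor
    return rnd((size + 2 * pad - kernel) / stride) + 1

print(pool_out(112))                  # 56 (PyTorch default: floor mode)
print(pool_out(112, ceil_mode=True))  # 57 (Caffe: ceil mode)
```

With a 112-pixel input the two rounding modes disagree by exactly one pixel, which matches the `[1, 24, 56, 56]` vs `[1, 24, 57, 57]` shapes in the warning.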
stage2.0.branch1.0
conv: max_pool_blob1
conv2 was added to layers
545911447768:conv_blob2 was added to blobs
add2 was added to layers
545911418744:add_blob2 was added to blobs
WARNING: CANNOT FOUND blob 545913982624
Traceback (most recent call last):
  File "hopenet_to_caffe.py", line 15, in <module>
    pytorch_to_caffe.trans_net(net, input, name)
  File "./pytorch_to_caffe.py", line 786, in trans_net
    out = net.forward(input_var)
  File "./stable_hopenetlite.py", line 127, in forward
    x = self.stage2(x)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "./stable_hopenetlite.py", line 76, in forward
    out = torch.cat((self.branch1(x), self.branch2(x)), dim=1)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py", line 117, in forward
    input = module(input)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/batchnorm.py", line 113, in forward
    self.num_batches_tracked = self.num_batches_tracked + 1
  File "./pytorch_to_caffe.py", line 532, in _add
    bottom=[log.blobs(input), log.blobs(args[0])], top=top_blobs)
  File "./Caffe/layer_param.py", line 33, in __init__
    self.bottom.extend(bottom)
TypeError: None has type NoneType, but expected one of: bytes, unicode
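For context, the TypeError appears to come from the converter tracing the `+` in `self.num_batches_tracked + 1` inside BatchNorm's forward: that integer counter was never registered as a blob (hence the "CANNOT FOUND blob 545913982624" warning just above), so `log.blobs(args[0])` returns None. A minimal sketch of that failure mode, with hypothetical names standing in for the converter's internals:

```python
# Hypothetical mock of the converter's blob registry; these names are
# illustrative assumptions, not the actual pytorch_to_caffe internals.
blobs = {}

def register(tensor_id, name):
    blobs[tensor_id] = name

def blob_name(tensor_id):
    # Like log.blobs(): returns None for a tensor that was never traced,
    # e.g. BatchNorm's num_batches_tracked buffer updated inside forward().
    return blobs.get(tensor_id)

register(545911447768, "conv_blob2")

bottom = [blob_name(545911447768), blob_name(545913982624)]
print(bottom)  # ['conv_blob2', None]
```

Passing a list containing None to the protobuf `bottom.extend()` in layer_param.py is what raises the TypeError, since protobuf string fields reject None. A common workaround is to prevent the counter increment from being traced at all, e.g. by calling `net.eval()` before conversion (the increment only runs in training mode).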