andyhahaha opened 5 years ago
Hi, andyhahaha
Thank you for the reminder. The issue is that Caffe's Reduction layer only supports reduction along ALL "tail" axes; see http://caffe.berkeleyvision.org/tutorial/layers/reduction.html. So there is no direct way to translate sum(dim=1, keepdim=True).
There is one way to work around this: replace the forward function of the LRN class in PyTorch with an operation unsupported by nn_tools, such as F.relu6(), so that the layer is transferred as a Python layer. Then you can manually change the type of that layer to LRN in the transferred Caffe prototxt.
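A minimal sketch of that workaround (the module name and shapes here are hypothetical; F.relu6 is just a stand-in op that nn_tools does not support):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class L2Norm(nn.Module):
    """Hypothetical normalization module being converted."""
    def forward(self, x):
        # Real computation, which pytorch2caffe cannot map directly:
        #   return x / x.pow(2).sum(dim=1, keepdim=True).sqrt()
        # Placeholder for conversion: an op nn_tools does not support,
        # so the layer is emitted as a Python layer in the prototxt;
        # rename its type to LRN by hand afterwards.
        return F.relu6(x)

x = torch.randn(2, 4, 6, 6)
out = L2Norm()(x)  # shape is preserved
```

After conversion, restore the original forward so the PyTorch model still computes the real normalization.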
I hope this helps.
Zhihang
Hi, Zhihang
Thanks for the reply! But I don't think the code reaches sum(dim=1, keepdim=True) yet. I set a pdb breakpoint and found that it stops in the function "_add", between x.pow(2).sum(dim=1, keepdim=True).sqrt() and self.eps.
After _add runs to top_blobs = log.add_blobs([x], name='add_blob'), the error occurs at layer = caffe_net.Layer_param(name=layer_name, type='Eltwise', bottom=[log.blobs(input), log.blobs(args[0])], top=top_blobs): log.blobs(input) / log.blobs(args[0]) raise KeyError. Do you know why it fails in the add function?
Thanks!
Oh, that is because in the _add function, adding a float was not handled; only element-wise tensor addition was considered. I have fixed this on the latest master branch.
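For context, the two cases the converter has to distinguish look roughly like this (a sketch, not nn_tools' actual code):

```python
import torch

a = torch.randn(2, 3)
b = torch.randn(2, 3)

t = a + b      # tensor + tensor: both operands exist as logged blobs,
               # so this maps cleanly to a Caffe Eltwise SUM layer
s = a + 1e-10  # tensor + Python float: the scalar was never registered
               # as a blob, so looking it up with log.blobs(args[0])
               # raised KeyError before the fix
```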
It works, thanks! But after that I encountered another issue. The error message says an operation is not implemented. Does it mean cat or view? But I can find both of those functions implemented in your code.
Thanks for help!
Sorry! I found that it is caused by the permute function.
Hi Zhihang, I did the permute myself and it seems to work. But after permute, PyTorch requires contiguous() before view(), and contiguous() raises this bug again: WARNING: CANNOT FOUND blob at layer None, this may cause a NoneType Error. This may caused by the previous operation which produce the blob(tensor) is not implemented in nn_tools. You can issue this at https://github.com/hahnyuan/nn_tools/issues.
Caffe doesn't need a contiguous function. How can I get through it? My model is almost done. Thanks for the help!
Hi, I ran into a similar situation. Did you manage to convert the PyTorch model to a Caffe model successfully?
I got rid of the contiguous() calls and used reshape instead of view. view can only operate on a contiguous tensor, so it needs contiguous() before it; reshape doesn't.
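A small sketch of why this substitution works (shapes are arbitrary examples):

```python
import torch

x = torch.randn(2, 3, 4)
y = x.permute(0, 2, 1)   # permute returns a non-contiguous tensor

# y.view(2, 12) would raise a RuntimeError here, because view
# requires contiguous memory; you'd need y.contiguous().view(2, 12).
# reshape performs the copy internally when needed:
z = y.reshape(2, 12)
```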
I'm trying to convert RefineDet to a Caffe model. RefineDet only contains regular CNN modules (Conv2d, ConvTranspose2d, MaxPool, BatchNorm). https://github.com/luuuyi/RefineDet.PyTorch However, pytorch2caffe fails at an add operation as follows:
This add operation comes from here: https://github.com/luuuyi/RefineDet.PyTorch/blob/0e4b24ce07245fcb8c48292326a731729cc5746a/layers/modules/l2norm.py#L20 Why does pytorch2caffe fail at such a simple operation? Thanks for the help!
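Note that the add on that line is a tensor-plus-scalar add (the eps term), which is the same case the earlier _add fix addressed, so it is worth checking that you are on the latest master branch. A sketch of what that line computes (the eps value and input shape here are hypothetical):

```python
import torch

eps = 1e-10                      # hypothetical scalar epsilon
x = torch.randn(1, 512, 8, 8)    # hypothetical feature map

# L2 norm over channels, then a tensor + Python-float add:
norm = x.pow(2).sum(dim=1, keepdim=True).sqrt() + eps
out = x / norm                   # normalized feature map, same shape as x
```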