Closed ergysr closed 7 years ago
Hi, I have checked your logfile.txt and found it to be exactly the same as mine. In theory, the downsample layer has no loss to backpropagate. You may want to double-check your train.prototxt and your Makefile.config.
template <typename Dtype>
void DownsampleLayer<Dtype>::Backward_cpu(const vector<Blob<Dtype>*>& top,
const vector<bool>& propagate_down, const vector<Blob<Dtype>*>& bottom) {
LOG(FATAL) << "DownsamplingLayer cannot do backward.";
}
Thanks for your reply. I found the issue to be in net.cpp, where FlowNet adds an extra check:
if (!layer->AllowBackward()) need_backward = false;
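Caffe's real Net::Init is much more involved, but the effect of that one-line guard can be sketched in isolation. The names below (FakeLayer, compute_need_backward) are illustrative, not Caffe's API:

```python
# Minimal sketch of the extra check FlowNet makes in net.cpp: a layer whose
# AllowBackward() returns false is excluded from the backward pass, so its
# (unimplemented) Backward_cpu is never invoked. All names are illustrative.

class FakeLayer:
    def __init__(self, name, allow_backward):
        self.name = name
        self._allow_backward = allow_backward

    def allow_backward(self):
        return self._allow_backward


def compute_need_backward(layers):
    """Mimic: if (!layer->AllowBackward()) need_backward = false;"""
    need_backward = {}
    for layer in layers:
        need = True  # what the net would otherwise decide from the graph
        if not layer.allow_backward():
            need = False
        need_backward[layer.name] = need
    return need_backward


layers = [FakeLayer("conv1", True), FakeLayer("Downsample1", False)]
print(compute_need_backward(layers))
```

With the guard applied, the Downsample layer is simply skipped during backward instead of hitting the LOG(FATAL).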
I'm having a similar issue in FlowNet2 @ergysr -- what was the issue and how did you fix it?
Hi @el3ment, the issue is that the downsample layer does not implement the backpropagation function, but in this network's architecture it is expected to pass the flow error backward to the previous layer.
You may do the same to bypass the backward propagation.
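The contrast between the stock behavior and the bypass can be sketched as follows. This is not Caffe code, just an illustrative Python model of the two choices (class names and shapes are assumptions):

```python
import numpy as np

# Illustrative sketch of the two behaviours discussed in this thread:
# the stock DownsampleLayer aborts when asked to backprop, while a
# "bypass" variant writes zero gradients and lets training continue.

class DownsampleLayerStrict:
    def backward(self, top_diff, bottom_shape):
        # Mirrors: LOG(FATAL) << "DownsamplingLayer cannot do backward.";
        raise RuntimeError("DownsamplingLayer cannot do backward.")


class DownsampleLayerBypass:
    def backward(self, top_diff, bottom_shape):
        # No gradient flows through: fill the bottom diff with zeros.
        return np.zeros(bottom_shape, dtype=top_diff.dtype)


top_diff = np.ones((1, 2, 4, 4), dtype=np.float32)
bottom_diff = DownsampleLayerBypass().backward(top_diff, (1, 2, 8, 8))
print(bottom_diff.shape, float(bottom_diff.sum()))
```

Note that zeroing the gradient only keeps training running; any loss attached above the downsample layer still contributes nothing to the layers below it.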
"in the architecture of this net" - what was done to pass the error through? I'm not sure why Caffe thinks the Downsample layer needs a backprop step, since I haven't made any changes to the proto file.
I am not sure about FlowNet2, but for FlowNet 1.0, if you use a different version of Caffe you may run into a compatibility issue that leads Caffe to run backprop for the downsample layer. You may want to use the exact copy of Caffe that ships with FlowNet 2.0, or modify the Caffe cpp file as ergysr did.
Hi! I am trying to train the network from scratch using the sample script
python train_flownet.py S
This is the full log, with the error that the downsampling layer cannot do backward: logfile.txt
I am using the master branch of Caffe, in case it matters. Using the pretrained model to produce a flow estimate works fine when running
demo_flownet.py S data/0000000-img0.ppm data/0000000-img1.ppm
Any suggestions?