jgj1986 closed this issue 7 years ago
The reason for calling only self.net_s.loss
is that, by definition, FlowNet-CS is created by:
1) training FlowNet-C,
2) fixing the FlowNet-C weights and appending a randomly initialized FlowNet-S,
3) training the FlowNet-S part of this newly created FlowNet-CS.
Training the whole CS stack in one go will not necessarily produce the same results. If you want to do that, create a custom CS class based on FlowNetCS
with the correct loss, along the lines of the sketch below.
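A minimal sketch of such a class, assuming FlowNetCS exposes self.net_c and self.net_s and that each sub-network's loss takes the ground-truth flow plus its own predictions; the prediction keys, the import path, and the equal weighting below are illustrative, and the exact signatures in net.py may differ:

```python
# Sketch only: a FlowNet-CS variant whose loss trains both sub-networks.
import tensorflow as tf

from .flownet_cs import FlowNetCS  # import path is illustrative


class FlowNetCSJoint(FlowNetCS):
    """FlowNet-CS trained end-to-end instead of S-only."""

    def loss(self, flow, predictions):
        # 'flow_c' / 'flow_s' are hypothetical keys; use whatever
        # your model() actually returns for each sub-network.
        loss_c = self.net_c.loss(flow, predictions['flow_c'])
        loss_s = self.net_s.loss(flow, predictions['flow_s'])
        # Weighting the two terms equally is an arbitrary choice.
        return loss_c + loss_s
```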
The flow warp operation does have a defined gradient, so the backward pass should work. Make sure you're on the latest version of the code, since I fixed this recently. Your code looks correct.
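One quick way to check this in your own checkout, assuming the repo exposes a flow_warp(image, flow) op (the import path below is illustrative): if tf.gradients returns [None] for the flow input, no gradient is registered and the backward pass will fail.

```python
# Sketch only: confirm the flow_warp op has a registered gradient.
import numpy as np
import tensorflow as tf

from .flow_warp import flow_warp  # import path is illustrative

image = tf.constant(np.random.rand(1, 8, 8, 3).astype(np.float32))
flow = tf.Variable(np.zeros((1, 8, 8, 2), dtype=np.float32))

warped = flow_warp(image, flow)
grads = tf.gradients(tf.reduce_sum(warped), [flow])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(grads[0] is not None)           # True if a gradient is registered
    if grads[0] is not None:
        print(sess.run(grads[0]).shape)   # (1, 8, 8, 2)
```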
I have the same problem. Is there any possible solution for this error?
@happyfootgogo, please make sure you have the latest version of the code. If you still have this issue, please make a separate issue with all of the details.
When reading the code for FlowNet_CS, I found that train() in net.py calls the loss() function, which only calls self.net_s.loss, not self.net_c.loss(). So self.net_c CANNOT be trained when training FlowNet_CS; the code uses a trained FlowNet_C model as self.net_c for FlowNet_CS. Right? As you know, datasets with ground-truth flow are very small, so I want to train the net with warped images, such as:
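A minimal sketch of this kind of warping-based photometric loss, assuming the repo's flow_warp(image, flow) op; the helper name, the import path, and the L1 penalty are illustrative choices:

```python
# Sketch only: unsupervised photometric loss built on flow_warp.
import tensorflow as tf

from .flow_warp import flow_warp  # import path is illustrative


def photometric_loss(image_a, image_b, predicted_flow):
    # Backward-warp image_b into the frame of image_a using the flow.
    warped_b = flow_warp(image_b, predicted_flow)
    # L1 photometric difference; gradients must flow through flow_warp
    # back into predicted_flow for this to train anything.
    return tf.reduce_mean(tf.abs(warped_b - image_a))
```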
This code needs to use _flow_warp_grad(), but I got errors when running it:
So, does flow_warp allow a backward pass during training, or is there a problem in my code? Thanks!