Zhaoyi-Yan / Shift-Net_pytorch

Pytorch implementation of Shift-Net: Image Inpainting via Deep Feature Rearrangement (ECCV, 2018)
http://openaccess.thecvf.com/content_ECCV_2018/papers/Zhaoyi_Yan_Shift-Net_Image_Inpainting_ECCV_2018_paper.pdf
MIT License

Guide Loss: how is target passed to InnerCosFunction #123

Closed · Emy-cv closed this issue 4 years ago

Emy-cv commented 4 years ago

Thank you for your amazing work. I am confused about the implementation of the guide loss. Could you tell me how the target is obtained? In the first iteration, the ground truth is passed to the net, so the real target is contained in the former part of in_data, but the stored target at that point is still 1. Yet in InnerCosFunction, the target is expanded from its own value of 1. Is that right?

(screenshots of the relevant code omitted)

Thank you very much for your time.
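(For context, since the screenshots are not reproduced here: the pattern being asked about is roughly the following. This is a paraphrased sketch of a custom autograd function in the spirit of InnerCosFunction; the argument order, the masking, and the backward details are assumptions, not the exact repository code.)

```python
import torch

class InnerCosFunction(torch.autograd.Function):
    # Paraphrased sketch, not the exact repo code.
    @staticmethod
    def forward(ctx, input, criterion, strength, target, mask):
        ctx.criterion = criterion
        ctx.strength = strength
        c = input.size(1)
        # If target is still the initial scalar (e.g. 1), it is broadcast to the
        # shape of the decoder half of the features -- the expand_as in question.
        if target.numel() == 1:
            target = target.expand_as(input.narrow(1, c // 2, c // 2)).type_as(input)
        ctx.save_for_backward(input, target, mask)
        return input

    @staticmethod
    def backward(ctx, grad_output):
        input, target, mask = ctx.saved_tensors
        c = input.size(1)
        # Recompute the guidance loss on the decoder half and add its gradient.
        with torch.enable_grad():
            latter = input.narrow(1, c // 2, c // 2).detach().requires_grad_(True)
            loss = ctx.criterion(latter * mask, target) * ctx.strength
            loss.backward()
        grad_input = grad_output.clone()
        grad_input.narrow(1, c // 2, c // 2).add_(latter.grad)
        return grad_input, None, None, None, None
```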

Emy-cv commented 4 years ago

Should expand_as be changed to clone?

Zhaoyi-Yan commented 4 years ago

For the first loop: 2 forward passes + 1 backward pass make up the whole training procedure.

1st forward: the target is a tensor with a dummy value (any value is fine); in this forward we only obtain the real target by passing the corresponding ground truth. (The guidance loss computed here is useless, and we DO NOT backward this loss.)

2nd forward: the corrupted image is passed, the target is the one we need, and the loss is defined as expected. This forward also generates a new target (from the corrupted image), but that newly generated target is useless.

1 backward: the guidance loss works as expected.

A new loop: in its 1st forward, the target is the tensor generated in the former loop (i.e. by passing the corrupted images of the previous batch). ...
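Schematically, the schedule above looks like this (a self-contained toy sketch; netG, the losses, and the data here are stand-ins, not the repo's actual training code):

```python
import torch
import torch.nn as nn

# Toy stand-ins just to make the two-forward / one-backward schedule concrete;
# netG, the loss, and the data are NOT the repo's actual components.
netG = nn.Conv2d(3, 3, 3, padding=1)
optimizer = torch.optim.Adam(netG.parameters(), lr=1e-4)
reconstruction_loss = nn.L1Loss()

for _ in range(2):  # two "loops" (batches)
    gt = torch.rand(1, 3, 64, 64)
    corrupted = gt.clone()
    corrupted[:, :, 16:48, 16:48] = 0  # fake hole

    # 1st forward: pass the ground truth so each guidance layer can record its
    # real target from the gt features; this pass is never backpropagated.
    with torch.no_grad():
        netG(gt)

    # 2nd forward: pass the corrupted image. The guidance loss now uses the
    # target recorded in the 1st forward. (It also overwrites the stored target,
    # but that stale target only affects the useless 1st forward of the next loop.)
    fake = netG(corrupted)
    loss = reconstruction_loss(fake, gt)  # + GAN loss in the real code

    # 1 backward: in the real code the guidance gradient is injected here by the
    # custom autograd function (InnerCosFunction.backward).
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```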

Emy-cv commented 4 years ago

Thanks so much for your clear explanation; I understand now. In the first forward, the target is saved in self.target by the line following InnerCosFunction.apply, and the information saved by ctx.save_for_backward(input, target, mask) in that first forward is not used.
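For anyone else who lands here, the point about self.target can be summarized with a minimal sketch of the guidance layer (assuming a module along the lines of the repo's InnerCos, using the InnerCosFunction sketched earlier in this thread; the names and the masking detail are approximations, not the exact code):

```python
import torch
import torch.nn as nn

# InnerCosFunction: the custom autograd function sketched earlier in this thread.

class InnerCos(nn.Module):
    # Paraphrased sketch of the guidance-loss layer, not the exact repo code.
    def __init__(self, strength=1.0):
        super().__init__()
        self.strength = strength
        self.criterion = nn.MSELoss()
        self.target = torch.tensor(1.0)  # dummy value, only used in the very first forward
        self.mask = None                 # set externally for each batch

    def forward(self, in_data):
        c = in_data.size(1)
        former = in_data.narrow(1, 0, c // 2)  # encoder half of the concatenated features
        # apply() consumes self.target as recorded by the *previous* forward; in the
        # very first forward that is just the dummy scalar, and the corresponding
        # loss is never backpropagated anyway.
        out = InnerCosFunction.apply(in_data, self.criterion, self.strength,
                                     self.target, self.mask)
        # The line *after* apply() is what records the real target: the encoder
        # features seen in this forward become the target used by the next forward.
        # (The real code may also restrict this to the masked region.)
        self.target = former.detach()
        return out
```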