fidler-lab / polyrnn-pp-pytorch

PyTorch training/tool code for Polygon-RNN++ (CVPR 2018)

Lovasz softmax instead of RL finetuning? #25

Closed: shivamsaboo17 closed this issue 5 years ago

shivamsaboo17 commented 5 years ago

Lovasz softmax, as described in this paper (https://arxiv.org/pdf/1705.08790.pdf), is a differentiable loss that can be used to optimize the intersection over union. Did you try using it instead of RL finetuning? What could we expect if we used it in place of the RL finetuning?
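For reference, here is a minimal sketch of the binary (hinge) variant of that loss for a flattened mask, following the construction in the paper. The function names and signatures are mine, not from this repo:

```python
import torch
import torch.nn.functional as F

def lovasz_grad(gt_sorted):
    """Gradient of the Lovasz extension w.r.t. sorted errors (Alg. 1 in the paper).
    gt_sorted: ground-truth labels sorted by descending error; no grad flows through it."""
    gt = gt_sorted.float()
    gts = gt.sum()
    intersection = gts - gt.cumsum(0)
    union = gts + (1.0 - gt).cumsum(0)
    jaccard = 1.0 - intersection / union
    if len(gt) > 1:
        jaccard[1:] = jaccard[1:] - jaccard[:-1]
    return jaccard

def lovasz_hinge_flat(logits, labels):
    """Binary Lovasz hinge loss.
    logits: [P] raw per-pixel scores, labels: [P] in {0, 1} (flattened mask)."""
    signs = 2.0 * labels.float() - 1.0
    errors = 1.0 - logits * signs                      # hinge errors
    errors_sorted, perm = torch.sort(errors, dim=0, descending=True)
    grad = lovasz_grad(labels[perm])                   # weights from the Lovasz extension
    return torch.dot(F.relu(errors_sorted), grad)      # gradient flows only through logits
```

Note that this still takes per-pixel logits as input, so in the polygon setting the predicted polygon would presumably have to be turned into per-pixel scores somehow.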

amlankar commented 5 years ago

The problem of differentiably rendering a polygon into a mask still remains in this case, afaik.

In our curve-GCN paper, we differentiably rendered the polygon into a mask and then used a loss function in pixel space to train.
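For intuition only, here is a toy sketch of one way a single triangle can be rasterized into a soft mask that stays differentiable with respect to its vertex coordinates, using soft half-plane tests. This is not the actual TriRender2d kernel (which is implemented differently); the function name and the `tau` sharpness parameter are just illustrative:

```python
import torch

def soft_triangle_mask(verts, H, W, tau=25.0):
    """Toy differentiable rasterizer for a single triangle (NOT TriRender2d).
    verts: [3, 2] (x, y) coords in [0, 1]; returns an [H, W] soft occupancy mask.
    A pixel is inside iff it lies on the inner side of all three edges; each
    half-plane test is softened with a sigmoid so gradients reach the vertices."""
    ys = (torch.arange(H, dtype=torch.float32) + 0.5) / H
    xs = (torch.arange(W, dtype=torch.float32) + 0.5) / W
    gy, gx = torch.meshgrid(ys, xs, indexing="ij")   # pixel centres

    # twice the signed area of the triangle: its sign encodes the winding order
    area = (verts[1, 0] - verts[0, 0]) * (verts[2, 1] - verts[0, 1]) \
         - (verts[1, 1] - verts[0, 1]) * (verts[2, 0] - verts[0, 0])
    s = torch.sign(area).detach()

    mask = torch.ones(H, W)
    for i in range(3):
        a, b = verts[i], verts[(i + 1) % 3]
        # 2D cross product of the edge with (pixel - a): one sign on the inner side
        cross = (b[0] - a[0]) * (gy - a[1]) - (b[1] - a[1]) * (gx - a[0])
        mask = mask * torch.sigmoid(tau * s * cross)  # soft half-plane test
    return mask
```

The key property is the one described above: the rendered mask is a differentiable function of the vertex coordinates, so a pixel-space loss against the ground-truth mask back-propagates to the polygon. The actual TriRender2d implementation differs in how it performs the rasterization and computes its gradients, so please treat this purely as intuition.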

shivamsaboo17 commented 5 years ago

Thanks for replying. I was following the curve-GCN paper and had a doubt about how you keep the rendering process differentiable. More precisely, I was not able to relate the code (the TriRender2d class) to the paper. Could you briefly describe what is actually happening in the code (how the triangles are rendered in PyTorch so that autograd can be used), or point me to a resource that would help me understand this process better? Thanks!

amlankar commented 5 years ago

Closing here since the discussion is being continued at https://github.com/fidler-lab/curve-gcn/issues/6.