lucastabelini / PolyLaneNet

Code for the paper entitled "PolyLaneNet: Lane Estimation via Deep Polynomial Regression" (ICPR 2020)
https://arxiv.org/abs/2004.10924
MIT License

Tensorflow implementation #22

Closed LUUTHIENXUAN closed 4 years ago

LUUTHIENXUAN commented 4 years ago

Hello. You did a very good job. I am trying my own implementation in TensorFlow on Google Colab, but it is not working as expected. The loss stays around 78 and will not decrease any further. Could you find some time to review it?

lucastabelini commented 4 years ago

No, unfortunately.

LUUTHIENXUAN commented 4 years ago

Never mind, I found the cause. The total loss now reaches around 4~5.

Anyway, can you explain the code below?

```python
def forward(self, x, epoch=None, **kwargs):
    output, extra_outputs = self.model(x, **kwargs)
    for i in range(len(self.curriculum_steps)):
        if epoch is not None and epoch < self.curriculum_steps[i]:
            output[-len(self.curriculum_steps) + i] = 0
    return output, extra_outputs
```

In particular, why do we need to zero out part of the output before feeding it to the loss function, as in this line?

```python
output[-len(self.curriculum_steps) + i] = 0
```

lucastabelini commented 4 years ago

That's just a leftover from some experiments I did with curriculum learning. That line has no effect with the default config files, so you can ignore it (or remove it).
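For illustration, here is a minimal standalone sketch of what that masking loop does. The `curriculum_steps` values below are hypothetical; with steps that are all zero (which is what "no effect with the default configs" implies), the condition is never true and nothing is zeroed:

```python
import torch

# Standalone sketch of the curriculum masking loop (hypothetical values).
# Each entry in curriculum_steps guards one of the last model outputs:
# while the current epoch is below that step, the output is forced to zero.
curriculum_steps = [3, 2, 1, 0]                # hypothetical schedule
output = torch.arange(7, dtype=torch.float32)  # stand-in for a model output

epoch = 1
for i in range(len(curriculum_steps)):
    if epoch is not None and epoch < curriculum_steps[i]:
        output[-len(curriculum_steps) + i] = 0

print(output)  # tensor([0., 1., 2., 0., 0., 5., 6.]) — steps 3 and 2 exceed epoch 1
```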

LUUTHIENXUAN commented 4 years ago

After a few trials, I could not get loss values as low as the ones in your logs. I have reviewed your code many times but still could not get it right.

Here is your loss calculation:

```python
# applying weights to partial losses
poly_loss = poly_loss * poly_weight
lower_loss = lower_loss * lower_weight
upper_loss = upper_loss * upper_weight
cls_loss = cls_loss * cls_weight
conf_loss = bce(pred_confs, target_confs) * conf_weight

loss = conf_loss + lower_loss + upper_loss + poly_loss + cls_loss

return loss, {
    'conf': conf_loss,
    'lower': lower_loss,
    'upper': upper_loss,
    'poly': poly_loss,
    'cls_loss': cls_loss
}
```

Here are your loss parameters:

```yaml
loss_parameters:
  conf_weight: 1
  lower_weight: 1
  upper_weight: 1
  cls_weight: 0
  poly_weight: 300
```
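To make sure I am reading the composition correctly, here is a small self-contained sketch of how I understand the weighting. The use of MSE and all the tensor shapes are my own assumptions, not your exact code:

```python
import torch
import torch.nn as nn

# Sketch of the weighted loss composition (assumed MSE for regression terms,
# BCE for confidence; shapes are made up). cls_weight is 0 in the config,
# so the cls term vanishes and is omitted here.
mse, bce = nn.MSELoss(), nn.BCELoss()

batch = 8
pred_polys, target_polys = torch.rand(batch, 4), torch.rand(batch, 4)
pred_uppers, target_uppers = torch.rand(batch, 1), torch.rand(batch, 1)
pred_lowers, target_lowers = torch.rand(batch, 1), torch.rand(batch, 1)
pred_confs, target_confs = torch.rand(batch, 1), torch.rand(batch, 1).round()

conf_weight, lower_weight, upper_weight, poly_weight = 1, 1, 1, 300

poly_loss = mse(pred_polys, target_polys) * poly_weight
lower_loss = mse(pred_lowers, target_lowers) * lower_weight
upper_loss = mse(pred_uppers, target_uppers) * upper_weight
conf_loss = bce(pred_confs, target_confs) * conf_weight

loss = conf_loss + lower_loss + upper_loss + poly_loss
print(loss)  # the poly term dominates because it is scaled by 300
```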

Here are your logged loss values:

```
[2020-04-03 21:41:01,371] [INFO] Epoch [1/2695], Step [1/227], Loss: 105.9413 (upper: 0.8436, lower: 0.1043, poly: 104.3011, conf: 0.6923), s/iter: 0.5786, lr: 3.0e-04
[2020-04-03 21:41:01,776] [INFO] Epoch [1/2695], Step [2/227], Loss: 83.7656 (upper: 0.7416, lower: 0.0899, poly: 60.0729, conf: 0.6855), s/iter: 0.4678, lr: 3.0e-04
[2020-04-03 21:41:02,159] [INFO] Epoch [1/2695], Step [3/227], Loss: 68.2684 (upper: 0.6549, lower: 0.0667, poly: 35.8984, conf: 0.6538), s/iter: 0.4277, lr: 3.0e-04
[2020-04-03 21:41:02,516] [INFO] Epoch [1/2695], Step [4/227], Loss: 57.8521 (upper: 0.5872, lower: 0.0476, poly: 25.3292, conf: 0.6394), s/iter: 0.4079, lr: 3.0e-04
```

With poly_weight: 300, how did you get the loss values above? Should it be poly_weight: 1?

lucastabelini commented 4 years ago

The poly loss value being printed has already been multiplied by poly_weight. If you sum all the loss components (upper + lower + poly + conf) you'll see that, for the first line, the sum equals 105.9413, as printed. That doesn't mean poly_weight equals 1; the printed value is just post-multiplication.
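For example, taking the numbers from the first logged step:

```python
# Partial losses from the first log line; they are already weighted,
# so they should sum to the printed total.
upper, lower, poly, conf = 0.8436, 0.1043, 104.3011, 0.6923
print(upper + lower + poly + conf)  # 105.9413, matching "Loss: 105.9413"
```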

LUUTHIENXUAN commented 4 years ago

My bad. I noticed the same thing after posting the comment.

lucastabelini commented 4 years ago

No problem :) Feel free to ask any more questions you have.