Closed: amlankar closed this issue 6 years ago
Hi Amlan,
May I ask how the first point is trained and generated, i.e., how does the first point network generate an arbitrary starting point? I believe this isn't clear from either the CVPR 2017 paper or the 2018 Polygon-RNN++ paper.
I see, sorry about that.
The first point is generated by a separate network. This network takes the image features and tries to predict all the vertex pixels of the polygon (as a binary classification task), as well as all the edge pixels (again as binary classification). These two predictions are made by two separate heads.
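For concreteness, a minimal sketch of how such a two-head setup could look (illustrative PyTorch only, not the released code; the class name and channel sizes are made up):

```python
import torch.nn as nn

class FirstVertexNet(nn.Module):
    # Hypothetical two-head first point network, for illustration only.
    def __init__(self, feat_channels=128):
        super().__init__()
        # Each head maps the shared image features to a per-pixel logit map.
        self.edge_head = nn.Conv2d(feat_channels, 1, kernel_size=3, padding=1)
        self.vertex_head = nn.Conv2d(feat_channels, 1, kernel_size=3, padding=1)

    def forward(self, image_features):
        # image_features: (B, C, H, W) from the CNN encoder
        edge_logits = self.edge_head(image_features)      # (B, 1, H, W)
        vertex_logits = self.vertex_head(image_features)  # (B, 1, H, W)
        return edge_logits, vertex_logits
```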
This is trained with simple binary cross-entropy, given ground-truth vertex and edge masks for the vertex and edge heads. The first vertex passed to the LSTM at train time is the first vertex of the ground-truth polygon it is being trained against.
At test time and during RL training, we sample one vertex position from the predicted logits of the vertex head and pass that to the LSTM as the starting point for the polygon.
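As an illustration of both steps, a rough sketch (again hypothetical PyTorch, not the released code; the masks are assumed to be float tensors in {0, 1} with the same shape as the logits):

```python
import torch
import torch.nn.functional as F

def first_point_loss(edge_logits, vertex_logits, edge_mask, vertex_mask):
    # Plain binary cross-entropy against the ground-truth edge/vertex masks.
    edge_loss = F.binary_cross_entropy_with_logits(edge_logits, edge_mask)
    vertex_loss = F.binary_cross_entropy_with_logits(vertex_logits, vertex_mask)
    return edge_loss + vertex_loss

def sample_first_vertex(vertex_logits):
    # Treat the vertex logit map as a categorical distribution over all
    # pixels and draw one starting position from it.
    b, _, h, w = vertex_logits.shape
    probs = torch.softmax(vertex_logits.view(b, -1), dim=-1)
    idx = torch.multinomial(probs, num_samples=1).squeeze(-1)  # (B,)
    return idx // w, idx % w  # (row, col) of the sampled first vertex
```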
I hope this helps!
That's really helpful, especially with regard to the output and training of the first point network.
To clarify things a little, I'm assuming the vertex head depends on the output of the edge head, based on the 2017 paper: "One branch predicts object boundaries while the other takes as input the output of the boundary-predicting layer as well as the image features and predicts the vertices of the polygon."
Re LSTM training, I believe the paper did not indicate that the first GT polygon vertex was arbitrarily chosen as the first LSTM input vertex during training (instead of sampling from the model prediction). EDIT: I noticed that feeding the GT vertex during LSTM training is only mentioned briefly in Section 3.3 of the 2018 paper.
Thanks
That is correct; we also found that using two fully separate heads works well, so it's fine whichever way you implement it.
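For reference, the chained variant described in the 2017 paper could look roughly like this (illustrative only; feeding the edge head's sigmoid output into the vertex head is one plausible wiring, and the channel sizes are assumptions):

```python
import torch
import torch.nn as nn

class ChainedFirstVertexNet(nn.Module):
    # Hypothetical variant where the vertex head sees the edge prediction.
    def __init__(self, feat_channels=128):
        super().__init__()
        self.edge_head = nn.Conv2d(feat_channels, 1, kernel_size=3, padding=1)
        # The vertex head takes the image features plus the edge output.
        self.vertex_head = nn.Conv2d(feat_channels + 1, 1, kernel_size=3, padding=1)

    def forward(self, image_features):
        edge_logits = self.edge_head(image_features)
        vertex_in = torch.cat([image_features, torch.sigmoid(edge_logits)], dim=1)
        vertex_logits = self.vertex_head(vertex_in)
        return edge_logits, vertex_logits
```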
For the LSTM training, it seemed to us like an obvious form of data augmentation, so we may not have mentioned it in the main paper due to space constraints. I agree that we could have included a lot more information, which we hope to make clear with a training code release soon.
Hi, sorry for the late reply; I couldn't get a GPU to test the code until now. Given the bounding box, the result is {'motorcycle': 0.6401435543456829, 'bicycle': 0.6027971853228983, 'truck': 0.7875317978551573, 'train': 0.7095660575472281, 'car': 0.8150176570258733, 'person': 0.6011296154919364, 'bus': 0.8266491499291869, 'rider': 0.5957857106974498}. The given bounding box is taken to be 15% larger than the ground-truth bounding box. I don't know whether we used the same preprocessing and the same criteria. I am also glad to hear you have released your training code. Can't wait to see your implementation.
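For concreteness, one way to obtain such an expanded box, assuming "15% larger" means scaling the width and height by 1.15 about the box center (the exact convention is an assumption):

```python
def expand_box(x0, y0, x1, y1, factor=0.15):
    # Grow an (x0, y0, x1, y1) box by `factor`, keeping its center fixed.
    dx = 0.5 * factor * (x1 - x0)
    dy = 0.5 * factor * (y1 - y0)
    return x0 - dx, y0 - dy, x1 + dx, y1 + dy
```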
The results look great! Did you evaluate at 28x28 or at full resolution? We do it at full resolution, and there is usually a drop in IoU at full resolution because you can only do so much while predicting at 28x28 (one reason we later used a graph network to be able to predict at higher resolution).
I do evaluate at full resolution, though instead of using the upper-left point to represent each 8x8 block, I use the center point.
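Concretely, the mapping I mean is something like the following (a sketch; the grid and crop sizes are assumptions, and the +0.5 offset is what moves each prediction from a cell's upper-left corner to its center):

```python
def grid_to_fullres(row, col, crop_h, crop_w, grid=28):
    # Map a cell of the grid x grid prediction back to full-resolution
    # coordinates, using the cell center rather than its upper-left corner.
    cell_h = crop_h / grid
    cell_w = crop_w / grid
    y = (row + 0.5) * cell_h
    x = (col + 0.5) * cell_w
    return x, y
```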
Thanks! In the paper, we prepare the data by also removing occlusions from the masks -- the polygons provided directly in Cityscapes may or may not have occlusions removed, since Cityscapes actually resolves occlusion via depth ordering while preparing the pixel-wise instance/semantic maps.
I have a feeling this is why your numbers are so much higher; it'd be great to see what this repository gets using our pre-processed data! (It is included in our release.)
Thanks for the great work!
Hi Alex!
Thanks for trying this out. One of the reasons we have a first point network in our model, instead of using any first point in the RNN (via a start token), is that a polygon is circularly symmetric (which is not the case for language), so the first point is not a well-defined object.
It'd be interesting to know what scores you got on Cityscapes, since I saw you said in another issue that it was better than the Polygon-RNN from CVPR 2017.
Thanks, Amlan