Hi Stefano,
I have a follow-up question regarding the crowdAI dataset. From some rough statistics I gathered, a building can have up to 262 vertices (usually buildings with curved walls). Did you do any preprocessing, e.g. simplifying the shapes to reduce the number of vertices? If the number of vertices is very large, wouldn't that be problematic for the positional refinement part? I assume positional refinement requires the predicted and ground-truth polygons to have the same number of vertices. If that assumption is wrong, how do you compute the angle loss between two polygons with different numbers of vertices?
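For reference, by "simplifying the shape" I mean something like Ramer-Douglas-Peucker simplification. Here is a minimal pure-Python sketch of what I have in mind; the tolerance value and the circular-footprint example are made up for illustration, not taken from your pipeline:

```python
import math

def point_segment_dist(p, a, b):
    """Distance from point p to the segment a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to [0, 1].
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    t = max(0.0, min(1.0, t))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def rdp(points, eps):
    """Ramer-Douglas-Peucker: drop vertices deviating < eps from the outline."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = point_segment_dist(points[i], a, b)
        if d > dmax:
            idx, dmax = i, d
    if dmax > eps:
        # Keep the farthest vertex and recurse on both halves.
        left = rdp(points[: idx + 1], eps)
        right = rdp(points[idx:], eps)
        return left[:-1] + right
    return [a, b]

# A curved wall approximated by 100 vertices on a circle of radius 10.
points = [(10 * math.cos(2 * math.pi * i / 100),
           10 * math.sin(2 * math.pi * i / 100)) for i in range(100)]
simplified = rdp(points, eps=0.5)
print(len(points), "->", len(simplified))
```

With a tolerance of 0.5 the curved outline keeps only a fraction of its original vertices, which is why I am wondering whether you apply this kind of step before the positional refinement.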
Thanks in advance!
Best, Yuanwen