Closed: yuzhaoluone closed this issue 1 year ago
Thanks for the questions. The vector-format GT labels are already provided by our dataset. Sorry, I cannot provide the relevant data processing code.
But from my perspective, this should not be too hard a problem. You could (1) extract the skeleton of the binary segmentation label, then (2) find intersection points (skeleton pixels with more than 2 neighbor pixels) and end points (skeleton pixels with exactly 1 neighbor pixel). In this way you can build the graph of the binary map. This idea should not be hard to implement. You could give it a try.
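The two steps above can be sketched with NumPy and scikit-image. This is only a minimal illustration, not the authors' code; the function name `find_pivot_points` and the neighbor-counting kernel are my own assumptions:

```python
# Minimal sketch: skeletonize a binary road mask, then classify skeleton
# pixels by 8-connected neighbor count (1 neighbor = end point,
# >2 neighbors = intersection). Illustrative only, not the authors' code.
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def find_pivot_points(binary_mask):
    """Return the skeleton plus its end points and intersection points."""
    skeleton = skeletonize(binary_mask > 0)
    # Count the 8-connected skeleton neighbors of every pixel.
    kernel = np.array([[1, 1, 1],
                       [1, 0, 1],
                       [1, 1, 1]])
    neighbor_count = convolve(skeleton.astype(np.uint8), kernel, mode="constant")
    # End points have exactly 1 neighbor; intersections have more than 2.
    end_points = np.argwhere(skeleton & (neighbor_count == 1))
    intersections = np.argwhere(skeleton & (neighbor_count > 2))
    return skeleton, end_points, intersections
```

Note that pixels immediately adjacent to a crossing can also have more than 2 neighbors, so you may want to cluster nearby intersection candidates into a single graph node afterwards.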
Thank you very much for your reply. I will try it following your suggestion.
Hi @TonyXuQAQ, I would appreciate it if you could help me prepare my own training dataset. I tried to follow your suggestion, but I am facing two issues. (1) When extracting the skeleton from the binary segmentation image, I cannot get straight lines like those in your ground truth dataset. (2) How can I add extra nodes between two pivot points as in your ground truth image (*_gt_rgb.png)? Right now I only get intersection points and dead-end points.
Thanks for the questions.
Those ground truth labels are provided by the raw dataset (released by sat2graph), and I do not know the exact way they were generated. If there are overlapping roads in your raw binary images (e.g., overpasses or bridges), it would be difficult to generate the labels needed by our method.
For your questions, based on my own experience: after obtaining the skeleton, you could first find the pivot points (i.e., intersection points and dead-end points), and then sample extra points between every two pivot points at an interval of several pixels along the skeleton. In this way, you can get a vector image similar to (*_gt_rgb.png). You could share some example visualizations if you run into further issues.
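The sampling step above can be sketched as a walk along the skeleton between pivot points, keeping a vertex every `step` pixels. The function name, the polyline output format, and the simplistic handling of branches are my own assumptions, not the authors' pipeline:

```python
# Hedged sketch: trace skeleton paths between pivot points and sample a
# vertex every `step` pixels. Illustrative only; near intersections with
# diagonal ambiguities you would need more careful neighbor selection.
import numpy as np

NEIGHBOR_OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                    (0, 1), (1, -1), (1, 0), (1, 1)]

def _on(skel, p):
    """True if p is inside the image and on the skeleton."""
    r, c = p
    return 0 <= r < skel.shape[0] and 0 <= c < skel.shape[1] and skel[r, c]

def trace_edges(skeleton, pivots, step=10):
    """Return polylines (lists of (row, col)) connecting pivot points."""
    skeleton = skeleton.astype(bool)
    pivot_set = {tuple(p) for p in pivots}
    visited = set()   # directed pixel-to-pixel steps already walked
    polylines = []
    for start in pivot_set:
        for dr, dc in NEIGHBOR_OFFSETS:
            cur = (start[0] + dr, start[1] + dc)
            if not _on(skeleton, cur) or (start, cur) in visited:
                continue
            path, prev = [start], start
            while True:
                visited.add((prev, cur))
                visited.add((cur, prev))
                path.append(cur)
                if cur in pivot_set:
                    break
                nxt = [(cur[0] + a, cur[1] + b) for a, b in NEIGHBOR_OFFSETS
                       if _on(skeleton, (cur[0] + a, cur[1] + b))
                       and (cur[0] + a, cur[1] + b) != prev]
                if not nxt:
                    break
                prev, cur = cur, nxt[0]
            # Keep the endpoints plus every `step`-th point in between.
            sampled = path[::step]
            if sampled[-1] != path[-1]:
                sampled.append(path[-1])
            polylines.append(sampled)
    return polylines
```

Each resulting polyline is one graph edge whose interior vertices are the extra sampled points; together with the pivot points they give a vector representation similar to the (*_gt_rgb.png) visualization.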
Thank you for your support. I was finally able to generate our own dataset. I'll close this issue.
Thank you very much for sharing the source code. It is great work. I currently want to train the RNGDet++ model on my own dataset, but my ground-truth data is a road binary map in segmentation format, and I would like to know how to convert it to graph format. Could you please share the relevant data processing code? It would enable me to create the sample points and the refined ground truth graphs (_refine_gt_graph.p) from a binary image. Thanks.