Hi guys, I'm building my own dataset, but I'm having trouble creating my own annotations. I read an article on the official website saying that if more than 10 of the same object are packed closely together in one image, the annotation becomes similar to semantic segmentation, since the object instances are not individually identified. So can anyone tell me how semantic segmentation works on the COCO dataset? And how do I generate RLE data like {"counts": [66916, 6, 587, ..... 1, 114303], "size": [594, 640]}? I've searched a lot, and it looks like the COCO API can decode this, but nobody says where the RLE data comes from!
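For context on what that `counts` list actually encodes: COCO's uncompressed RLE flattens the binary mask in column-major (Fortran) order and stores run lengths, always starting with the number of 0-pixels (which is 0 if the mask begins with a 1). Here is a minimal sketch of producing that structure from a NumPy binary mask — `binary_mask_to_rle` is a hypothetical helper name, not part of the COCO API:

```python
import numpy as np

def binary_mask_to_rle(mask):
    """Encode a binary mask as uncompressed COCO-style RLE.

    COCO flattens the mask in column-major (Fortran) order, and the
    run-length list always begins with the count of 0s -- which is 0
    itself when the very first pixel is a 1.
    """
    h, w = mask.shape
    flat = mask.ravel(order="F")  # column-major flattening, as COCO expects
    counts = []
    prev = 0  # RLE starts by counting zeros
    run = 0
    for v in flat:
        if v == prev:
            run += 1
        else:
            counts.append(run)
            prev = v
            run = 1
    counts.append(run)
    return {"counts": counts, "size": [h, w]}
```

If I understand the cocoapi correctly, `pycocotools.mask.frPyObjects(rle, h, w)` can then convert this uncompressed form into the compressed string RLE that `pycocotools.mask.decode` consumes — but treat that as something to verify against the API docs.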