xyzskysea opened this issue 2 years ago
The YouTube-Objects dataset has already been uploaded to Google Drive.
In YO2SEG, the 'Annotations' folder is the ground truth and the 'mask' folder contains the results. Right?
'Annotations' is the GT stored with values {0, 255}; 'mask' is also the GT, stored with values {0, 1}.
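Not part of the original exchange, but for anyone comparing the two folders, here is a minimal sketch (the file path is hypothetical) of converting the {0, 255} 'Annotations' masks to the {0, 1} convention used by 'mask':

```python
import numpy as np
from PIL import Image

# Hypothetical path, used only for illustration.
ann = np.array(Image.open("Annotations/aeroplane0001/00001.png"))

# 'Annotations' stores foreground as 255; threshold to obtain the
# {0, 1} convention used by the 'mask' folder.
binary = (ann > 127).astype(np.uint8)
print(np.unique(binary))  # expected: [0 1]
```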
OK, I tried that, but I found that the size of aeroplane0001/00001.png is 854x480, while in the HGPU results it is 480x360. I also found that the AGNN results are 480x360. So which are the correct images and ground truth? It's strange.
Both are correct: 480x360 is the original size, and 854x480 is the size after we resized the frames.
But when I tried to evaluate with PyDavis16EvalToolbox, it reports that the sizes are not consistent. What should I do to reproduce the results in your paper?
The evaluation toolkit is available at davis-matlab.
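If you want to keep using PyDavis16EvalToolbox instead of davis-matlab, one possible workaround (my own assumption, not the authors' protocol) is to resize the predicted masks to the ground-truth resolution with nearest-neighbour interpolation before evaluating. The directory layout below is hypothetical:

```python
import os
from PIL import Image

# Hypothetical directory layout; adjust to your own paths.
gt_dir = "Annotations/aeroplane0001"
pred_dir = "results/aeroplane0001"
out_dir = "results_resized/aeroplane0001"
os.makedirs(out_dir, exist_ok=True)

for name in sorted(os.listdir(pred_dir)):
    gt = Image.open(os.path.join(gt_dir, name))
    pred = Image.open(os.path.join(pred_dir, name))
    # Nearest-neighbour interpolation keeps the mask binary after resizing.
    pred = pred.resize(gt.size, resample=Image.NEAREST)
    pred.save(os.path.join(out_dir, name))
```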
@xyzskysea Were you able to test on the YouTube-Objects dataset and reproduce the results reported in the paper?
The YouTube-Objects dataset has already been uploaded to Google Drive.
Why does the YouTube-VOS data only contain the val split and not the train split?
How do you evaluate on the YouTube-Objects and Long-Videos datasets, and where can I download the corresponding datasets with ground truth? Could you provide the results on these two datasets?
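For what it's worth, until the official per-dataset evaluation scripts are pointed to, a rough sanity check is the mean Jaccard (IoU) over frames, computed directly from the binary masks. This is only a sketch under my own assumptions about the file layout and the {0, 255} / {0, 1} conventions discussed above, not the paper's official protocol:

```python
import os
import numpy as np
from PIL import Image

def frame_iou(gt_path, pred_path):
    """Jaccard index between one ground-truth mask and one predicted mask."""
    gt = np.array(Image.open(gt_path)) > 127     # handles {0, 255} masks
    pred = np.array(Image.open(pred_path)) > 0   # handles {0, 1} masks
    inter = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    return 1.0 if union == 0 else inter / union

# Hypothetical layout: one sub-folder per sequence in both directories,
# with predictions already resized to the ground-truth resolution.
gt_root, pred_root = "Annotations", "results_resized"
ious = []
for seq in sorted(os.listdir(gt_root)):
    for name in sorted(os.listdir(os.path.join(gt_root, seq))):
        ious.append(frame_iou(os.path.join(gt_root, seq, name),
                              os.path.join(pred_root, seq, name)))
print("mean J over all frames:", np.mean(ious))
```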