The tracker can be evaluated on video object segmentation similarly to how it is evaluated on the tracking problem. A few minor changes are needed:
Segmentation masks are saved to disk by the tracker automatically, but you have to enable this feature in the pytracking/parameters/default_params.py file:

```python
params.masks_save_path = 'save-masks-path'  # TODO: set the path to the directory where you want to save masks
params.save_mask = True
```
Initialize the tracker by giving it the binary segmentation mask:

```python
tracker.initialize(img, gt_polygon, init_mask=mask)
```

Here gt_polygon is the polygon obtained from the mask (by the min-max method, or you can use a more sophisticated approach).
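A minimal sketch of the min-max approach, assuming the mask is a binary NumPy array of shape (H, W) and that the tracker accepts an 8-value polygon (check the exact format your tracker version expects):

```python
import numpy as np

def mask_to_polygon(mask):
    """Axis-aligned polygon from a binary mask via the min-max method (illustrative helper)."""
    ys, xs = np.nonzero(mask)        # coordinates of all foreground pixels
    x0, x1 = xs.min(), xs.max()      # leftmost / rightmost column
    y0, y1 = ys.min(), ys.max()      # topmost / bottommost row
    # Corners listed clockwise from the top-left: [x1, y1, x2, y2, x3, y3, x4, y4]
    return [x0, y0, x1, y0, x1, y1, x0, y1]
```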
Set the name of the sequence after tracker initialization:

```python
tracker.sequence_name = 'sequence1'
```
Each time before calling the track function, make sure you give the tracker the name of the current frame, which will be used when saving the predicted segmentation mask, e.g.:

```python
tracker.frame_name = '%05d' % frame_index
```
Segmentation masks will be saved in the save-masks-path/sequence1/ directory, which you specified when creating the tracker.
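Putting the steps above together, here is a rough sketch of a driver loop for one DAVIS-style sequence. The paths, the get_tracker() helper, and mask_to_polygon() (from the sketch above) are assumptions for illustration, not part of pytracking; adapt them to your own setup:

```python
import os
import cv2
import numpy as np

sequence_name = 'sequence1'
frames_dir = '/path/to/DAVIS/JPEGImages/480p/' + sequence_name          # assumed layout
init_mask_path = '/path/to/DAVIS/Annotations/480p/' + sequence_name + '/00000.png'

frame_files = sorted(os.listdir(frames_dir))

# Create the tracker with your usual pytracking setup (hypothetical helper).
tracker = get_tracker()

# First frame: read the image and the ground-truth mask, build the init polygon.
img = cv2.cvtColor(cv2.imread(os.path.join(frames_dir, frame_files[0])), cv2.COLOR_BGR2RGB)
mask = (cv2.imread(init_mask_path, cv2.IMREAD_GRAYSCALE) > 0).astype(np.uint8)
gt_polygon = mask_to_polygon(mask)

tracker.initialize(img, gt_polygon, init_mask=mask)
tracker.sequence_name = sequence_name

# Remaining frames: set the frame name before every track call so the predicted
# mask is saved under save-masks-path/sequence1/<frame_name>.
for frame_index, frame_file in enumerate(frame_files[1:], start=1):
    img = cv2.cvtColor(cv2.imread(os.path.join(frames_dir, frame_file)), cv2.COLOR_BGR2RGB)
    tracker.frame_name = '%05d' % frame_index
    tracker.track(img)
```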
How can I get the init mask and polygon? Is the mask a binary matrix with the same shape as the frames?
I found your project very interesting. Your paper says that it performs tracking and segmentation. I tested it for tracking but could not run it for segmentation. Could you please help me run your project for video segmentation, for example on the DAVIS or YouTube-VOS datasets? I am grateful for your help.