Aricling opened this issue 1 year ago
Hi~
For the VideoMatting108 dataset, you have to download `frame_corr.json` and the other metadata from their drive (the same place as the FG/BG data). `frame_corr.json` provides the FG/BG pairs.
The expected file structure is:

```
VideoMatting108
  BG_done
    (Video 0)
    (Video 1)
    ...
  FG_done
    ...
  train_videos.txt
  val_videos.txt
  frame_corr.json
```
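If you want to sanity-check the layout before running training or inference, a minimal sketch like the following can help. The root path and the `check_layout` helper are hypothetical, not part of the repo; adjust the path to wherever you extracted the dataset.

```python
from pathlib import Path

def check_layout(root):
    """Return the expected dataset entries missing under `root`."""
    expected = ["BG_done", "FG_done", "train_videos.txt",
                "val_videos.txt", "frame_corr.json"]
    return [name for name in expected if not (Path(root) / name).exists()]

# Hypothetical path -- point this at your extracted dataset root.
missing = check_layout("VideoMatting108")
if missing:
    print("Missing entries:", missing)
else:
    print("Dataset layout looks complete.")
```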
Okay, thank you, I solved it :) Also, I saw in your paper that "Youtube-VIS [52] is adopted to train the trimap segmentation", but I didn't find the code that generates the trimap segmentation in this repo. Could you please give me some hints on how to generate trimaps from a video? Thank you! :)
The trimap ground truth is generated at https://github.com/csvt32745/FTP-VM/blob/main/dataset/youtubevis.py#L103. The unknown (gray) region comes from the difference between the dilation and the erosion of the masks. Since the segmentation masks are imperfect as trimap supervision, the training on YT-VIS stops a bit earlier.
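The dilation-minus-erosion idea can be sketched as follows. This is a minimal illustration using SciPy's morphology ops, not the repo's exact code; the `iters` parameter controlling the width of the unknown band is an assumption for this sketch.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def mask_to_trimap(mask, iters=5):
    """Turn a binary segmentation mask into a trimap.

    The unknown (gray) band is the region covered by the dilated mask
    but not by the eroded mask, i.e. dilation - erosion. `iters` is a
    hypothetical knob: more iterations give a wider unknown band.
    """
    mask = mask > 0
    dilated = binary_dilation(mask, iterations=iters)
    eroded = binary_erosion(mask, iterations=iters)
    trimap = np.zeros(mask.shape, dtype=np.uint8)  # background = 0
    trimap[dilated] = 128                          # unknown = 128
    trimap[eroded] = 255                           # foreground = 255
    return trimap
```

Note that since the unknown band is derived from an imperfect segmentation mask, it only approximates the true mixed-pixel region, which matches the caveat above about stopping YT-VIS training early.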
I did all the steps described in the tutorial, but when running model inference (inference_dataset.py) I got the error illustrated above. Has anyone else encountered the same issue? I don't know where frame_corr.json is.