Thanks for your interest in our work. To test the model on your own video, please take the following steps to set up your video dataset and evaluate our model on it:
- Set up the video dataset with the following file structure:

```
/video_dataset            # your video dataset path
|-- JPEGImages
    |-- video_name        # the name of the video sequence
        |-- 00000.jpg
        |-- ...
|-- Annotations           # remove if no GT annotations are provided
    |-- video_name
        |-- 00000.png
        |-- ...
|-- Flows_gap1
    |-- video_name
        |-- 00000.flo
        |-- ...
|-- Flows_gap-1
    |-- video_name
        |-- 00000.flo
        |-- ...
```
  - The RGB frames of the video should be extracted and saved in the .jpg format (see the extraction sketch after this list).
  - The optical flows should be saved in the .flo format; they can be obtained by following the guidance in the `flow` folder and running the script `run_inference.py` (a .flo sanity-check sketch also follows the list).
- Run the flow-based OCLR model
  - Follow the steps provided in the subsection "To set up your own data" (in the Dataset preparation section). This involves configuring your data directory and dataset name in `config.py` and `eval.py`. If you have GT annotations for your video, please also consider adding the colour palette information in `data/colour_palette.json`. If not, set `val_gt_dir = None` in your dataset information in `config.py` (a hypothetical path sketch follows the list).
  - Run the evaluation code `eval.py` on your own video dataset by following the guidance in the Inference section.
- Run the test-time adaptation
  - Similarly, set up your own dataset information in `dino/eval_adaptation.py`, and follow the test-time adaptation guidance in the `dino` folder.
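To extract the frames into the layout above, something like the following OpenCV sketch works. This script is not part of the repo, and the video file name and paths in the last line are placeholders:

```python
# Minimal sketch (not from the repo): extract RGB frames from a video into
# the JPEGImages layout described above, using OpenCV.
import os
import cv2

def extract_frames(video_path, dataset_root, video_name):
    out_dir = os.path.join(dataset_root, "JPEGImages", video_name)
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Frames are numbered 00000.jpg, 00001.jpg, ... to match the structure.
        cv2.imwrite(os.path.join(out_dir, f"{idx:05d}.jpg"), frame)
        idx += 1
    cap.release()

extract_frames("my_video.mp4", "/video_dataset", "video_name")  # placeholder paths
```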
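To sanity-check the generated flow files, here is a small reader for the standard Middlebury .flo format (the path in the usage line is a placeholder):

```python
# Read a standard Middlebury .flo file: a float32 magic number 202021.25,
# then int32 width and height, then interleaved float32 (u, v) values.
import numpy as np

def read_flo(path):
    with open(path, "rb") as f:
        magic = np.fromfile(f, np.float32, count=1)[0]
        assert magic == 202021.25, "Invalid .flo file"
        w = int(np.fromfile(f, np.int32, count=1)[0])
        h = int(np.fromfile(f, np.int32, count=1)[0])
        data = np.fromfile(f, np.float32, count=2 * w * h)
    return data.reshape(h, w, 2)  # (H, W, 2) array of (u, v) flow vectors

flow = read_flo("/video_dataset/Flows_gap1/video_name/00000.flo")  # placeholder path
print(flow.shape)
```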
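For the dataset information in `config.py`, the paths simply follow the structure above. The snippet below is only a hypothetical sketch of the values involved; apart from `val_gt_dir`, the variable names are assumptions, so mirror an existing dataset entry in the repo:

```python
# Hypothetical sketch of the dataset paths for config.py -- only val_gt_dir
# is named in the instructions above; the other variable names are assumed.
basepath = "/video_dataset"
img_dir = basepath + "/JPEGImages"
flow_dirs = {
    "gap1": basepath + "/Flows_gap1",
    "gap-1": basepath + "/Flows_gap-1",
}
val_gt_dir = basepath + "/Annotations"  # set val_gt_dir = None if no GT annotations
```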
In https://github.com/Jyxarthur/OCLR_model/blob/main/data/dataloader.py#L105, for the img_dir, maybe the index of `self.data_dir[2]` should be changed to `1`?
Thanks for pointing this out. I have made corresponding updates.
How can I write demo code that takes a video and segments the moving objects in it?