Dayan-Guan / DA-VSN

Code for "Domain Adaptive Video Segmentation via Temporal Consistency Regularization", ICCV 2021
MIT License

Optical flow for training #1

Open EDENpraseHAZARD opened 3 years ago

EDENpraseHAZARD commented 3 years ago

Thanks for your great work! I want to train DA-VSN, but I don't know how to get Estimated_optical_flow_Viper_train and Estimated_optical_flow_Cityscapes-Seq_train. I couldn't find details about the optical flow estimation in the readme or the paper.

Dayan-Guan commented 3 years ago

Hi @EDENpraseHAZARD, thank you for your interest in our work. Please follow the steps below to get optical flow:

  1. git clone -b sdcnet https://github.com/NVIDIA/semantic-segmentation.git;

  2. Download Code_for_Optical_Flow_Estimation.zip and unzip it into the sdcnet folder;

  3. Run the shell scripts to generate optical flow:

[1] Cityscapes validation set:
"python Cityscapes_val_optical_flow_scale512.py --pretrained ../pretrained_models/sdc_cityscapes_vrec.pth.tar --flownet2_checkpoint ../pretrained_models/FlowNet2_checkpoint.pth.tar --source_dir ../../data/Cityscapes --target_dir Cityscapes_val_optical_flow_scale512 --vis --resize 0.5"

[2] SynthiaSeq train set:
"python Estimated_optical_flow_SynthiaSeq_train.py --pretrained ../pretrained_models/sdc_cityscapes_vrec.pth.tar --flownet2_checkpoint ../pretrained_models/FlowNet2_checkpoint.pth.tar --source_dir ../../data/SynthiaSeq/SEQS-04-DAWN/rgb --target_dir Estimated_optical_flow_SynthiaSeq_train --vis --resize 0.533333"

[3] Viper train set:
"python Estimated_optical_flow_Viper_train.py --pretrained ../pretrained_models/sdc_cityscapes_vrec.pth.tar --flownet2_checkpoint ../pretrained_models/FlowNet2_checkpoint.pth.tar --source_dir ../../data/viper --target_dir /home/dayan/gdy/adv/snapshots/Estimated_optical_flow_Viper_train --vis --resize 0.533333"

(Please note that we only use FlowNet2 to estimate the optical flow; SDCNet is applied just to check that the estimated flow is correct.)
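For readers who want to consume the estimated flow afterwards: FlowNet2 implementations conventionally write flow in the Middlebury `.flo` binary format (a float32 magic value, width, height, then interleaved u/v displacements). Below is a minimal reader/writer sketch under that assumption; the function names `read_flo`/`write_flo` are hypothetical, and DA-VSN's own data loader may store flow differently (e.g. as `.npy` or `.pkl`), so check the downloaded files first.

```python
import numpy as np

FLO_MAGIC = 202021.25  # Middlebury .flo sanity-check value ("PIEH" as float32)

def read_flo(path):
    """Read a Middlebury .flo optical-flow file into an (H, W, 2) float32 array."""
    with open(path, "rb") as f:
        magic = np.fromfile(f, np.float32, count=1)[0]
        assert magic == FLO_MAGIC, f"{path} does not look like a .flo file"
        w = int(np.fromfile(f, np.int32, count=1)[0])
        h = int(np.fromfile(f, np.int32, count=1)[0])
        data = np.fromfile(f, np.float32, count=2 * w * h)
    return data.reshape(h, w, 2)  # last axis: (u, v) pixel displacement

def write_flo(path, flow):
    """Write an (H, W, 2) float32 flow array in the same format (useful for testing)."""
    h, w, _ = flow.shape
    with open(path, "wb") as f:
        np.array([FLO_MAGIC], np.float32).tofile(f)
        np.array([w, h], np.int32).tofile(f)
        flow.astype(np.float32).tofile(f)
```

The magic number guards against accidentally reading a file with the wrong endianness or format, which would otherwise produce silently garbage flow fields.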

EDENpraseHAZARD commented 3 years ago

And the Cityscapes training set also uses Cityscapes_val_optical_flow_scale512.py. Thank you.

EDENpraseHAZARD commented 3 years ago

Dear author, I have another two questions. 1. How many GPUs did you use for training? 2. How many iterations are needed for training? In config.py I found two parameters related to iterations, and I wonder which one determines the final result (screenshot of config.py attached). Also, I tried training Viper-to-Cityscapes-Seq on a single 1080Ti, and the total training time is about 96 hours under the 120000-iteration setting. That seems very long, so I would like to know the training details (screenshot attached).

Dayan-Guan commented 3 years ago

Hi @EDENpraseHAZARD, the training details are the same as ADVENT (https://github.com/valeoai/ADVENT). For example, the total training iteration is 120k.
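The two iteration parameters the question refers to presumably mirror ADVENT's pair of settings: a schedule horizon (`MAX_ITERS`, used by the polynomial learning-rate decay) and an early-stop point (`EARLY_STOP`, where training actually halts, i.e. the 120k mentioned above). The names and values below are taken from ADVENT's config style and are an assumption for DA-VSN's config.py; this is a sketch of the relationship, not the project's actual code.

```python
# Assumed ADVENT-style settings (verify against DA-VSN's config.py):
MAX_ITERS = 250000   # horizon used by the polynomial LR schedule
EARLY_STOP = 120000  # iteration at which training actually stops
BASE_LR = 2.5e-4     # ADVENT's default segmentation learning rate
POWER = 0.9

def poly_lr(i_iter, base_lr=BASE_LR, max_iters=MAX_ITERS, power=POWER):
    """DeepLab/ADVENT-style polynomial learning-rate decay."""
    return base_lr * (1 - i_iter / max_iters) ** power

# Because training halts at EARLY_STOP < MAX_ITERS, the learning rate
# never decays all the way to zero:
lr_start = poly_lr(0)
lr_end = poly_lr(EARLY_STOP - 1)
```

Under this reading, `EARLY_STOP` is the parameter that determines the reported result, while `MAX_ITERS` only shapes the decay curve.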

EDENpraseHAZARD commented 2 years ago

Thanks!

EDENpraseHAZARD commented 2 years ago

I used the following shell script for the SynthiaSeq train set, but got an error: "python Estimated_optical_flow_SynthiaSeq_train.py --pretrained ../pretrained_models/sdc_cityscapes_vrec.pth.tar --flownet2_checkpoint ../pretrained_models/FlowNet2_checkpoint.pth.tar --source_dir ../../data/SynthiaSeq/SEQS-04-DAWN/rgb --target_dir Estimated_optical_flow_SynthiaSeq_train --vis --resize 0.533333"


I can't get the SynthiaSeq optical flow.

ldkong1205 commented 2 years ago

@Dayan-Guan Thank you for open-sourcing your work!

I am trying to reproduce your results. Is it convenient for you to kindly provide the processed estimated_optical_flow for the three datasets used in your paper? Thank you!

Dayan-Guan commented 2 years ago

The estimated optical flow of all datasets can be accessed via the link below: https://drive.google.com/drive/folders/1i_-yw9rS7-aa7Cn5ilIMbkUKwr1JpUFA?usp=sharing

Dayan-Guan commented 2 years ago

Hi @ldkong1205 @EDENpraseHAZARD , the code of TPS [ECCV 2022] is available here. TPS is 3x faster than DA-VSN during training and notably surpasses DA-VSN during testing.

ldkong1205 commented 2 years ago

> Hi @ldkong1205 @EDENpraseHAZARD , the code of TPS [ECCV 2022] is available here. TPS is 3x faster than DA-VSN during training and notably surpasses DA-VSN during testing.

Hi Dayan, congrats and thanks for your update 🎉

ZHE-SAPI commented 1 year ago

> I used the following shell script for the SynthiaSeq train set, but got an error: "python Estimated_optical_flow_SynthiaSeq_train.py --pretrained ../pretrained_models/sdc_cityscapes_vrec.pth.tar --flownet2_checkpoint ../pretrained_models/FlowNet2_checkpoint.pth.tar --source_dir ../../data/SynthiaSeq/SEQS-04-DAWN/rgb --target_dir Estimated_optical_flow_SynthiaSeq_train --vis --resize 0.533333"
>
> I can't get the SynthiaSeq optical flow.

Hello, have you solved this problem?