EDENpraseHAZARD opened 3 years ago
Hi @EDENpraseHAZARD, thank you for your interest in our work. Please follow the steps below to get optical flow:
git clone -b sdcnet https://github.com/NVIDIA/semantic-segmentation.git;
Download Code_for_Optical_Flow_Estimation.zip and unzip the files into the sdcnet folder;
Run the shell scripts to generate optical flow:
[1] Cityscapes validation set: "python Cityscapes_val_optical_flow_scale512.py --pretrained ../pretrained_models/sdc_cityscapes_vrec.pth.tar --flownet2_checkpoint ../pretrained_models/FlowNet2_checkpoint.pth.tar --source_dir ../../data/Cityscapes --target_dir Cityscapes_val_optical_flow_scale512 --vis --resize 0.5"
[2] SynthiaSeq train set: "python Estimated_optical_flow_SynthiaSeq_train.py --pretrained ../pretrained_models/sdc_cityscapes_vrec.pth.tar --flownet2_checkpoint ../pretrained_models/FlowNet2_checkpoint.pth.tar --source_dir ../../data/SynthiaSeq/SEQS-04-DAWN/rgb --target_dir Estimated_optical_flow_SynthiaSeq_train --vis --resize 0.533333"
[3] Viper train set: "python Estimated_optical_flow_Viper_train.py --pretrained ../pretrained_models/sdc_cityscapes_vrec.pth.tar --flownet2_checkpoint ../pretrained_models/FlowNet2_checkpoint.pth.tar --source_dir ../../data/viper --target_dir /home/dayan/gdy/adv/snapshots/Estimated_optical_flow_Viper_train --vis --resize 0.533333"
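If the scripts above save the estimated flow in the Middlebury .flo format (an assumption on my part; the scripts may instead emit .npy files or visualization images when --vis is set, so please check their output), a minimal reader/writer for inspecting the results looks like this:

```python
import numpy as np

FLO_MAGIC = 202021.25  # Middlebury .flo sanity-check value (exactly representable in float32)

def read_flo(path):
    """Read a Middlebury .flo optical-flow file into an (H, W, 2) float32 array."""
    with open(path, "rb") as f:
        magic = np.fromfile(f, np.float32, count=1)[0]
        if magic != FLO_MAGIC:
            raise ValueError(f"{path} is not a valid .flo file (magic={magic})")
        w = int(np.fromfile(f, np.int32, count=1)[0])
        h = int(np.fromfile(f, np.int32, count=1)[0])
        data = np.fromfile(f, np.float32, count=2 * w * h)
    # Channels are (u, v) displacement per pixel.
    return data.reshape(h, w, 2)

def write_flo(path, flow):
    """Write an (H, W, 2) float32 flow array in .flo format (handy for round-trip tests)."""
    h, w, _ = flow.shape
    with open(path, "wb") as f:
        np.array([FLO_MAGIC], np.float32).tofile(f)
        np.array([w, h], np.int32).tofile(f)
        flow.astype(np.float32).tofile(f)
```

This is only a sketch for sanity-checking outputs; `read_flo`/`write_flo` are hypothetical helper names, not part of the sdcnet codebase.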
(Please note that we only use FlowNet2 to estimate the optical flow; SDCNet is only applied to check whether the estimated flow is correct.)
The Cityscapes training set also uses Cityscapes_val_optical_flow_scale512.py. Thank you.
Dear author, I have another two questions. 1. How many GPUs did you use for training? 2. How many iterations are needed for training? In config.py, I found two parameters related to the number of iterations, and I wonder which one leads to the final result. Also, I tried to train Viper2Citys on a single 1080Ti, and the total training time was about 96 h under the 120,000-iteration setting. The training time seems very long, so I would like to know the training details.
Hi @EDENpraseHAZARD, the training details are the same as in ADVENT (https://github.com/valeoai/ADVENT). For example, the total number of training iterations is 120k.
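For what it's worth, the 96 h / 120k-iteration figure mentioned above works out to roughly 2.9 s per iteration, so a multi-day run on a single 1080Ti is consistent with that setting (just arithmetic on the numbers in this thread, not an official figure):

```python
# Back-of-the-envelope check of the reported training time
# (numbers taken from the thread above; actual throughput varies by hardware).
total_hours = 96
total_iters = 120_000

seconds_per_iter = total_hours * 3600 / total_iters
print(f"{seconds_per_iter:.2f} s/iter")  # → 2.88 s/iter
```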
Thanks!
I used the following shell script, but got an error. [2] SynthiaSeq train set: "python Estimated_optical_flow_SynthiaSeq_train.py --pretrained ../pretrained_models/sdc_cityscapes_vrec.pth.tar --flownet2_checkpoint ../pretrained_models/FlowNet2_checkpoint.pth.tar --source_dir ../../data/SynthiaSeq/SEQS-04-DAWN/rgb --target_dir Estimated_optical_flow_SynthiaSeq_train --vis --resize 0.533333"
I can't get the SynthiaSeq flow.
@Dayan-Guan Thank you for open-sourcing your work!
I am trying to reproduce your results. Could you kindly provide the processed estimated optical flow for the three datasets used in your paper? Thank you!
The estimated optical flow of all datasets can be accessed via the link below: https://drive.google.com/drive/folders/1i_-yw9rS7-aa7Cn5ilIMbkUKwr1JpUFA?usp=sharing
Hi @ldkong1205 @EDENpraseHAZARD , the code of TPS [ECCV 2022] is available here. TPS is 3x faster than DA-VSN during training and notably surpasses DA-VSN during testing.
Hi Dayan, congrats and thanks for your update 🎉
I used the following shell script, but got an error. [2] SynthiaSeq train set: "python Estimated_optical_flow_SynthiaSeq_train.py --pretrained ../pretrained_models/sdc_cityscapes_vrec.pth.tar --flownet2_checkpoint ../pretrained_models/FlowNet2_checkpoint.pth.tar --source_dir ../../data/SynthiaSeq/SEQS-04-DAWN/rgb --target_dir Estimated_optical_flow_SynthiaSeq_train --vis --resize 0.533333"
I can't get the SynthiaSeq flow.
Hello, have you solved this problem?
Thanks for your great work! I want to train DA-VSN, but I don't know how to get Estimated_optical_flow_Viper_train or Estimated_optical_flow_Cityscapes-Seq_train. I didn't find any details about the optical flow in the README or the paper.