uzh-rpg / E-RAFT


About training EV-FlowNet using DSEC dataset #1

Closed: HTLeoo closed this issue 2 years ago

HTLeoo commented 2 years ago

Hi Mathias, I noticed that you trained EV-FlowNet on DSEC for comparison, but I am not sure about the parameter settings. I would like to know whether the data settings are consistent with those used for training E-RAFT (e.g., voxel bins == 15 in loader-dsec.py).
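For reference, "voxel bins == 15" means the events between two flow timestamps are accumulated into a 15-channel voxel grid. Below is a minimal sketch of such a voxelization; the function name, sensor resolution, and polarity convention are assumptions for illustration, not the actual loader-dsec.py code.

```python
import numpy as np

def events_to_voxel_grid(x, y, t, p, num_bins=15, height=480, width=640):
    """Accumulate events into a (num_bins, H, W) voxel grid with bilinear
    interpolation along the time axis. Illustrative sketch only."""
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    # Normalize timestamps to the range [0, num_bins - 1].
    t = (t - t[0]) / max(t[-1] - t[0], 1e-9) * (num_bins - 1)
    pol = p.astype(np.float32) * 2.0 - 1.0  # map polarity {0, 1} -> {-1, +1}

    t0 = np.floor(t).astype(int)
    dt = t - t0
    # Split each event's contribution between its two neighboring time bins.
    for b, w in ((t0, 1.0 - dt), (np.clip(t0 + 1, 0, num_bins - 1), dt)):
        np.add.at(voxel, (b, y.astype(int), x.astype(int)), pol * w)
    return voxel
```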

By the way, since the GT flow for the test dataset is unavailable, I randomly split the training set for training and testing, and found that the domain gap caused by the day/night sequence difference is obvious. Does that influence the training results a lot?
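For context, the random split here is done at the sequence level, roughly as in the sketch below (the dataset root path, directory layout, and split ratio are assumptions):

```python
import random
from pathlib import Path

# Hypothetical DSEC training root with one folder per sequence.
dsec_train_root = Path('/data/dsec/train')
sequences = sorted(d.name for d in dsec_train_root.iterdir() if d.is_dir())

random.seed(0)
random.shuffle(sequences)
split = int(0.8 * len(sequences))        # 80/20 split is an assumption
train_seqs, val_seqs = sequences[:split], sequences[split:]
print('train:', train_seqs)
print('val:  ', val_seqs)
```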

magehrig commented 2 years ago

Hi @HTLeoo

EV-FlowNet: Yes, we used exactly the same setting.

Dataset split: We found that the night sequence leads to higher errors in the angular error metric, while the EPE metric was comparable to the EPE in day sequences. The EPE is mostly high for the fast-driving sequences, e.g. interlaken_00_b. For finding a good architecture and hyperparameters, a random train split is fine. It's important to do data augmentation because the dataset is not very large (at least random flipping and cropping). Before you submit, I would train on the full training set with a One-Cycle learning rate schedule and then take the last checkpoint to do a forward pass on the test samples. You can also save the last N checkpoints and submit the test samples with each checkpoint (it's ok to do this) to achieve the best possible result. I found out in later experiments that the One-Cycle learning rate schedule is quite helpful.
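A rough sketch of this recipe with standard PyTorch utilities is below; the stand-in model, crop size, learning rate, and epoch count are placeholders for illustration, not the settings used for the paper.

```python
import torch
from torch import nn
from torch.optim.lr_scheduler import OneCycleLR

def augment(voxel, flow):
    """Random horizontal flip + random crop applied jointly to the event
    voxel grid and the ground-truth flow (crop size is an assumption)."""
    if torch.rand(1).item() < 0.5:
        voxel = torch.flip(voxel, dims=[-1])
        flow = torch.flip(flow, dims=[-1])
        flow[:, 0] = -flow[:, 0]              # horizontal flip negates u
    ch, cw = 288, 384
    h, w = voxel.shape[-2:]
    top = torch.randint(0, h - ch + 1, (1,)).item()
    left = torch.randint(0, w - cw + 1, (1,)).item()
    return (voxel[..., top:top + ch, left:left + cw],
            flow[..., top:top + ch, left:left + cw])

# Tiny stand-in model and synthetic batches so the sketch runs end-to-end;
# replace with EV-FlowNet and the DSEC loader in practice.
model = nn.Conv2d(15, 2, kernel_size=3, padding=1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4, weight_decay=1e-5)

num_epochs, steps_per_epoch = 2, 4
scheduler = OneCycleLR(optimizer, max_lr=1e-4,
                       epochs=num_epochs, steps_per_epoch=steps_per_epoch)

for epoch in range(num_epochs):
    for _ in range(steps_per_epoch):
        voxel = torch.randn(2, 15, 480, 640)   # (B, bins, H, W) event input
        flow = torch.randn(2, 2, 480, 640)     # (B, 2, H, W) ground truth
        voxel, flow = augment(voxel, flow)
        loss = (model(voxel) - flow).abs().mean()   # simple L1 flow loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        scheduler.step()                       # OneCycleLR steps once per batch
```

Note that OneCycleLR is stepped after every batch, not every epoch, so the schedule length must match epochs * steps_per_epoch.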

HTLeoo commented 2 years ago

@magehrig Thanks a lot. I will try the One-Cycle learning rate schedule as well.