Closed: leogogogo closed this issue 4 years ago
Hi @leogogogo,
Your result looks more like direct fine-tuning without AR. I wonder which version of the code you used.
Due to my carelessness when I cleaned up and published this code, training with AR had a bug, and I updated the training code a few days ago. Please check your code and let me know whether you are using the latest version.
Hi @lliuz Thank you for your quick answer; the code was from a few days ago. I'll try the latest version and keep updating my results.
BTW, when I was training, I did not pay much attention to the training loss, since the unsupervised loss is very noisy per sample. You can just focus on the validation results to decide when to stop training.
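The advice above (ignore the noisy per-sample unsupervised loss, stop based on validation EPE) can be sketched as a small patience-based early-stopping helper. This is only an illustration, not part of the ARFlow codebase; the class name, the `patience` default, and the assumption that `validate()` returns an EPE where lower is better are all hypothetical.

```python
class EarlyStopper:
    """Stop training when validation EPE has not improved for
    `patience` consecutive epochs.

    A minimal sketch; ARFlow does not ship this helper. The interface
    and defaults here are assumptions for illustration only.
    """

    def __init__(self, patience=10):
        self.patience = patience
        self.best = float("inf")   # best validation EPE seen so far
        self.bad_epochs = 0        # epochs since the last improvement

    def step(self, val_epe):
        """Record one epoch's validation EPE; return True to stop."""
        if val_epe < self.best:
            self.best = val_epe    # improvement: remember it and reset
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1   # no improvement this epoch
        return self.bad_epochs >= self.patience
```

In a training loop one would call `stopper.step(val_epe)` after each validation pass and break out of the loop when it returns True, regardless of what the (noisy) training loss is doing.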
Besides, you can email me if you have any other questions.
Got it, thank you for your help and hints!
Hi, I read your paper and it's very impressive; thank you for sharing your code. I've been trying your code recently, without digging into the details much yet, just plainly trying to reproduce the fine-tuning on the Sintel dataset. The loss doesn't go down and stays around 0.7, and the evaluation at epoch 68 is "EPE_0: 3.19 EPE_1: 4.22"; is this normal? All I have done is download the official Sintel dataset and run the command "python3 train.py -c sintel_ft_ar.json", and I also use "correlation_native".