chenusc11 closed this issue 3 years ago
Sorry for my late reply...
This is caused by the difference between my training procedure and the original one. The training reported in the original paper uses the FlyingChairs and FlyingThings3D datasets for pre-training (see Table 1 in https://arxiv.org/abs/2003.12039). I have trained the RAFT model only on MPI-Sintel or FlyingChairs, because FlyingThings3D is too large for me to train on in my personal GCP environment 💸 ... This lack of pre-training may cause the checkerboard effect you observed.
I have described other notes at the bottom of the README. Thanks! 😄
Hi, thank you very much for your reply! That is also one of my guesses. According to the paper's source code, the model is trained in the order Chairs -> Things3D -> Sintel. I'll double-check that and see if it fixes the issue.
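For reference, here is a minimal sketch of what that staged schedule could look like; the `train_stage` helper, step counts, and learning rates below are illustrative assumptions, not code or hyperparameters from this repo or the official implementation:

```python
# Illustrative sketch of a staged RAFT-style training schedule
# (Chairs -> Things3D -> Sintel fine-tuning). All names and
# hyperparameters are placeholders, not values from the paper.

STAGES = [
    # (dataset name, number of steps, learning rate) -- placeholder values
    ("FlyingChairs",   100_000, 4e-4),
    ("FlyingThings3D", 100_000, 1e-4),
    ("MPI-Sintel",     100_000, 1e-4),
]

def train_stage(model, dataset_name, steps, lr):
    """Hypothetical per-stage loop: loads `dataset_name` and
    fine-tunes `model` in place for `steps` iterations at `lr`."""
    raise NotImplementedError

def train_raft(model):
    # The key point: each stage starts from the weights of the previous
    # stage, so skipping FlyingThings3D changes the final checkpoint.
    for dataset_name, steps, lr in STAGES:
        train_stage(model, dataset_name, steps, lr)
    return model
```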
Have a good day :)
Double-check sounds awesome! I'm looking forward to seeing the results 😃 .
Hi, @daigo0927 thank you for your decent implementation of the TF version. It was fun playing with it.
However, I noticed that your pre-trained Sintel checkpoint tends to produce a checkerboard effect (TF version (top) vs. the paper's PyTorch version (bottom)).
Do you know where it comes from? Thanks!