Open jucic opened 2 years ago
It's exactly the same story for me. I have a scene in which only one object rotates slowly, and I often get these messy frames.
One solution would be to train the network from scratch on our own data. Does anyone have instructions for that?
Yeah, same situation here. The model output is a mess on some frames. My use case involves a small rotating object in an otherwise static scene.
Here is a comparison between a good output and a messy one:
Dear all, this is caused by a normalization issue; this flow_norm fix should solve your problem: https://github.com/tomrunia/OpticalFlow_Visualization/pull/7
Oh my god, thanks a lot! I can't believe such a simple problem confused me for so long.
This is where the normalization takes place. I think you can render with a fixed saturation by editing here.
https://github.com/princeton-vl/RAFT/blob/aac9dd54726caf2cf81d8661b07663e220c5586d/core/utils/flow_viz.py#L125-L131
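To make the failure mode concrete: the linked code divides the flow by the per-frame maximum magnitude before color coding, so when the true motion is near zero, sensor noise gets amplified to full saturation. A minimal sketch of the difference (the function name and `max_flow` parameter are illustrative, not the actual `flow_viz.py` API):

```python
import numpy as np

def normalize_flow(flow, max_flow=None, eps=1e-5):
    """Scale a flow field for color coding.

    max_flow=None mimics per-frame normalization: a near-zero flow
    field is divided by its own tiny maximum, blowing noise up to
    full saturation (the 'messy' frames). Passing a fixed max_flow
    keeps near-zero flow rendered as near-white instead.
    """
    u, v = flow[..., 0], flow[..., 1]
    rad = np.sqrt(u ** 2 + v ** 2)       # per-pixel magnitude
    scale = rad.max() if max_flow is None else max_flow
    return flow / (scale + eps)

# Almost-static frame pair: flow is pure noise of magnitude ~1e-4
noisy = np.random.randn(4, 4, 2) * 1e-4

per_frame = normalize_flow(noisy)               # noise amplified toward 1.0
fixed = normalize_flow(noisy, max_flow=10.0)    # stays near zero (near-white)

print(np.abs(per_frame).max())  # large: saturated, messy colors
print(np.abs(fixed).max())      # tiny: the frame renders as almost no motion
```

This is why the fixed-saturation edit (or the flow_norm PR above) makes static scenes render cleanly: the color intensity then reflects absolute motion, not motion relative to the frame's own noise floor.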
Look at the picture above: when two adjacent frames are almost identical (no motion, or very small motion), the flow result is a mess. I guess this is because there are no static scenes in RAFT's training datasets (FlyingChairs, FlyingThings, Sintel, and so on), so I made a dataset of static scenes (optical flow always equal to 0) and fine-tuned RAFT after the Sintel stage of train_standard.sh. However, it doesn't improve sufficiently (it still produces a mess in this condition sometimes). Does anyone know why, and how to solve this? Thanks in advance!
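For anyone trying the same fine-tuning idea: a static-scene sample is just a pair of (nearly) identical frames with all-zero ground-truth flow. A minimal sketch of generating one such sample (function and parameter names are hypothetical, not RAFT's dataset API; a small photometric jitter keeps the two frames from being bit-identical, which is closer to real camera noise):

```python
import numpy as np

def make_static_pair(image, jitter=0.0):
    """Build one zero-flow training sample from a single image.

    Returns (frame1, frame2, flow_gt): both frames show the same
    static scene, and the ground-truth flow is all zeros. With
    jitter > 0, frame2 gets additive Gaussian noise so the network
    cannot rely on exact pixel equality between the two frames.
    """
    frame1 = image.astype(np.float32)
    noise = np.random.randn(*image.shape).astype(np.float32) * jitter
    frame2 = np.clip(frame1 + noise, 0.0, 255.0)
    flow_gt = np.zeros(image.shape[:2] + (2,), dtype=np.float32)
    return frame1, frame2, flow_gt

# Example: one synthetic 64x64 static pair with mild sensor noise
img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
f1, f2, gt = make_static_pair(img, jitter=2.0)
print(gt.shape)        # (64, 64, 2), all zeros
```

Mixing such pairs into the Sintel-stage fine-tuning data is one way to expose the network to the zero-motion case it never saw during standard training; whether that alone fixes the messy frames is exactly the open question in this thread.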