Closed ernestchu closed 1 year ago
Hi @ernestchu, thank you for the question!
This is indeed a problem that I was planning to fix. It can be addressed with a simple override of the predicted coordinates for the queried frame. During training, the model predicts coordinates and visibilities for all frames, including the queried one; this happens after we sample features at the correct query coordinates, which we use for initialization. Inference is currently performed in exactly the same way as training, which is not appropriate in this case. However, it doesn't affect performance on any frame other than the queried one.
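A minimal sketch of the override described above, assuming the `(B, T, N, ...)` tensor layout and `(t, x, y)` query format used in the CoTracker README; the helper name `override_query_frame` is hypothetical, not part of the library:

```python
import torch

def override_query_frame(pred_tracks, pred_visibility, queries):
    """Overwrite the prediction at each point's query frame with the
    known query coordinates and mark the point visible there.

    pred_tracks:     (B, T, N, 2) predicted (x, y) per frame
    pred_visibility: (B, T, N)    predicted visibility per frame
    queries:         (B, N, 3)    (t, x, y) query for each point
    """
    B, T, N, _ = pred_tracks.shape
    frame_idx = queries[..., 0].long()                      # (B, N)
    batch_idx = torch.arange(B)[:, None].expand(B, N)       # (B, N)
    point_idx = torch.arange(N)[None, :].expand(B, N)       # (B, N)
    # Advanced indexing selects one (frame, point) entry per query
    pred_tracks[batch_idx, frame_idx, point_idx] = queries[..., 1:]
    pred_visibility[batch_idx, frame_idx, point_idx] = 1.0
    return pred_tracks, pred_visibility
```

Applied once after the final refinement iteration, this leaves all other frames untouched, which matches the observation that only the queried frame is affected.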
So you're saying that the predictions on all frames actually correspond to the correct query locations, and it's only the prediction error on the first frame that makes it seem as though the model queried the wrong points.
Yes, this is correct 🙂
Cool. Looking forward to it!
Hi, thanks for your wonderful work. However, there seems to be a fundamental issue in your method.
The problem is that you do not overwrite the estimated tracks $\hat{P}_0$ at the `query_frame` with the start location $P$, nor do you set $\hat{v}_0$ to 1 throughout the refinement. This may result in incorrect predictions at the `query_frame` itself. The expected behavior is that if we query specific points on the `query_frame`, these points must be visible, by definition, and should lie precisely at the coordinates we provide. I doubt that directly overwriting at each refinement stage is the right way to fix it, but the problem needs to be addressed in order to make this work useful for downstream tasks (especially extremely low-level vision ones).

Sorry for jumping in with such a detailed question, but I am dealing with downstream applications that require a very strict (pixel-level) problem definition of point tracking. Here's the full video result:
https://github.com/facebookresearch/co-tracker/assets/51432514/7eb78a37-ef99-4177-b4a0-342e0c0d1051
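For anyone hitting the same issue, a hedged sanity check for the strict pixel-level definition above: after running the tracker, measure how far the predictions at each query frame drift from the given query coordinates (zero is the expected behavior). The helper name `query_frame_error` and the tensor layout, `(B, T, N, 2)` tracks and `(t, x, y)` queries as in the CoTracker README, are assumptions:

```python
import torch

def query_frame_error(pred_tracks, queries):
    """Max absolute pixel error at the query frames.

    pred_tracks: (B, T, N, 2) predicted (x, y) per frame
    queries:     (B, N, 3)    (t, x, y) query for each point
    """
    B, _, N, _ = pred_tracks.shape
    t = queries[..., 0].long()                      # (B, N) query frames
    b = torch.arange(B)[:, None].expand(B, N)
    n = torch.arange(N)[None, :].expand(B, N)
    # Gather each point's prediction at its own query frame and compare
    return (pred_tracks[b, t, n] - queries[..., 1:]).abs().max()
```

A nonzero result reproduces the drift visible in the video; after overriding the query-frame predictions it should be exactly zero.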