Open luohao123 opened 1 month ago
Hi @luohao123, thank you for the question! The model currently only tracks points within a video, starting from any frame. If the picture you're referring to is not part of that video, it won't work with the current implementation. It should be possible to implement this without retraining the model, though.
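For context, "starting from any frame" is expressed through CoTracker's query format: each query point is a `(frame_index, x, y)` triplet, so a point simply begins being tracked at the frame you specify. A minimal sketch (the random `video` tensor and the coordinates are placeholders, and the `torch.hub` call that loads the real model is left commented out because it downloads a checkpoint):

```python
import torch

# Placeholder clip: batch 1, 24 frames, 3 channels, 256x256 (random stand-in).
video = torch.randn(1, 24, 3, 256, 256)

# Queries have shape [B, N, 3], one (frame_index, x, y) triplet per point.
# Tracking "from any frame" just means setting the first coordinate:
queries = torch.tensor([
    [0.0, 120.0, 80.0],   # this point is tracked starting at frame 0
    [10.0, 64.0, 200.0],  # this point is tracked starting at frame 10
]).unsqueeze(0)           # -> shape [1, 2, 3]

# Running the actual model (downloads a checkpoint, so commented out here):
# cotracker = torch.hub.load("facebookresearch/co-tracker", "cotracker2")
# pred_tracks, pred_visibility = cotracker(video, queries=queries)

print(tuple(queries.shape))
```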
Answering the second question: I think it depends on how exactly you formulate the problem. CoTracker could be useful for this task, but it might not be the best solution.
@nikitakaraevv Hello, and thank you for your detailed answer. Regarding the second question, I am currently using feature-matching methods such as SuperPoint and SuperGlue. The problem is essentially finding the time shift between two videos of the same scene captured from different views. Since CoTracker is strong at tracking and at extracting feature descriptors, I am wondering whether CoTracker could handle this problem better.
Handling videos of the same scene from different views is also useful for visual positioning. That is to say, for a given frame of video 1, find the most similar frame in video 2. Do you have any insights on how CoTracker would deal with this problem?
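One simple baseline for the time-shift formulation (independent of CoTracker): reduce each frame to a single descriptor (e.g. pooled SuperPoint features or any per-frame embedding), build the cosine-similarity matrix between the two videos, and pick the diagonal offset with the highest mean similarity. The helper name `estimate_time_shift` and the toy data below are made up for illustration:

```python
import numpy as np

def estimate_time_shift(desc_a, desc_b, min_overlap=10):
    """Estimate the frame offset between two descriptor sequences.

    desc_a: [Ta, D], desc_b: [Tb, D] -- one global descriptor per frame.
    Returns the shift s maximizing the mean cosine similarity between
    desc_a[t] and desc_b[t + s], considering only offsets where the two
    sequences overlap by at least `min_overlap` frames.
    """
    a = desc_a / np.linalg.norm(desc_a, axis=1, keepdims=True)
    b = desc_b / np.linalg.norm(desc_b, axis=1, keepdims=True)
    sim = a @ b.T  # [Ta, Tb] cosine similarities between all frame pairs
    ta, tb = sim.shape
    best_shift, best_score = 0, -np.inf
    for s in range(-(ta - 1), tb):
        diag = np.diagonal(sim, offset=s)  # pairs (t, t + s)
        if diag.size < min_overlap:
            continue
        if diag.mean() > best_score:
            best_shift, best_score = s, diag.mean()
    return best_shift

# Toy check: video B's content starts 5 frames later into the same
# "recording" than video A's, plus a little noise, so desc_a[t]
# matches desc_b[t - 5] and the estimated shift should be -5.
rng = np.random.default_rng(0)
base = rng.normal(size=(40, 64))
desc_a = base[:30]
desc_b = base[5:35] + 0.01 * rng.normal(size=(30, 64))
print(estimate_time_shift(desc_a, desc_b))  # -> -5
```

In this framing CoTracker is not strictly needed for alignment itself; its per-point features could serve as the frame descriptors, but any view-robust embedding would slot into the same pipeline.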
Say I have a picture of my dog and a video of my dog. Is it possible to track the dog in the video using that picture?
Or, can it find the sync offset between two videos taken in the same place but from different angles? (If they are time-shifted, could CoTracker be used to find the offset between the two?)