Closed: SuperN1ck closed this 1 month ago
Hi Nick, thanks for the question and the kind words. The training data was essentially clips from the datasets mentioned in the paper, and we ran CoTracker on these clips by specifying a grid of 400 points in the initial frame. This gives us per-timestep locations (i.e., tracks) of all the points in subsequent frames. For this, I simply modified this script https://github.com/facebookresearch/co-tracker/blob/main/demo.py to add some mild parallelization across videos. Hope this is helpful.
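For anyone looking to reproduce this, here is a minimal sketch of the idea: build a 20 × 20 grid (400 points) of queries on the initial frame and fan the per-clip work out across a process pool. Note this is not the authors' actual modified script; `track_clip` is a hypothetical stand-in for the real per-clip CoTracker call from `demo.py`, and the clip paths, frame size, and worker count are all made up for illustration.

```python
# Sketch: parallelize a grid-query tracking job across video clips.
# Assumptions (not from the original thread): the frame size, the clip
# paths, and the track_clip placeholder, which in the real script would
# load the clip and run CoTracker on the grid of query points.
from multiprocessing import Pool


def make_grid(width, height, grid_size=20):
    """Evenly spaced grid_size x grid_size (x, y) points on the first frame.

    grid_size=20 gives the 400 initial-frame points mentioned above.
    """
    xs = [width * (i + 0.5) / grid_size for i in range(grid_size)]
    ys = [height * (j + 0.5) / grid_size for j in range(grid_size)]
    return [(x, y) for y in ys for x in xs]


def track_clip(clip_path, width=640, height=480):
    """Placeholder per-clip worker.

    The real version would load the video at clip_path and run CoTracker
    on the grid queries, yielding per-timestep tracks of shape (T, 400, 2).
    Here we just return the clip path and the number of query points.
    """
    queries = make_grid(width, height)
    return clip_path, len(queries)


if __name__ == "__main__":
    clips = ["clip_000.mp4", "clip_001.mp4", "clip_002.mp4"]  # hypothetical paths
    with Pool(processes=2) as pool:
        results = pool.map(track_clip, clips)
    # Each clip yields 400 tracked points (one per grid query).
```

The "mild parallelization" here is just a `multiprocessing.Pool` over clip paths, which works because clips are independent of one another; the actual modification to `demo.py` may differ.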
Hey @homangab,
thanks a lot for the info and the reference!
Cheers, -Nick
Hey!
I really like this work! I was wondering whether, and if so when, you plan to release the processed training data or the script to generate it, so that
Track2Act
can be retrained. Looking forward to your answer! Cheers, -Nick