homangab / Track-2-Act

Code for the paper "Predicting Point Tracks from Internet Videos enables Diverse Zero-Shot Manipulation"

Release of Training Data #2

Closed SuperN1ck closed 1 month ago

SuperN1ck commented 2 months ago

Hey!

I really like this work! I was wondering if, and if so when, you plan to release the processed training data or the script to generate it, so that Track2Act can be retrained.

Looking forward to your answer! Cheers, -Nick

homangab commented 1 month ago

Hi Nick, thanks for the question and the kind words. The training data was basically clips from the datasets mentioned in the paper; we ran CoTracker on these clips, specifying a grid of 400 points in the initial frame. This gives us the per-timestep locations (aka tracks) of all those points in subsequent frames. For this I simply modified this script https://github.com/facebookresearch/co-tracker/blob/main/demo.py to add some mild parallelization across videos. Hope this is helpful.
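
For reference, here is a minimal sketch of what such a script could look like, assuming CoTracker's public `torch.hub` interface (the `cotracker2` entry point from the linked repo). The clip directory, the `.npz` output format, the worker count, and the 20×20 grid (400 points) are illustrative assumptions, not the authors' exact pipeline:

```python
import glob
import multiprocessing as mp
from concurrent.futures import ProcessPoolExecutor

import numpy as np
import torch
from cotracker.utils.visualizer import read_video_from_path  # ships with the co-tracker repo


def track_clip(video_path: str, grid_size: int = 20) -> None:
    """Track a grid_size x grid_size grid (20 x 20 = 400 points) placed on
    the initial frame, saving per-timestep point locations (tracks)."""
    device = "cuda" if torch.cuda.is_available() else "cpu"
    # Reloading the model per clip keeps the sketch simple; a real script
    # would load it once per worker.
    model = torch.hub.load("facebookresearch/co-tracker", "cotracker2").to(device)

    frames = read_video_from_path(video_path)                   # (T, H, W, C) uint8
    video = torch.from_numpy(frames).permute(0, 3, 1, 2)[None]  # (1, T, C, H, W)
    video = video.float().to(device)

    # pred_tracks: (1, T, N, 2) pixel coordinates; pred_visibility: (1, T, N)
    pred_tracks, pred_visibility = model(video, grid_size=grid_size)

    np.savez(
        video_path + ".tracks.npz",  # hypothetical output naming scheme
        tracks=pred_tracks[0].cpu().numpy(),
        visibility=pred_visibility[0].cpu().numpy(),
    )


if __name__ == "__main__":
    clips = sorted(glob.glob("clips/*.mp4"))  # hypothetical clip directory
    # Mild parallelization across videos; "spawn" is needed when workers use CUDA.
    ctx = mp.get_context("spawn")
    with ProcessPoolExecutor(max_workers=4, mp_context=ctx) as pool:
        list(pool.map(track_clip, clips))
```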

SuperN1ck commented 1 month ago

Hey @homangab,

thanks a lot for the info and the reference!

Cheers, -Nick