I'd also like to know! Is there an "easy" way to take the tracked points and re-train/fine-tune the model?
Hi @jdyjjj, @horsto, I think the easiest way to train on a custom dataset right now is to adapt this class to your use case: https://github.com/facebookresearch/co-tracker/blob/4f297a92fe1a684b1b0980da138b706d62e45472/cotracker/datasets/kubric_movif_dataset.py#L386
I'm not sure if this is considered an easy way, though :)
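Roughly, the class just has to return a video tensor together with per-point trajectories and visibility flags. Here is a minimal sketch of such a dataset; the file layout (`frames.npy`, `tracks.npy`, `visibility.npy`) and the `CustomPointDataset` name are assumptions for illustration, not something that exists in the repo:

```python
import os
import numpy as np
import torch
from torch.utils.data import Dataset


class CustomPointDataset(Dataset):
    """Sketch of a dataset that mimics the KubricMovifDataset outputs:
    a video tensor plus per-point trajectories and visibility flags.
    Assumes each sequence folder contains frames.npy (T, H, W, 3),
    tracks.npy (T, N, 2) and visibility.npy (T, N)."""

    def __init__(self, root, traj_per_sample=180):
        self.root = root
        self.traj_per_sample = traj_per_sample
        self.seq_dirs = sorted(
            d for d in os.listdir(root) if os.path.isdir(os.path.join(root, d))
        )

    def __len__(self):
        return len(self.seq_dirs)

    def __getitem__(self, idx):
        seq_dir = os.path.join(self.root, self.seq_dirs[idx])
        video = np.load(os.path.join(seq_dir, "frames.npy"))           # (T, H, W, 3)
        tracks = np.load(os.path.join(seq_dir, "tracks.npy"))          # (T, N, 2), xy pixels
        visibility = np.load(os.path.join(seq_dir, "visibility.npy"))  # (T, N), bool

        video = torch.from_numpy(video).permute(0, 3, 1, 2).float()    # (T, 3, H, W)
        tracks = torch.from_numpy(tracks).float()
        visibility = torch.from_numpy(visibility).bool()

        # Subsample a fixed number of tracks so batches have a constant shape.
        n_points = tracks.shape[1]
        point_inds = torch.randperm(n_points)[: self.traj_per_sample]
        return video, tracks[:, point_inds], visibility[:, point_inds]
```

The actual KubricMovifDataset additionally applies crop and photometric augmentations and then resamples the points, which is where the point count can change (see the next comments).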
Hi @nikitakaraevv, I adapted the KubricMovifDataset to a local dataset. In my case, I have 180 points to track per video. However, when sampling data with augmentations, I sometimes end up with fewer points. In these cases, the getitem_helper function returns gotit=False. Is that the intended behaviour?
Thanks for your answers!
Hi @Anderstask1, some points don't satisfy the sampling criteria: being visible in the first frame or the middle frame of the sequence. That's why we sometimes end up with fewer points, especially if we don't have additional points to sample from. I think you can modify the sampling criteria to make it less strict: https://github.com/facebookresearch/co-tracker/blob/8d364031971f6b3efec945dd15c468a183e58212/cotracker/datasets/kubric_movif_dataset.py#L468
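In pseudocode, the criterion behaves roughly like the sketch below (a paraphrase for illustration, not the repo's exact code); a relaxed version could, for example, accept points that are visible in any frame rather than only in the first or middle one:

```python
import torch


def sample_visible_points(visibility, traj_per_sample, strict=True):
    """Sketch of the sampling criterion (not the repo's exact code).

    visibility: (T, N) bool tensor, True where a point is visible.
    strict=True keeps only points visible in the first or middle frame,
    which is roughly what the linked code enforces; strict=False is a
    relaxed criterion that accepts any point visible at least once.
    """
    T = visibility.shape[0]
    if strict:
        # Visible in the first frame OR the middle frame of the sequence.
        keep = visibility[0] | visibility[T // 2]
    else:
        # Relaxed: visible in at least one frame.
        keep = visibility.any(dim=0)

    inds = keep.nonzero(as_tuple=False).squeeze(1)
    inds = inds[torch.randperm(len(inds))][:traj_per_sample]

    # Mirror the getitem_helper behaviour: signal gotit=False when there
    # are not enough valid points left after filtering.
    gotit = len(inds) == traj_per_sample
    return inds, gotit
```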
Hi again @nikitakaraevv, I modified the sampling criteria to make it less strict. However, during training, it seems like several points are sampled mid-sequence in the provided prediction videos (even though the ground-truth video doesn't sample points mid-sequence). Do you know the reason for this?
Hi @Anderstask1 @jdyjjj @horsto, did you successfully train on your own datasets? I'd appreciate it if you could share some advice.
I never tried, but I'd be very curious to know too!
Great work, I have benefited a lot from it. I would like to train on videos I have collected. What should I do if I want to train with my own video dataset? Thank you.