bhack closed this issue 1 year ago
Hi @bhack, we didn't try to increase the model resolution. Since the model itself is a transformer, you would also need to resize positional embeddings if you want to change the model resolution.
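For reference, resizing the positional embeddings of a ViT-style transformer is usually done by interpolating the learned grid. Here is a minimal sketch under that assumption; `resize_pos_embed` and the shapes are illustrative, not CoTracker's actual code:

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(pos_embed, old_hw, new_hw):
    """Bicubically interpolate a (1, H*W, C) grid of learned positional
    embeddings to a new spatial size (hypothetical helper, ViT-style)."""
    _, n, c = pos_embed.shape
    h, w = old_hw
    assert n == h * w, "pos_embed length must match the old grid"
    # (1, H*W, C) -> (1, C, H, W) so F.interpolate can treat it as an image
    grid = pos_embed.reshape(1, h, w, c).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=new_hw, mode="bicubic", align_corners=False)
    # back to (1, H'*W', C)
    return grid.permute(0, 2, 3, 1).reshape(1, new_hw[0] * new_hw[1], c)
```

This is the same trick DeiT/DINO-style models use when fine-tuning at a higher resolution.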
I've tried to increase the inference resize a bit (2x) and it seems quite stable. What do you think about increasing it at inference time? The main memory issue is that it loads the full video into memory, so it isn't very resolution-friendly.
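To illustrate the memory point: the predictor resizes the whole clip up front, roughly like the sketch below (my own approximation, not the repo's code). At 768x1024, each float32 RGB frame alone is about 3*768*1024*4 ≈ 9.4 MB, so a 1000-frame clip is ~9 GB before any model activations:

```python
import torch
import torch.nn.functional as F

def resize_video(video, interp_shape):
    """Resize a (B, T, C, H, W) video tensor to interp_shape = (H', W'),
    roughly mirroring the up-front resize done before inference."""
    b, t, c, h, w = video.shape
    flat = video.reshape(b * t, c, h, w)
    flat = F.interpolate(flat, size=interp_shape, mode="bilinear",
                         align_corners=False)
    return flat.reshape(b, t, c, *interp_shape)

video = torch.randn(1, 8, 3, 240, 320)  # toy 8-frame clip
resized = resize_video(video, (768, 1024))
# memory for the resized frames alone, in MB
mb = resized.numel() * resized.element_size() / 1024 ** 2
```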
I've just tried to run `python demo.py --grid-size 10` with a `CoTrackerPredictor.interp_shape` of 384x512 and 768x1024. As you can see below, 768x1024 doesn't work that well:
https://github.com/facebookresearch/co-tracker/assets/37815420/c7aabab0-cbb8-4c26-ac6f-34877ce1054f
https://github.com/facebookresearch/co-tracker/assets/37815420/c6a32ecb-17c5-4f70-bad3-d387158b91b6
What's your use-case? Why do you need a higher resolution?
I am wondering whether the track coordinates could be accurate enough at your paper's original training resolution (and hence when interpolated to high res) versus at a higher resolution, when they are then digested by a solver over the frame sequence (e.g. Ceres/libmv, etc.).
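Just to make the precision concern concrete, here is a small sketch of mapping track coordinates from the inference resolution back to the original frame size (`rescale_tracks` is a hypothetical helper; the predictor does an equivalent rescaling internally). A fixed sub-pixel error at 384x512 doubles in pixels once mapped to a 768x1024 frame:

```python
import torch

def rescale_tracks(tracks, interp_shape, orig_shape):
    """Map (..., 2) track coords in (x, y) order, predicted at
    interp_shape = (H, W), back to the original frame size."""
    scale = torch.tensor([
        (orig_shape[1] - 1) / (interp_shape[1] - 1),  # x scale
        (orig_shape[0] - 1) / (interp_shape[0] - 1),  # y scale
    ])
    return tracks * scale

# a point at the bottom-right corner of a 384x512 frame
tracks = torch.tensor([[511.0, 383.0]])
rescaled = rescale_tracks(tracks, (384, 512), (768, 1024))
```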
More generally, I am interested in any graph analyzing the point-precision metric as a function of the input resolution, as I currently can't find such a section in the CoTracker paper.
Is the resolution of 384x512 arbitrary or not? Is it just derived from the neighborhood of existing datasets (e.g. TAP-Vid's 256x256)?
If I remember correctly, DINOv2 also had a short final training phase at a slightly higher resolution of 518×518.
How much do you think performance is capped by the selected resolution?
I know about the memory constraint, but it could at least be interesting to explore the correlation of performance with input size, using other attention "workarounds" for the growing memory.
This is an example with your source video where I've slightly "increased the resolution" with a crop, so we avoid side effects with the positional embeddings.
https://github.com/facebookresearch/co-tracker/assets/1710528/7ce12d4d-e132-4a24-b95a-7b872cb3953e
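The cropping trick is simple: feed a crop at the model's native resolution, which raises the effective pixels-per-object without touching the positional embeddings. A minimal sketch (my own helper, not part of the repo):

```python
import torch

def center_crop(video, crop_hw):
    """Center-crop a (B, T, C, H, W) video to crop_hw = (H', W').
    The crop can then be fed to the model at its native resolution,
    avoiding any positional-embedding resizing."""
    *_, h, w = video.shape
    ch, cw = crop_hw
    assert ch <= h and cw <= w, "crop must fit inside the frame"
    top, left = (h - ch) // 2, (w - cw) // 2
    return video[..., top:top + ch, left:left + cw]

video = torch.zeros(1, 4, 3, 720, 1280)        # toy 720p clip
crop = center_crop(video, (384, 512))          # native CoTracker size
```

Track coordinates come back in crop space, so they'd need the crop offset added back to land in the original frame.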
We use the resolution 384x512 for multiple reasons, the main one being that 512x512 is the upper limit for Kubric, our training dataset. Another reason is that we wanted to have the same resolution as PIPs, our main baseline, for a fair comparison.
The model should perform better with a higher resolution, and fine-tuning can also be helpful. Unfortunately, we haven't explored this yet.
Yes, I understand that sometimes the "standard" is defined by previous work/datasets.
But it could be interesting to verify how much the resolution impacts precision, especially with a pixel-perfect synthetic dataset, where this isn't compounded by annotator errors.
Have you tested for any accuracy improvement from increasing the resolution at https://github.com/facebookresearch/co-tracker/blob/e84ca71ba5cbe69ee569d6d8967c93c815fbc565/cotracker/predictor.py#L23 ?