SysCV / sam-pt

SAM-PT: Extending SAM to zero-shot video segmentation with point-based tracking.
https://arxiv.org/abs/2307.01197
Apache License 2.0

About training spec #3

Closed sfchen94 closed 1 year ago

sfchen94 commented 1 year ago

Hi,

Was this model built without any training process? Is the only training required the independent training of HQ-SAM? If not, could you please share the GPU/memory usage details for training this model?

Thank you.

m43 commented 1 year ago

Yes, we use the pre-trained checkpoints provided by the respective authors for all point trackers and SAM variants. PIPS is trained exclusively on FlyingThings++, a synthetic dataset derived from the FlyingThings optical flow dataset. SAM was trained on the large-scale SA-1B dataset, the largest image segmentation dataset to date. HQ-SAM was trained on HQSeg-44K, and we likewise use its pre-trained checkpoint.

willshion commented 1 year ago

Do you have a WeChat group for discussion and learning?