Closed: youngprogrammerBee closed this issue 10 months ago
For the first question, yes! And if it does not run, you may need to change the checkpoint path.
For a custom model to be used in `run_on_video`, you should train the model to use only the CLIP features, which can be configured in the shell script. You can do this by simply commenting out the SlowFast feature path.
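For concreteness, a hedged sketch of what that change might look like in `train.sh`. The variable names below are illustrative assumptions, not the actual ones in the repository; check the real script before editing:

```shell
# Hypothetical excerpt from train.sh; variable and directory names
# here are assumptions for illustration only.
feat_root=features

v_feat_dirs=()
# Comment out the SlowFast feature path so only CLIP features are used:
# v_feat_dirs+=("${feat_root}/slowfast_features")
v_feat_dirs+=("${feat_root}/clip_features")
```

The key point is that the training script should end up passing only the CLIP feature directory to the model.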
Thanks for the prompt response!
Do you mean that I should just comment out that line in `train.sh`, as in the picture below? Then the model I get could replace the model you provide in `run_on_video`?
May I ask whether the reason for this is that the author uses `ClipFeatureExtractor` in the code, so the model has to be trained with only CLIP features in order to be used in `run_on_video`?
Yes, you are right. That is because CLIP provides an easy implementation and checkpoints that can be employed anywhere.
Thanks!!!
Hi, I have one more question. Does the model have a requirement on the number of input video frames? I found that the longest supported video clips are 150 s. Is there also a limit on the frame rate (must it be 30 fps, 60 fps, or some other value), or is any rate fine as long as the clip is at most 150 s? Will a different frame rate affect the experimental results?
Training was done at 0.5 fps, so it may not perform well at high fps, though I haven't tried it myself.
I suspect there might also be an error for very long inputs, since we didn't set the positional parameters that long (there may be a limit on the number of frames).
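To make the numbers above concrete, here is a small sketch of how clip length and sampling rate determine the frame count the model actually sees. The 0.5 fps rate and the 150 s clip length come from this discussion; the function name is just illustrative:

```python
def num_model_frames(clip_seconds: float, fps: float = 0.5) -> int:
    """Number of frames the model sees for a clip sampled at `fps`."""
    return int(clip_seconds * fps)

# A 150 s clip at the 0.5 fps training rate yields only 75 frames.
print(num_model_frames(150))      # 75

# The same clip decoded at 30 fps would yield 4500 frames, which may
# exceed the positional-embedding length mentioned above.
print(num_model_frames(150, 30))  # 4500
```

So the decisive quantity is the sampled frame count, not the source video's native frame rate.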
Hi, and I appreciate your work! I see that you provide your model `qd_detr_ckpt` in `run_on_video`, and from the project page I gather that I can use the same command as in `moment_detr` to run `run.py`. Is that true? My second question is: how do I train my own model (using your `qd_detr` training method) so that it can be used as `my_ckpt` to replace the model in `run_on_video`?