OpenGVLab / video-mamba-suite

The suite of modeling video with Mamba
MIT License

How to extract Features and Annotations from new dataset? #16

Open WAYSC opened 3 months ago

WAYSC commented 3 months ago

This is a promising project for video understanding. Your team did an excellent job! But I am wondering: for a new video dataset (other than THUMOS-14, ActivityNet, HACS Segment, and FineAction), how do I extract features and annotations, e.g., using InternVideo2 or VideoMAE models pretrained on Kinetics?

What I mentioned above refers to this link: https://github.com/OpenGVLab/video-mamba-suite/blob/main/video-mamba-suite/temporal-action-localization/README.md

Could you provide a script for this (even though you provide the feature download links)? Thank you for your contribution; looking forward to your reply :)

cg1177 commented 2 months ago

In the VideoMAEv2 repo, we have released a simple implementation of feature extraction. You can find it here.
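For context, clip-level TAD feature extraction generally slides a fixed-length window over the frame sequence and encodes each clip into one feature vector. Below is a minimal sketch of that pattern; `encode_clip` is a stand-in for a real backbone forward pass (e.g., VideoMAEv2), and the clip length/stride values are illustrative, not the repo's defaults:

```python
import numpy as np

def clip_starts(num_frames: int, clip_len: int, stride: int) -> list:
    """Start indices of fixed-length clips covering the whole video."""
    if num_frames <= clip_len:
        return [0]
    starts = list(range(0, num_frames - clip_len + 1, stride))
    # ensure the tail of the video is covered by one last clip
    if starts[-1] != num_frames - clip_len:
        starts.append(num_frames - clip_len)
    return starts

def encode_clip(clip: np.ndarray) -> np.ndarray:
    """Stand-in for a video backbone: mean-pool frames into one vector."""
    return clip.reshape(clip.shape[0], -1).mean(axis=0)

def extract_features(frames: np.ndarray, clip_len: int = 16, stride: int = 8) -> np.ndarray:
    """frames: (T, H, W, C) array -> (num_clips, feat_dim) feature matrix."""
    feats = [encode_clip(frames[s:s + clip_len])
             for s in clip_starts(len(frames), clip_len, stride)]
    return np.stack(feats)

video = np.random.rand(100, 8, 8, 3).astype(np.float32)  # dummy 100-frame video
features = extract_features(video)
print(features.shape)  # one row per clip: (12, 192) for this dummy input
```

The resulting `(num_clips, feat_dim)` matrix is the per-video feature file that TAD pipelines typically consume.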

CrazyGeG commented 2 months ago

Can "extract_tad_feature.py" in VideoMAEv2 be used to extract features for the action recognition task?

cg1177 commented 2 months ago

Yes

htchentyut commented 1 month ago

Where is the feature extraction code for InternVideo2-6B?

lynshwoo2022 commented 3 weeks ago

Which code can be used to extract features for the action-segmentation task?

cg1177 commented 3 weeks ago

> Which code can be used to extract features for the action-segmentation task?

Hello, you can follow https://github.com/yabufarha/ms-tcn to extract TAS features.
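For context, MS-TCN consumes frame-wise (not clip-wise) features: one `.npy` file per video with the feature dimension first, i.e., shape `(feat_dim, num_frames)`. The sketch below shows that layout with random placeholder features; the 2048-d size matches the I3D features released with MS-TCN, but verify the exact format against that repo's data loader (`batch_gen.py`):

```python
import numpy as np

# Hypothetical frame-wise features for one video: (num_frames, feat_dim).
num_frames, feat_dim = 500, 2048  # 2048-d is the I3D size used by MS-TCN's released data
frame_feats = np.random.rand(num_frames, feat_dim).astype(np.float32)

# MS-TCN expects one .npy per video with the feature dimension first.
np.save("video_0001.npy", frame_feats.T)

loaded = np.load("video_0001.npy")
print(loaded.shape)  # (2048, 500): (feat_dim, num_frames)
```

The file name `video_0001.npy` is illustrative; in practice each feature file is named after its video and paired with a frame-level label file.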