open-mmlab / mmaction2

OpenMMLab's Next Generation Video Understanding Toolbox and Benchmark
https://mmaction2.readthedocs.io
Apache License 2.0

Roadmap of MMAction2 #19

Open hellock opened 4 years ago

hellock commented 4 years ago

We keep this issue open to collect feature requests from users and hear your voice. Our monthly release plan is also available here.

You can either:

  1. Suggest a new feature by leaving a comment.
  2. Vote for a feature request with 👍 or vote against it with 👎. (Remember that developers are busy and cannot respond to every feature request, so vote for the ones you care about most!)
  3. Tell us that you would like to help implement one of the features on the list or review the PRs. (This is the greatest thing to hear!)

d-li14 commented 4 years ago

I suppose it would be interesting to add CSN and X3D by FAIR into the supported model family. I also have an interest in helping implement/review them if time permits.

hellock commented 4 years ago

> I suppose it would be interesting to add CSN and X3D by FAIR into the supported model family. I also have an interest in helping implement/review them if time permits.

CSN is in the plan of next release. It would be great if you would like to help with the implementation of X3D.

Amazingren commented 4 years ago

I strongly recommend adding support for the FineGym99 dataset with the video dataset_type; it would make it more convenient for users to validate ideas for fine-grained action recognition or localization tasks. Hoping this comes true in the not-too-distant future!

irvingzhang0512 commented 4 years ago

It would be nice if MMAction2 could support the AVA dataset and spatio-temporal action detection models.

q5390498 commented 4 years ago

It would be nice if MMAction2 could provide some pretrained backbone models for users, such as ResNet3dSlowFast and so on.

hellock commented 4 years ago

> It would be nice if MMAction2 could support the AVA dataset and spatio-temporal action detection models.

Yes, it is in the plan.

hellock commented 4 years ago

> It would be nice if MMAction2 could provide some pretrained backbone models for users, such as ResNet3dSlowFast and so on.

There are already lots of pretrained models in the model zoo.

IDayday commented 4 years ago

It would be better if the model could output in a video format such as MP4. I have tried demo.py, but it only outputs text.

dreamerlin commented 4 years ago

demo.py now supports outputting in video and GIF formats.

innerlee commented 4 years ago

@dreamerlin could you please sort out all feature requests in one grand post here, so that we can easily track their status? 🏃

tianyuan168326 commented 4 years ago

Introducing the Multigrid or mixed-precision training strategies would be helpful for faster prototype iteration.
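
For what it's worth, a minimal sketch of how mixed-precision training is usually enabled in OpenMMLab codebases, assuming MMAction2 wires MMCV's Fp16OptimizerHook to an fp16 field in the config (the base config name below is only illustrative; check the current docs for the exact key):

```python
# my_fp16_config.py -- hypothetical config inheriting an existing recognizer recipe.
_base_ = ['./tsn_r50_1x1x3_100e_kinetics400_rgb.py']  # illustrative base config name

# Enable mixed-precision training; loss_scale guards fp16 gradients against underflow.
fp16 = dict(loss_scale=512.)
```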

JJBOY commented 4 years ago

In the action localization task, you provided the code to get the AUC metric for action proposal evaluation. Could you also provide the classification results to get the mAP?

IDayday commented 4 years ago

Can it be used to recognize videos in real time from a webcam or something similar? Thanks.

makecent commented 3 years ago

There are many trained models in the Model Zoo, but all of them are just used to test the performance of the proposed works. Do you plan to make them available for backbone pre-training? Say I want to use the I3D pre-trained on Kinetics-400 as the pre-trained backbone of my own model. It seems that we don't have much choice of pre-trained backbones beyond a ResNet-50 on ImageNet.

dreamerlin commented 3 years ago

> There are many trained models in the Model Zoo, but all of them are just used to test the performance of the proposed works. Do you plan to make them available for backbone pre-training? Say I want to use the I3D pre-trained on Kinetics-400 as the pre-trained backbone of my own model. It seems that we don't have much choice of pre-trained backbones beyond a ResNet-50 on ImageNet.

To use a pre-trained model for the whole network, add the link to the pre-trained checkpoint under load_from in the new config. See Tutorial 1: Finetuning Models # Use Pre-Trained Model and the example. To use pre-training for the backbone only, change the pretrained value in the backbone dict; the unexpected keys will be ignored.
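
To make the two options above concrete, a minimal config sketch (the base config name and checkpoint URLs below are placeholders, not actual Model Zoo entries):

```python
# Hypothetical fine-tuning config built on an existing I3D recipe.
_base_ = ['./i3d_r50_32x2x1_100e_kinetics400_rgb.py']  # illustrative base config

# Option 1: initialize the whole network (backbone + head) from a Model Zoo checkpoint.
load_from = 'https://download.openmmlab.com/mmaction/PLACEHOLDER_checkpoint.pth'  # placeholder URL

# Option 2: initialize only the backbone; keys that do not match the backbone
# (e.g. the classification head) are ignored, as noted above.
model = dict(
    backbone=dict(
        pretrained='https://download.openmmlab.com/mmaction/PLACEHOLDER_checkpoint.pth'))  # placeholder URL
```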

makecent commented 3 years ago

> There are many trained models in the Model Zoo, but all of them are just used to test the performance of the proposed works. Do you plan to make them available for backbone pre-training? Say I want to use the I3D pre-trained on Kinetics-400 as the pre-trained backbone of my own model. It seems that we don't have much choice of pre-trained backbones beyond a ResNet-50 on ImageNet.

> To use a pre-trained model for the whole network, add the link to the pre-trained checkpoint under load_from in the new config. See Tutorial 1: Finetuning Models # Use Pre-Trained Model and the example. To use pre-training for the backbone only, change the pretrained value in the backbone dict; the unexpected keys will be ignored.

Wow! Fantastic! I think you should mention this feature somewhere, in case others like me don't know that they can directly use the pre-trained weights of the whole model for the backbone.

vikizhao156 commented 3 years ago

Could you please support X3D?

dreamerlin commented 3 years ago

> Could you please support X3D?

Here are the X3D config files: https://github.com/open-mmlab/mmaction2/tree/master/configs/recognition/x3d

ahkarami commented 3 years ago

Could you please add Video Action/Activity Temporal Segmentation models?

ahkarami commented 3 years ago

Also, could you please add video models for the MovieNet dataset?

mikeyEcology commented 3 years ago

Hi, I'm struggling to train a model using a dataset structured like the AVA dataset. Does anyone have a config file that they have used for this type of dataset and would be willing to share? There is code to create an AVA dataset, but I haven't been able to find any config files. Otherwise, is there a different framework I can train where I have bounding boxes in the training data? Thank you.
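
For readers with the same question, a rough sketch of what an AVA-style dataset section could look like, assuming the AVADataset class and annotation layout used by the detection configs in the repo (every path, file name, and the pipeline below are placeholders for your own data; check the detection configs for the authoritative versions):

```python
# Rough, illustrative AVA-style dataset section; all paths and file names are placeholders.
dataset_type = 'AVADataset'
data_root = 'data/my_ava_like_dataset/rawframes'    # extracted frames
anno_root = 'data/my_ava_like_dataset/annotations'

train_pipeline = [...]  # frame sampling, decoding, resizing, formatting steps go here

data = dict(
    videos_per_gpu=8,
    workers_per_gpu=2,
    train=dict(
        type=dataset_type,
        ann_file=f'{anno_root}/train.csv',                        # AVA-format box annotations
        exclude_file=f'{anno_root}/train_excluded_timestamps.csv',
        label_file=f'{anno_root}/action_list.pbtxt',
        proposal_file=f'{anno_root}/dense_proposals_train.pkl',   # pre-computed person proposals
        data_prefix=data_root,
        pipeline=train_pipeline))
```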

wwdok commented 3 years ago

Recently I learned about action localization/detection/segmentation (they seem to be the same thing). It seems that it can generate a caption-like file, which I find very interesting and practical. I would really appreciate it if MMAction2 could have an action localization demo and more docs about it, thanks!

irvingzhang0512 commented 3 years ago

Very happy to have a spatio-temporal action detection model today... Two related features would be very helpful:

  1. A spatio-temporal action detection online/video demo.
  2. Training spatio-temporal action detection models with custom categories (e.g. keep sit/stand/lie and ignore all other categories).

F9393 commented 3 years ago

Do you have a plan to add flow models for TSN and I3D?

jin-s13 commented 3 years ago

How about adding some models for temporal action segmentation?

jayleicn commented 3 years ago

Thanks for the great repo! Do you have plans to add S3D and S3D-G from https://arxiv.org/abs/1712.04851? They achieve better performance than the I3D model while running much faster. Here is a reproduced implementation of the S3D model: https://github.com/kylemin/S3D. And for the S3D-G model: https://github.com/antoine77340/S3D_HowTo100M/blob/master/s3dg.py, https://github.com/tensorflow/models/blob/master/research/slim/nets/s3dg.py

sijun-zhou commented 3 years ago

Thanks in advance for this great, steadily progressing repo.

Recently, I saw that in the AVA-Kinetics challenge, the new method 'Actor-Context-Actor Relation Network for Spatio-Temporal Action Localization' performed very well and led the second place by nearly 6 percent in the 2020 competition. I think it is a good candidate to enrich the spatio-temporal action localization area of MMAction2.

Will you consider including this network? I have also opened a request in #641.

tianxianhao commented 3 years ago

Could you please add the algorithm proposed in the AVA dataset paper [1]? It is helpful for comparison experiments in spatio-temporal action localization on the AVA dataset. The model consists of Faster R-CNN and I3D.

Reference: [1] Gu C, Sun C, Ross D A, et al. AVA: A video dataset of spatio-temporally localized atomic visual actions[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018: 6047-6056.

f-guitart commented 3 years ago

Is there any plan or current work for multi-modal action classification?

innerlee commented 3 years ago

@f-guitart We have audio models: https://github.com/open-mmlab/mmaction2/tree/master/configs/recognition_audio

irvingzhang0512 commented 3 years ago

Maybe MMAction2 could support some of the models and datasets from PyTorchVideo.

SubarnaTripathi commented 3 years ago

Do you plan to support the Action Genome dataset and model?

rlleshi commented 3 years ago

Add output predictions as JSON in long_video_demo.py (currently, only video is supported). #862

I have implemented this but need to polish it so that it's clean and consistent with the rest of the codebase. I will open a PR in the future.

Deep-learning999 commented 3 years ago

Hoping for support for the Kinetics-TPS, FineAction, and MultiSports datasets, along with pre-trained models, training, and a web video inference demo.

Deep-learning999 commented 3 years ago

I hope to use PoseC3D for skeleton-based spatio-temporal action detection.

connor-john commented 2 years ago

Add demo scripts for temporal action detection models.

This was mentioned in #746; any progress?

baigwadood commented 2 years ago

Hoping for a webcam demo for PoseC3D in the near future.

abdulazizab2 commented 2 years ago

Do you plan to add a new model for spatio-temporal action detection?

The ACRN (Actor-Centric Relation Network) is great. However, ACAR builds on that previous work and achieves better results.

MooresS commented 1 year ago

I would appreciate it if you could add ViViT, because I feel there are few transformer-based methods for action recognition in MMAction2.

zsz00 commented 1 year ago

Hoping for two-stream dataset support, for example as in SlowFast: RGB for the slow pathway and optical flow for the fast pathway, or using RGB and optical flow in TSM.

Wrc0217 commented 1 year ago

How can I run the BSN model with my own dataset? Are there specific steps, and what do I need to do? Thanks.