-
## Summary
Covers models for action, activity, and interaction recognition in video.
## Datasets
- [Kinetics400](https://paperswithcode.com/dataset/kinetics-400-1)
The dataset contains 400 human action classes,…
-
Hello, thanks for your code. I've downloaded your code and ran into some trouble while running it. I would appreciate it if you could tell me what a `Full model2.pi` is. Or `model_temporal.pt`? Thank you very …
-
### What is the problem this feature will solve?
demo.py does not draw bounding boxes around the actions in the video, which is not ideal; I cannot tell whether it has recognized the actions correctly.
### What is the feature?
Adding this box-drawing feature would improve the visual demonstration and make it easier to adjust the data annotations.
### What alternatives have you considered?…
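To illustrate the requested feature, here is a hedged sketch of a labeled box overlay. A real demo would likely use OpenCV (`cv2.rectangle` / `cv2.putText`); this pure-NumPy version just paints a 2-pixel outline, and the `(x1, y1, x2, y2)` box format is an assumption, not the actual demo.py output format.

```python
import numpy as np

# Hedged sketch, not the actual demo.py implementation: paint a
# rectangle outline on a video frame to mark a detected action.
# The box coordinate convention (x1, y1, x2, y2) is assumed.
def draw_action_box(frame, x1, y1, x2, y2, color=(0, 255, 0)):
    """Paint a 2-pixel rectangle outline on an HxWx3 uint8 frame, in place."""
    frame[y1:y1 + 2, x1:x2] = color   # top edge
    frame[y2 - 2:y2, x1:x2] = color   # bottom edge
    frame[y1:y2, x1:x1 + 2] = color   # left edge
    frame[y1:y2, x2 - 2:x2] = color   # right edge
    return frame
```

In practice one would call this per frame with the model's detection output before writing the annotated video.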
-
I followed LFB from Spatio-Temporal Action Detection (Supported Methods) for pytorch2onnx, but I ran into one issue.
Has anyone solved the issue ?
-
### The doc issue
Hi,
can someone show me how to fine-tune a model for Spatio-Temporal Action Detection on a custom AVA dataset with (in my case) 6 classes?
I modified the config file by cha…
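For reference, in MMAction2-style configs the class count for an AVA-style detector is usually changed in the bbox head. A hedged sketch follows; the field names and the "+1 background class" convention should be verified against the specific MMAction2 version in use.

```python
# Hedged sketch of the usual edit, not the actual config file: reduce the
# detector's class count to match a 6-class custom AVA-style dataset.
# Whether a background class is added (+1) depends on the codebase version.
num_custom_classes = 6
model = dict(
    roi_head=dict(
        bbox_head=dict(
            num_classes=num_custom_classes + 1,  # +1 background, if expected
        )
    )
)
```

The label-map and annotation files typically need matching updates as well, so the class indices in the data agree with `num_classes`.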
-
Hi Wang, as mentioned, the THUMOS videos [412 videos (200 val videos + 212 test videos; I found one test video missing)] can be downloaded from the THUMOS14 webpage https://www.crcv.ucf.edu/THUMOS14/downlo…
-
In fact, you can find an issue very similar to this one; check #110.
lianuo said to just feed ST-GCN frames of shape (3, t, 18, 1); the t he used is 20. I think it's a good solution, but if you directly use the pre-…
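To make the suggested input shape concrete: ST-GCN consumes skeleton tensors of shape (C, T, V, M), and the comment above corresponds to C=3, T=t, V=18, M=1. A minimal sketch with the t=20 mentioned above; reading the 18 joints as the OpenPose keypoint layout is an assumption.

```python
import numpy as np

# ST-GCN skeleton input, shape (N, C, T, V, M):
#   N = batch size
#   C = 3 channels per joint (e.g. x, y, confidence)
#   T = number of frames (t = 20 in the comment above)
#   V = 18 joints (OpenPose 18-keypoint layout, assumed)
#   M = 1 person per sample
clip = np.zeros((1, 3, 20, 18, 1), dtype=np.float32)
```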
-
Hi,
I am currently experimenting with an action recognition task. I am utilizing an STGCN-based network for action recognition in videos.
I wanted to extend the experiment by recognising multiple action…
-
Call for Contributors: https://openmmlab.medium.com/be-an-openmmlab-contributor-c9087428062a
Here we maintain a list of welcoming features proposed by the community. Our developer may consider inte…
-
Hi,
I am trying to better understand the exact problem formulation for the Moments Queries task.
The paper (page 26 of the arXiv version) states:
> Given an egocentric video V, and a query ac…