V-Sense / ACTION-Net

Official PyTorch implementation of ACTION-Net: Multipath Excitation for Action Recognition (CVPR'21)
MIT License

Question of the paper #2

Closed kinfeparty closed 3 years ago

kinfeparty commented 3 years ago

Hello! I want to ask a question about the paper.

[screenshot: results table from the paper]

Did you directly use the C3D: ResNeXt-101 result reported in their paper, or did you train the model yourself?

Thanks!

villawang commented 3 years ago


Hi there,

For EgoGesture, we directly use the result from reference [17]. For Jester, we take the pre-trained Kinetics models and retrain them on Jester; the pre-trained models are provided here.
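The retraining recipe described here (Kinetics pre-training, then fine-tuning on Jester) typically requires discarding the classifier head before loading the checkpoint, since Kinetics-400 and Jester have different class counts. A minimal sketch, assuming a PyTorch-style state_dict keyed by parameter names; the `fc.` prefix is a hypothetical head name, not taken from this repo:

```python
# Hypothetical helper for loading a Kinetics-pretrained checkpoint into a
# model with a different number of output classes (e.g. Jester's 27 gestures):
# keep backbone weights that match the target model, drop the classifier head.
def filter_pretrained_state(pretrained, model_keys, skip_prefixes=("fc.",)):
    """Return the subset of `pretrained` whose names exist in the target
    model and do not belong to the (class-count-dependent) head."""
    return {
        name: tensor
        for name, tensor in pretrained.items()
        if name in model_keys and not name.startswith(skip_prefixes)
    }

# Toy illustration with plain dicts standing in for state_dicts.
kinetics_ckpt = {"conv1.weight": "w1", "fc.weight": "w_fc", "fc.bias": "b_fc"}
jester_model_keys = {"conv1.weight", "fc.weight", "fc.bias"}
loadable = filter_pretrained_state(kinetics_ckpt, jester_model_keys)
# Only the backbone weight survives; the head is then trained from scratch.
```

In actual PyTorch code, the filtered dict would be passed to `model.load_state_dict(loadable, strict=False)` so the missing head weights keep their fresh initialization.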

kinfeparty commented 3 years ago


Thanks for your answer!