Open pinkfloyd06 opened 6 years ago
Yes, you can absolutely do that by modifying the code. Specifically, you should modify dataset.py so that it inputs one frame at a time.
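For reference, here is a rough sketch (not the actual dataset.py from tsn-pytorch) of what a dataset that yields one frame at a time might look like. `SingleFrameDataset`, `frame_paths`, and the `transform` argument are illustrative names, not identifiers from the repo:

```python
from PIL import Image
from torch.utils.data import Dataset

class SingleFrameDataset(Dataset):
    """Yield one RGB frame at a time instead of a sampled snippet of frames.

    `frame_paths` is assumed to be a flat list of paths to extracted JPEG
    frames; `transform` is whatever preprocessing the network expects.
    """
    def __init__(self, frame_paths, transform=None):
        self.frame_paths = frame_paths
        self.transform = transform

    def __len__(self):
        return len(self.frame_paths)

    def __getitem__(self, idx):
        img = Image.open(self.frame_paths[idx]).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        # Return the path alongside the tensor so per-frame outputs can be
        # matched back to their source frame.
        return img, self.frame_paths[idx]
```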
Thank you for your answer @shuangshuangguo, but I can't find dataset.py in https://github.com/shuangshuangguo/caffe2pytorch-tsn
Thank you
This repo just converts the Caffe model to a PyTorch model, so it does not contain the TSN source code. You can find dataset.py in https://github.com/yjxiong/tsn-pytorch.
@shuangshuangguo Thank you for your answer. Where can I find the code to extract features?
Hello @shuangshuangguo
Let me first thank you for the conversion.
Can I extract feature representations (from the last fully connected layer) for each frame (rather than for the whole video) for both the RGB stream and the optical flow stream?
Thanks a lot
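One common way to get a per-frame representation is to register a forward hook on the layer you want and run frames through the network one at a time. The sketch below uses a torchvision ResNet-50 purely as a stand-in backbone (the converted TSN models use a different architecture, so the layer names will differ); `features`, `save_output`, and the dummy `frame` tensor are illustrative, not part of either repo:

```python
import torch
import torchvision.models as models

# Stand-in backbone. The mechanism is the same for any PyTorch model:
# register a forward hook on the layer you care about, push one frame
# through the network, and keep the activation the hook captures.
model = models.resnet50()
model.eval()

features = {}

def save_output(module, inputs, output):
    # Store the hooked layer's activation for the current frame.
    features['fc'] = output.detach()

# Hook the final fully connected layer; hook the layer just before it
# (model.avgpool here) if you want the pooled feature rather than the logits.
hook = model.fc.register_forward_hook(save_output)

with torch.no_grad():
    # One preprocessed RGB frame, shape [1, 3, 224, 224]. For the optical
    # flow stream you would stack the x/y flow fields along the channel
    # dimension and use a model whose first conv accepts that many channels.
    frame = torch.randn(1, 3, 224, 224)
    model(frame)
    per_frame_feature = features['fc']   # [1, 1000] for this stand-in model

hook.remove()
print(per_frame_feature.shape)
```

Running every frame of a video through this loop (e.g. via a dataset like the single-frame sketch above) gives one feature vector per frame instead of one vector per video.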