-
I noticed there are two similar issues, but the difference is that I have a GPU. I installed my Caffe2 referring to https://github.com/facebookresearch/R2Plus1D/blob/master/tutorials/Installation_guide.md , m…
-
How can I convert the pretrained Caffe2 R2Plus1D models listed [here](https://github.com/facebookresearch/R2Plus1D/blob/master/tutorials/models.md) into a PyTorch model for fine-tuning on a custom da…
-
Blocked by official weights not being released so far:
https://github.com/facebookresearch/VMZ/issues/87
-
Hi everyone,
The paper says that the dimension of the fc layer is 400 for Kinetics and 512 for pooling. Is that right? Does any model offer 4096-dimensional feature vector extraction? Can we perform …
-
Hi,
When I run the trained model on test data, I always get this error:
41%|███████▍ | 525/1276 [37:39
-
Thanks for sharing the pretrained models!
What frame rate was used by the Instagram models during pretraining?
Apologies if it's written somewhere; I couldn't find it in the paper.
-
I have been trying to fine-tune the 34-layer R(2+1)D model with clip length 32 on my own dataset. While pre-processing my data, I extracted clips of only 32 frames from each of my videos and I organized the…
-
I'm trying to train on Kinetics from scratch, following the "training Kinetics from scratch" tutorial, but
during training, when I enter `sh scripts/train_r2plus1d_test.sh`, I get:
error: argument --jitt…
-
```
ubuntu@ip-172-31-14-53:~/R2Plus1D$ python tools/extract_features.py --test_data=dupes_data --model_name=r2plus1d --model_depth=34 --clip_length_rgb=32 --gpus=0,1 --batch_size=4 --load_model_path=…
```
-
Average pooling in r2plus1d, and possibly other models, raises errors when the frame depth is not a multiple of 8.
A suggested fix would be to change
`final_temporal_kernel = int( clip_length / 8 / …`
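A minimal sketch of the idea behind the suggested fix, assuming the kernel size is computed from the clip length as in the truncated snippet above (the function name and the stride of 8 are illustrative):

```python
import math

def final_temporal_kernel_size(clip_length, temporal_stride=8):
    """Temporal kernel size for the final average pool.

    Plain int(clip_length / temporal_stride) truncates, so when
    clip_length is not a multiple of the stride the computed kernel
    can disagree with the actual temporal extent of the feature map
    and the pooling op fails. Rounding up avoids the mismatch.
    """
    return max(1, math.ceil(clip_length / temporal_stride))

# 32 frames -> kernel 4 (unchanged); 30 frames -> kernel 4 instead of
# the truncated int(30 / 8) == 3.
```

Whether rounding up or padding the input to a multiple of 8 is the right fix depends on how the preceding conv strides shape the feature map, so this should be checked against the model-building code.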