Open dnvnxg opened 2 years ago
Hi, Xylobyte, you cannot use LFB for the demo unless you have completed its training and testing successfully, which means you need to generate the necessary files (i.e. the feature bank). If you find that cumbersome, please use other configs for the demo.
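A quick sanity check before running the demo is to confirm the feature-bank pickles actually exist and load. This is a minimal sketch; the paths come from the log later in this thread, so adjust `lfb_prefix` to your own `lfb_prefix_path`:

```python
# Hedged sketch: verify the LFB pickles were generated by the test/infer
# step before attempting the demo. Paths below are the ones from this
# thread's log and are only an example.
import os.path as osp
import pickle

lfb_prefix = 'data/ava/lfb_half'
banks = {}
for split in ('train', 'val'):
    path = osp.join(lfb_prefix, f'lfb_{split}.pkl')
    if not osp.exists(path):
        # The bank has not been generated yet: run the LFB inference/test
        # step first so these files get written.
        print(f'missing: {path}')
        continue
    with open(path, 'rb') as f:
        banks[split] = pickle.load(f)
    print(f'loaded {path}: {len(banks[split])} entries')
```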
Hey @kennymckormick, I have completed training and inference of the feature bank, and I still run into this issue when using my trained checkpoint.
Hello, I have the same problem as you. Have you solved it?
Checklist
Describe the bug
I can't run inference with LFB using the demo_spatiotemporal_det.py demo script.
Reproduction
Environment
PYTHONPATH=${PWD}:$PYTHONPATH python mmaction/utils/collect_env.py

TorchVision: 0.12.0
OpenCV: 4.5.5
MMCV: 1.5.0
MMCV Compiler: GCC 9.4
MMCV CUDA Compiler: 11.5
MMAction2: 0.24.0+c7b45f0
load checkpoint from http path: http://download.openmmlab.com/mmdetection/v2.0/faster_rcnn/faster_rcnn_r50_fpn_2x_coco/faster_rcnn_r50_fpn_2x_coco_bbox_mAP-0.384_20200504_210434-a5d8aa15.pth
Performing Human Detection for each frame
[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 185/185, 14.1 task/s, elapsed: 13s, ETA: 0s
Loading LFB from data/ava/lfb_half/lfb_train.pkl...
Loading LFB from data/ava/lfb_half/lfb_val.pkl...
LFB has been loaded on cuda:0.
load checkpoint from local path: work_dirs/lfb/lfb_nl_kinetics_pretrained_slowonly_r50_4x16x1_20e_ava_rgb/best_mAP@0.5IOU_epoch_20.pth
Performing SpatioTemporal Action Detection for each clip
[                                                  ] 0/185, elapsed: 0s, ETA:
Traceback (most recent call last):
  File "/ascldap/users/dxgrieg/mmaction2/demo/demo_spatiotemporal_det.py", line 421, in <module>
    main()
  File "/ascldap/users/dxgrieg/mmaction2/demo/demo_spatiotemporal_det.py", line 372, in main
    result = model(
  File "/ascldap/users/dxgrieg/anaconda3/envs/open-mmlab/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/ascldap/users/dxgrieg/anaconda3/envs/open-mmlab/lib/python3.10/site-packages/mmcv/runner/fp16_utils.py", line 110, in new_func
    return old_func(*args, **kwargs)
  File "/ascldap/users/dxgrieg/mmaction2/mmdetection/mmdet/models/detectors/base.py", line 174, in forward
    return self.forward_test(img, img_metas, **kwargs)
  File "/ascldap/users/dxgrieg/mmaction2/mmdetection/mmdet/models/detectors/fast_rcnn.py", line 51, in forward_test
    return self.simple_test(imgs[0], img_metas[0], proposals[0],
  File "/ascldap/users/dxgrieg/mmaction2/mmdetection/mmdet/models/detectors/two_stage.py", line 183, in simple_test
    return self.roi_head.simple_test(
  File "/ascldap/users/dxgrieg/mmaction2/mmaction/models/heads/roi_head.py", line 79, in simple_test
    det_bboxes, det_labels = self.simple_test_bboxes(
  File "/ascldap/users/dxgrieg/mmaction2/mmaction/models/heads/roi_head.py", line 96, in simple_test_bboxes
    bbox_results = self._bbox_forward(x, rois, img_metas)
  File "/ascldap/users/dxgrieg/mmaction2/mmaction/models/heads/roi_head.py", line 33, in _bbox_forward
    bbox_feat = self.shared_head(
  File "/ascldap/users/dxgrieg/anaconda3/envs/open-mmlab/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/ascldap/users/dxgrieg/mmaction2/mmaction/models/heads/fbo_head.py", line 392, in forward
    lt_feat = self.sample_lfb(rois, img_metas).to(st_feat.device)
  File "/ascldap/users/dxgrieg/mmaction2/mmaction/models/heads/fbo_head.py", line 379, in sample_lfb
    lt_feat_list.append(self.lfb[img_metas[ind]['img_key']].to())
KeyError: 'img_key'
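The failing line in `sample_lfb` looks the long-term features up in the bank by `img_metas[ind]['img_key']`, but the demo pipeline never sets that key in `img_metas`. This minimal sketch reproduces the failure with plain dicts (an assumption about the LFB's behavior based on the traceback; the key format `'<video_id>,<timestamp>'` and the workaround are illustrative, not verified against mmaction2):

```python
# Stand-in for the loaded feature bank: a dict keyed by 'img_key' strings,
# which is how FBOHead.sample_lfb indexes it in the traceback above.
lfb = {'demo_video,0902': 'long-term features for this clip'}

# What the demo supplies: img_metas entries without an 'img_key' field.
img_metas = [{'img_shape': (256, 340)}]

try:
    feat = lfb[img_metas[0]['img_key']]
except KeyError as err:
    caught = str(err)  # "'img_key'" -- the same KeyError as in the traceback
    print(f'KeyError: {caught}')

# A hedged workaround (untested against mmaction2 itself): populate
# 'img_key' with a key known to exist in the bank before calling the model.
img_metas[0]['img_key'] = 'demo_video,0902'
feat = lfb[img_metas[0]['img_key']]
print(feat)
```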