Chuckie-He opened this issue 4 years ago
Hi, maybe an alternative would be to follow https://github.com/facebookresearch/video-long-term-feature-banks/blob/master/GETTING_STARTED.md#training-an-lfb-model and run "Train an LFB model". This first extracts the training LFB, and once that's done you can kill the job if you don't need the training itself.
OK, thank you very much, I think that will help. Another question: how do you assign labels to the predicted boxes during training? Could you provide the preprocessing code? Thanks.
Hi @Chuckie-He, I'm sorry, but unfortunately I don't have preprocessing code that's available for sharing. The label-assignment method is described in our paper. Please feel free to let me know if you have any questions. (The performance difference between training on GT + predicted boxes and training on GT boxes only is quite small, though, maybe ~0.3 mAP IIRC.)
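For intuition, here is a rough sketch of IoU-based label assignment for predicted boxes. This is not the authors' code; the (x1, y1, x2, y2) box format and the 0.9 threshold are illustrative assumptions, so check the paper for the exact rule.

```python
# Rough sketch of IoU-based label assignment (illustrative only, not the repo's code).
import numpy as np

def iou(box, boxes):
    """IoU between one box and an (N, 4) array of boxes, all in (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def assign_labels(pred_boxes, gt_boxes, gt_labels, thresh=0.9):
    """Give each predicted box the labels of its best-overlapping GT box,
    keeping only predictions whose IoU with some GT box reaches `thresh`."""
    kept_boxes, kept_labels = [], []
    for box in pred_boxes:
        overlaps = iou(np.asarray(box, dtype=float), np.asarray(gt_boxes, dtype=float))
        best = overlaps.argmax()
        if overlaps[best] >= thresh:
            kept_boxes.append(box)
            kept_labels.append(gt_labels[best])
    return kept_boxes, kept_labels
```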
Hi, I followed the approach in lfb_loader.py, which is essentially feature extraction. When I use the val split (TEST.DATA_TYPE=val) it works fine, but when I try to extract features for the train split (TEST.DATA_TYPE=train) it reports an error. Could you tell me how I can solve this problem? Would it work to change AVA.TRAIN_BOX_LISTS = [b'ava_train_v2.1.csv', b'ava_train_predicted_boxes.csv'] to [b'ava_train_predicted_boxes.csv'] in config.py? Please help, thanks!
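For reference, the change being asked about would look roughly like the following. The AttrDict-style scaffolding here is only a stand-in so the snippet runs on its own; in practice you would edit the AVA.TRAIN_BOX_LISTS default inside the repo's config.py (whose exact structure is not reproduced here).

```python
# Sketch of the proposed config change (scaffolding is a stand-in, not the repo's config.py).
from types import SimpleNamespace

cfg = SimpleNamespace(AVA=SimpleNamespace())

# Default quoted in the question: GT boxes plus predicted boxes.
cfg.AVA.TRAIN_BOX_LISTS = [b'ava_train_v2.1.csv', b'ava_train_predicted_boxes.csv']

# Proposed change: predicted boxes only.
cfg.AVA.TRAIN_BOX_LISTS = [b'ava_train_predicted_boxes.csv']
```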
The error is:

[INFO: test_net.py: 118]: Done ResetWorkspace...
[WARNING: test_net.py: 122]: Testing started...
[WARNING: cnn.py: 25]: [====DEPRECATE WARNING====]: you are creating an object from CNNModelHelper class which will be deprecated soon. Please use ModelHelper object with brew module. For more information, please refer to caffe2.ai and python/brew.py, python/brew_test.py for more information.
[INFO: ava.py: 98]: Finished loading annotations from
[INFO: ava.py: 101]: Number of unique boxes: 680315
[INFO: ava.py: 102]: Number of annotations: 1623819
Traceback (most recent call last):
  File "/home/hechujing/video-long-term-feature-banks/tools/test_net.py", line 226, in <module>
    main()
  File "/home/hechujing/video-long-term-feature-banks/tools/test_net.py", line 222, in main
    test_net()
  File "/home/hechujing/video-long-term-feature-banks/tools/test_net.py", line 97, in test_net
    test_one_crop(lfb=lfb, suffix='_final_test')
  File "/home/hechujing/video-long-term-feature-banks/tools/test_net.py", line 133, in test_one_crop
    test_model.build_model(lfb=lfb, suffix=suffix, shift=shift)
  File "/home/hechujing/video-long-term-feature-banks/lib/models/model_builder_video.py", line 105, in build_model
    shift=shift,
  File "/home/hechujing/video-long-term-feature-banks/lib/datasets/dataloader.py", line 411, in get_input_db
    shift=shift, lfb=lfb, suffix=suffix)
  File "/home/hechujing/video-long-term-feature-banks/lib/datasets/ava.py", line 165, in __init__
    self._get_data()
  File "/home/hechujing/video-long-term-feature-banks/lib/datasets/ava.py", line 273, in _get_data
    (len(self._boxes_and_labels), len(self._image_paths))
AssertionError: (236, 235)
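The failing assertion compares the number of videos that have boxes against the number of videos in the frame list, so (236, 235) suggests one video id appears in the training box lists but not in the train frame list. A hypothetical way to pin it down is below; the file names and column layouts are assumptions based on the standard AVA CSV and frame-list formats, not taken from the repo.

```python
# Hypothetical check for the (236, 235) mismatch: which video id has boxes
# but no frames (or vice versa)? Paths and column layouts are assumptions.
import csv

def video_ids_from_box_csv(path):
    """AVA-style box CSV: the video id is the first comma-separated column."""
    with open(path) as f:
        return {row[0] for row in csv.reader(f) if row}

def video_ids_from_frame_list(path):
    """Space-separated frame list with a header row; first column is the video id."""
    ids = set()
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and not parts[0].startswith('original'):  # skip the header row
                ids.add(parts[0])
    return ids

box_ids = (video_ids_from_box_csv('ava_train_v2.1.csv')
           | video_ids_from_box_csv('ava_train_predicted_boxes.csv'))
frame_ids = video_ids_from_frame_list('train.csv')  # the train frame list

print('With boxes but no frames:', sorted(box_ids - frame_ids))
print('With frames but no boxes:', sorted(frame_ids - box_ids))
```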