PaddlePaddle / PaddleDetection

Object Detection toolkit based on PaddlePaddle. It supports object detection, instance segmentation, multiple object tracking and real-time multi-person keypoint detection.

Question about fight recognition with the video-classification ppTSM model #8554

Open GuangfuWang opened 1 year ago

GuangfuWang commented 1 year ago

Search before asking

Bug Component

Deploy

Describe the Bug

I followed the video-classification-based fight recognition tutorial in /deploy/pipeline/docs/tutorials/pphuman_action.md, but the saved output video does not contain correct fight analysis, and the saved video itself is also broken. Here are the command I ran and its output:

$ python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_fight_recognition.yml --video_file=fight01.mp4 --device=gpu
Warning: Unable to use numba in PP-Tracking, please install numba, for example(python3.7):pip install numba==0.56.4
Warning: Unable to use numba in PP-Tracking, please install numba, for example(python3.7):pip install numba==0.56.4
Warning: Unable to use numba in PP-Tracking, please install numba, for example(python3.7):pip install numba==0.56.4
Warning: Unable to use numba in PP-Tracking, please install numba, for example(python3.7):pip install numba==0.56.4
----------- Running Arguments -----------
VIDEO_ACTION:
  batch_size: 1
  enable: true
  frame_len: 8
  model_dir: /home/wgf/Downloads/ppTSM_fight/ppTSM
  sample_freq: 7
  short_size: 268
  target_size: 320
  visual: true
  warmup_frame: 50

VideoAction Recognition enabled
VIDEO_ACTION model dir: /home/wgf/Downloads/ppTSM_fight/ppTSM
E0819 10:26:11.665967 87315 analysis_predictor.cc:1716] Allocate too much memory for the GPU memory pool, assigned 8000 MB
E0819 10:26:11.665987 87315 analysis_predictor.cc:1719] Try to shink the value by setting AnalysisConfig::EnableUseGpu(...)
--- Running analysis [ir_graph_build_pass]
I0819 10:26:11.686123 87315 executor.cc:187] Old Executor is Running.
--- Running analysis [ir_analysis_pass]
--- Running IR pass [map_op_to_another_pass]
--- Running IR pass [identity_scale_op_clean_pass]
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [delete_quant_dequant_linear_op_pass]
--- Running IR pass [delete_weight_dequant_linear_op_pass]
--- Running IR pass [constant_folding_pass]
--- Running IR pass [silu_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
I0819 10:26:11.778937 87315 fuse_pass_base.cc:59] --- detected 55 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [vit_attention_fuse_pass]
--- Running IR pass [fused_multi_transformer_encoder_pass]
--- Running IR pass [fused_multi_transformer_decoder_pass]
--- Running IR pass [fused_multi_transformer_encoder_fuse_qkv_pass]
--- Running IR pass [fused_multi_transformer_decoder_fuse_qkv_pass]
--- Running IR pass [multi_devices_fused_multi_transformer_encoder_pass]
--- Running IR pass [multi_devices_fused_multi_transformer_encoder_fuse_qkv_pass]
--- Running IR pass [multi_devices_fused_multi_transformer_decoder_fuse_qkv_pass]
--- Running IR pass [fuse_multi_transformer_layer_pass]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_reshape2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
I0819 10:26:12.021970 87315 fuse_pass_base.cc:59] --- detected 1 subgraphs
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
--- Running IR pass [matmul_scale_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v3]
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
I0819 10:26:12.032023 87315 fuse_pass_base.cc:59] --- detected 1 subgraphs
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
I0819 10:26:12.067709 87315 fuse_pass_base.cc:59] --- detected 55 subgraphs
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [conv2d_fusion_layout_transfer_pass]
--- Running IR pass [transfer_layout_elim_pass]
--- Running IR pass [auto_mixed_precision_pass]
--- Running IR pass [inplace_op_var_pass]
--- Running analysis [save_optimized_model_pass]
W0819 10:26:12.069797 87315 save_optimized_model_pass.cc:28] save_optim_cache_model is turned off, skip save_optimized_model_pass
--- Running analysis [ir_params_sync_among_devices_pass]
I0819 10:26:12.069805 87315 ir_params_sync_among_devices_pass.cc:51] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [memory_optimize_pass]
I0819 10:26:12.105060 87315 memory_optimize_pass.cc:222] Cluster name : pool2d_1.tmp_0 size: 1638400
I0819 10:26:12.105074 87315 memory_optimize_pass.cc:222] Cluster name : elementwise_add_2 size: 6553600
I0819 10:26:12.105077 87315 memory_optimize_pass.cc:222] Cluster name : batch_norm_12.tmp_2 size: 6553600
I0819 10:26:12.105077 87315 memory_optimize_pass.cc:222] Cluster name : leaky_relu_8.tmp_0 size: 6553600
I0819 10:26:12.105078 87315 memory_optimize_pass.cc:222] Cluster name : data_batch_0 size: 9830400
--- Running analysis [ir_graph_to_program_pass]
I0819 10:26:12.156430 87315 analysis_predictor.cc:1660] ======= optimize end =======
I0819 10:26:12.157723 87315 naive_executor.cc:164] --- skip [feed], feed -> data_batch_0
I0819 10:26:12.159583 87315 naive_executor.cc:164] --- skip [linear_2.tmp_1], fetch -> fetch
video fps: 30, frame_count: 1228
Thread: 0; frame id: 0
Thread: 0; frame id: 10
save result to output/fight01.mp4
------------------ Inference Time Info ----------------------
total_time(ms): 0.0, img_num: 0
average latency time(ms): 0.00, QPS: 0.000000

My test videos include clips downloaded from YouTube as well as the surveillance dataset referenced in the tutorial. None of them worked; every run gets stuck at the "Inference Time Info" output shown above.

Is this expected behavior? The longest I have waited is over an hour.

Environment

Bug description confirmation

Are you willing to submit a PR?

Bradly-s commented 1 year ago

1. Hi, you are running inference with a model you trained yourself, and the saved result video does not show the expected fight predictions, right?
2. I ran into the same problem.
3. Did you run eval on your trained model? Does it reach the expected accuracy?

zhiboniu commented 6 months ago

Hi, you could add some logging to the code to see where the problem is.
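
For reference, here is a standalone toy sketch (not PaddleDetection code; the names capturevideo, framequeue and the delays are made up) that mimics the frame-queue structure in deploy/pipeline/pipeline.py and logs the queue depth and capture-thread state on every iteration. Running it shows how a while (not framequeue.empty()) loop can exit after only a handful of frames when the consumer is faster than the capture thread, which matches the symptom reported above:

import queue
import threading
import time

framequeue = queue.Queue(10)

def capturevideo(q, total_frames=100, capture_delay=0.03):
    # Simulated capture thread: pushes one "frame" every capture_delay seconds.
    for i in range(total_frames):
        time.sleep(capture_delay)
        q.put(i)

thread = threading.Thread(target=capturevideo, args=(framequeue,))
thread.start()
time.sleep(1)

frame_id = 0
while not framequeue.empty():
    frame = framequeue.get()
    # Log queue depth and producer state; once the queue is momentarily
    # empty the loop exits even though the capture thread is still alive.
    print('frame {}: qsize={}, capture thread alive={}'.format(
        frame_id, framequeue.qsize(), thread.is_alive()))
    frame_id += 1
    time.sleep(0.01)  # simulated per-frame work, faster than the capture rate

print('loop exited after {} of 100 frames, capture thread alive={}'.format(
    frame_id, thread.is_alive()))
thread.join()

Adding similar prints of framequeue.qsize() and thread.is_alive() around the real loop in pipeline.py should show whether the pipeline stops because the queue is momentarily empty.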

lsewcx commented 5 months ago

I also ran into a problem when I converted the model and ran it with onnxruntime. Please ping me if you figure it out.

NguyenChanHung1 commented 3 days ago

Try going to PaddleDetection/deploy/pipeline/pipeline.py and adding time.sleep(0.1) at the end of the while (not framequeue.empty()) loop; this works for me (see the snippet below). I think the capture thread fills the queue more slowly than the body of the while loop processes frames, so the queue momentarily empties and the loop exits early. Sleeping an extra 0.1 s per iteration avoids that, though there is probably a better solution.

framequeue = queue.Queue(10)  # line 730

# Capture thread fills framequeue while the loop below consumes it.
thread = threading.Thread(
    target=self.capturevideo, args=(capture, framequeue))
thread.start()
time.sleep(1)

while (not framequeue.empty()):
    if frame_id % 10 == 0:
        print('Thread: {}; frame id: {}'.format(thread_idx, frame_id))
    ...

    if self.file_name is None:  # use camera_id
        cv2.imshow('Paddle-Pipeline', im)
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break  # line 1090
    time.sleep(0.1)  # added: give the capture thread time to refill the queue
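
A possibly more robust alternative to tuning a fixed sleep, sketched against the names quoted above (framequeue, thread, frame_id, thread_idx; frame_rgb is an illustrative placeholder, and this has not been verified against the actual pipeline.py): keep looping while the capture thread is still alive or frames remain, and block briefly on the queue instead of polling empty().

# Sketch only: replaces `while (not framequeue.empty()):` with a condition
# that also checks whether the capture thread is still producing frames.
# The queue module is already imported in pipeline.py (it creates queue.Queue(10)).
while thread.is_alive() or not framequeue.empty():
    try:
        # Wait up to 1 s for the next frame instead of exiting as soon as
        # the queue happens to be empty for an instant.
        frame_rgb = framequeue.get(timeout=1)
    except queue.Empty:
        continue
    if frame_id % 10 == 0:
        print('Thread: {}; frame id: {}'.format(thread_idx, frame_id))
    # ... existing per-frame processing ...
    frame_id += 1

This way the loop ends only when capture has genuinely finished, without depending on a hand-tuned sleep value.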