Open DazzlingGalaxy opened 1 day ago
@TingquanGao, an update: the folder in yesterday's error was paddle/infer_weights/STGCN, which I realized is actually the fall-detection model from the docs. I tried both fall detection and fight detection yesterday. I likely tried fall detection first by setting `enable: True` for `SKELETON_ACTION` in `infer_cfg_pphuman.yml`, hit the error, and then set `VIDEO_ACTION` to True for fight detection without first resetting `SKELETON_ACTION` to False. That is why yesterday I said I wanted fight detection but got an error about the STGCN folder.
Today I ran the following tests:
1. Skeleton-based action recognition (fall detection): enabled only `SKELETON_ACTION`, everything else False → error.
2. Image-classification-based action recognition (pedestrian detection): enabled only `ID_BASED_CLSACTION`, everything else False → error.
3. Detection-based action recognition (pedestrian detection): enabled only `ID_BASED_DETACTION`, everything else False → error.
4. Video-classification-based action recognition (fight detection): enabled only `VIDEO_ACTION`, everything else False → works. Given an input video file, it outputs a video file with the confidence marked at the moments of fighting.

The first three errors are all similar to yesterday's STGCN-folder error, all of the form `paddle/infer_weights/xxx`.
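For reference, here are the switches involved in the four tests above, as they appear in `infer_cfg_pphuman.yml` (only the `enable` flags are shown; the full printed config appears later in this comment):

```yaml
# infer_cfg_pphuman.yml -- only the enable switches relevant to the four tests
SKELETON_ACTION:       # 1. skeleton-based (fall detection, STGCN) -> error
  enable: false
ID_BASED_CLSACTION:    # 2. image-classification-based -> error
  enable: false
ID_BASED_DETACTION:    # 3. detection-based -> error
  enable: false
VIDEO_ACTION:          # 4. video-classification-based (fight detection) -> works
  enable: true
```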
A few questions:
1. How can the first three errors be fixed?
2. Does fight detection only accept a video file? I tried a single image and got this error:
```
File "deploy/pipeline/pipeline.py", line 1321, in <module>
  main()
File "deploy/pipeline/pipeline.py", line 1308, in main
  pipeline.run_multithreads()
File "deploy/pipeline/pipeline.py", line 179, in run_multithreads
  self.predictor.run(self.input)
File "deploy/pipeline/pipeline.py", line 533, in run
  self.predict_video(input, thread_idx=thread_idx)
File "deploy/pipeline/pipeline.py", line 986, in predict_video
  if frame_id % sample_freq == 0:
ZeroDivisionError: integer division or modulo by zero
```
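For what it's worth, the ZeroDivisionError suggests `sample_freq` ends up 0 when the input is an image (`VIDEO_ACTION` is a video classifier, so single-image input may simply be unsupported). If one wanted to patch around the crash locally, a guard like this hypothetical helper illustrates the idea — it is not the upstream fix:

```python
# Hypothetical guard for the failing check in predict_video
# (deploy/pipeline/pipeline.py): frame_id % sample_freq crashes when
# sample_freq is 0, which appears to happen for image input.
def should_sample(frame_id: int, sample_freq: int) -> bool:
    # Never sample when sample_freq is 0, instead of dividing by zero.
    return sample_freq > 0 and frame_id % sample_freq == 0
```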
3. After fight detection finishes, besides manually opening the generated video file, how can I tell whether the video contains fighting?
4. My actual requirement: I have several cameras. Either I pull the stream or frames (images) directly from a camera, or someone else pulls them and I fetch the data from them; then I detect whether there is fighting in the footage. Can PaddleDetection consume camera data directly (whether I pull it myself or get it from a third party)? If so, is there an example? If not, should I keep saving camera frames as images, stitch them into short clips (say, 5 minutes each), and feed those to PaddleDetection? That feels laggy. Or is there a better approach?
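On the buffering idea in question 4: instead of writing frames to disk and stitching a long file, a rolling in-memory window can hand short clips to a recognizer as soon as they fill, which keeps latency low. A sketch of that pattern (the window and stride sizes are illustrative, not PaddleDetection parameters):

```python
from collections import deque

# Hedged sketch of a streaming alternative to "save frames, stitch a
# 5-minute file": keep a rolling window of incoming frames and yield
# each full window as a clip for the classifier as soon as it fills.
def clip_windows(frames, window=8, stride=4):
    buf = deque(maxlen=window)
    for i, frame in enumerate(frames):
        buf.append(frame)
        # Emit a clip once the buffer is full, then every `stride` frames.
        if len(buf) == window and (i + 1 - window) % stride == 0:
            yield list(buf)  # pass this clip to the fight classifier
```

With 12 incoming frames, window=8, and stride=4, this yields two overlapping clips (frames 0-7 and 4-11), so each new frame is classified within at most `stride` frames of arriving.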
Re 1: the model files are actually present — the folders contain files (I also posted screenshots yesterday, but only the links showed up, not the images). They were downloaded automatically, yet the errors still occur, for example these:
Re 3①: how should I modify the code? Currently I use `subprocess.run()` to launch `deploy/pipeline/pipeline.py`, capture the printed output, and check whether it contains a line like `video_action_res: {'class': 1, 'score': 0.7190255}`; if it does, I treat the video as containing fighting.
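That string-matching approach can be made a little more robust by matching the result pattern rather than one exact score. A sketch of the same idea (the command and the `video_action_res` line format are taken from this thread; class 1 is assumed to be the "fight" label of the ppTSM_fight model):

```python
import re
import subprocess

# Matches result lines such as:
#   video_action_res: {'class': 1, 'score': 0.7190255}
FIGHT_RE = re.compile(r"video_action_res: \{'class': 1, 'score': ([\d.]+)\}")

def parse_fight_scores(stdout: str):
    """Return the confidence of every class-1 (assumed: fight) result."""
    return [float(m.group(1)) for m in FIGHT_RE.finditer(stdout)]

def detect_fight(video_path: str):
    # Hypothetical wrapper around the subprocess approach described above.
    cmd = [
        "python", "deploy/pipeline/pipeline.py",
        "--config", "deploy/pipeline/config/infer_cfg_pphuman.yml",
        "--video_file={}".format(video_path),
        "--device=gpu",
    ]
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return parse_fight_scores(proc.stdout)
```

Any non-empty list from `detect_fight` would then mean at least one clip was classified as fighting, along with its scores.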
Re 3②: running `deploy/pipeline/pipeline.py` prints output like the following — does it mean there is fighting while the frame id is between 40-50 and between 100-110?
```
video fps: 30, frame_count: 160
Thread: 0; frame id: 0
Thread: 0; frame id: 10
Thread: 0; frame id: 20
Thread: 0; frame id: 30
Thread: 0; frame id: 40
W1105 09:04:42.407253 12872 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.3, Runtime API Version: 11.8
W1105 09:04:42.407253 12872 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
I1105 09:04:42.407253 12872 program_interpreter.cc:243] New Executor is Running.
video_action_res: {'class': 1, 'score': 0.5839483}
Thread: 0; frame id: 50
Thread: 0; frame id: 60
Thread: 0; frame id: 70
Thread: 0; frame id: 80
Thread: 0; frame id: 90
Thread: 0; frame id: 100
video_action_res: {'class': 1, 'score': 0.7190255}
Thread: 0; frame id: 110
Thread: 0; frame id: 120
Thread: 0; frame id: 130
Thread: 0; frame id: 140
Thread: 0; frame id: 150
save result to output\cam1_9.mp4
------------------ Inference Time Info ----------------------
total_time(ms): 150.2, img_num: 109
video_action time(ms): 60.1; per frame average time(ms): 0.5513761467889908
average latency time(ms): 1.38, QPS: 725.699068
```
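My reading of the config below (`frame_len: 8`, `sample_freq: 7`) is that the recognizer takes every 7th frame and runs once it has collected 8 of them, so each `video_action_res` covers roughly a 56-frame span — which would line up with the results printing near frames 49 and 105 above, not just the 40-50 and 100-110 intervals. A sketch of that arithmetic (an assumption about the sampling logic, not verified against the pipeline source):

```python
# Which source frames feed the k-th video_action_res, assuming the
# pipeline samples every sample_freq-th frame and infers once it has
# frame_len samples (my reading of predict_video; not verified).
def clip_span(k: int, frame_len: int = 8, sample_freq: int = 7):
    first = k * frame_len * sample_freq            # first sampled frame id
    last = first + (frame_len - 1) * sample_freq   # last sampled frame id
    return first, last
```

Under that assumption, the two results above would correspond to frames 0-49 and 56-105: both windows were classified as class 1 (fight).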
Re 3③: I ran `python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=z:/cam1_9.mp4 --device=gpu`; the complete output is below. Does my run look normal? For example, it requests an 8 GB GPU memory pool by default while I only have 6 GB — does that matter? Can the requested memory size be set manually?
```
----------- Running Arguments -----------
ATTR:
  batch_size: 8
  enable: false
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/PPLCNet_x1_0_person_attribute_945_infer.zip
DET:
  batch_size: 1
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip
ID_BASED_CLSACTION:
  batch_size: 8
  display_frames: 80
  enable: false
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/PPHGNet_tiny_calling_halfbody.zip
  skip_frame_num: 2
  threshold: 0.8
ID_BASED_DETACTION:
  batch_size: 8
  display_frames: 80
  enable: false
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/ppyoloe_crn_s_80e_smoking_visdrone.zip
  skip_frame_num: 2
  threshold: 0.6
KPT:
  batch_size: 8
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/dark_hrnet_w32_256x192.zip
MOT:
  batch_size: 1
  enable: false
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/mot_ppyoloe_l_36e_pipeline.zip
  skip_frame_num: -1
  tracker_config: deploy/pipeline/config/tracker_config.yml
REID:
  batch_size: 16
  enable: false
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/reid_model.zip
SKELETON_ACTION:
  batch_size: 1
  coord_size:
  - 384
  - 512
  display_frames: 80
  enable: false
  max_frames: 50
  model_dir: https://bj.bcebos.com/v1/paddledet/models/pipeline/STGCN.zip
VIDEO_ACTION:
  batch_size: 1
  enable: true
  frame_len: 8
  model_dir: https://videotag.bj.bcebos.com/PaddleVideo-release2.3/ppTSM_fight.zip
  sample_freq: 7
  short_size: 340
  target_size: 320
attr_thresh: 0.5
crop_thresh: 0.5
kpt_thresh: 0.2
visual: true
warmup_frame: 50
------------------------------------------
VideoAction Recognition enabled
DET model dir: C:\Users\Abc/.cache/paddle/infer_weights\mot_ppyoloe_l_36e_pipeline
mot_model_dir model_dir: C:\Users\Abc/.cache/paddle/infer_weights\mot_ppyoloe_l_36e_pipeline
KPT model dir: C:\Users\Abc/.cache/paddle/infer_weights\dark_hrnet_w32_256x192
VIDEO_ACTION model dir: C:\Users\Abc/.cache/paddle/infer_weights\ppTSM
E1105 11:49:36.710119 23412 analysis_predictor.cc:2137] Allocate too much memory for the GPU memory pool, assigned 8000 MB
E1105 11:49:36.710119 23412 analysis_predictor.cc:2140] Try to shrink the value by setting AnalysisConfig::EnableUseGpu(...)
--- Running analysis [ir_graph_build_pass]
I1105 11:49:36.738330 23412 executor.cc:184] Old Executor is Running.
--- Running analysis [ir_analysis_pass]
--- Running IR pass [map_op_to_another_pass]
I1105 11:49:36.824437 23412 fuse_pass_base.cc:59] --- detected 55 subgraphs
--- Running IR pass [is_test_pass]
--- Running IR pass [simplify_with_basic_ops_pass]
--- Running IR pass [delete_quant_dequant_linear_op_pass]
--- Running IR pass [delete_weight_dequant_linear_op_pass]
--- Running IR pass [constant_folding_pass]
--- Running IR pass [silu_fuse_pass]
--- Running IR pass [conv_bn_fuse_pass]
I1105 11:49:36.929447 23412 fuse_pass_base.cc:59] --- detected 55 subgraphs
--- Running IR pass [conv_eltwiseadd_bn_fuse_pass]
--- Running IR pass [embedding_eltwise_layernorm_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v2]
--- Running IR pass [vit_attention_fuse_pass]
--- Running IR pass [fused_multi_transformer_encoder_pass]
--- Running IR pass [fused_multi_transformer_decoder_pass]
--- Running IR pass [fused_multi_transformer_encoder_fuse_qkv_pass]
--- Running IR pass [fused_multi_transformer_decoder_fuse_qkv_pass]
--- Running IR pass [multi_devices_fused_multi_transformer_encoder_pass]
--- Running IR pass [multi_devices_fused_multi_transformer_encoder_fuse_qkv_pass]
--- Running IR pass [multi_devices_fused_multi_transformer_decoder_fuse_qkv_pass]
--- Running IR pass [fuse_multi_transformer_layer_pass]
--- Running IR pass [gpu_cpu_squeeze2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_reshape2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_flatten2_matmul_fuse_pass]
--- Running IR pass [gpu_cpu_map_matmul_v2_to_mul_pass]
I1105 11:49:38.095309 23412 fuse_pass_base.cc:59] --- detected 1 subgraphs
--- Running IR pass [gpu_cpu_map_matmul_v2_to_matmul_pass]
--- Running IR pass [matmul_scale_fuse_pass]
--- Running IR pass [multihead_matmul_fuse_pass_v3]
--- Running IR pass [gpu_cpu_map_matmul_to_mul_pass]
--- Running IR pass [fc_fuse_pass]
I1105 11:49:38.129885 23412 fuse_pass_base.cc:59] --- detected 1 subgraphs
--- Running IR pass [fc_elementwise_layernorm_fuse_pass]
--- Running IR pass [conv_elementwise_add_act_fuse_pass]
--- Running IR pass [conv_elementwise_add2_act_fuse_pass]
--- Running IR pass [conv_elementwise_add_fuse_pass]
I1105 11:49:38.204499 23412 fuse_pass_base.cc:59] --- detected 55 subgraphs
--- Running IR pass [transpose_flatten_concat_fuse_pass]
--- Running IR pass [transfer_layout_pass]
--- Running IR pass [transfer_layout_elim_pass]
--- Running IR pass [auto_mixed_precision_pass]
--- Running IR pass [identity_op_clean_pass]
--- Running IR pass [inplace_op_var_pass]
I1105 11:49:38.212951 23412 fuse_pass_base.cc:59] --- detected 2 subgraphs
--- Running analysis [ir_params_sync_among_devices_pass]
I1105 11:49:38.214633 23412 ir_params_sync_among_devices_pass.cc:51] Sync params from CPU to GPU
--- Running analysis [adjust_cudnn_workspace_size_pass]
--- Running analysis [inference_op_replace_pass]
--- Running analysis [save_optimized_model_pass]
--- Running analysis [ir_graph_to_program_pass]
I1105 11:49:38.334097 23412 analysis_predictor.cc:2080] ======= ir optimization completed =======
I1105 11:49:38.334097 23412 naive_executor.cc:200] --- skip [feed], feed -> data_batch_0
I1105 11:49:38.335599 23412 naive_executor.cc:200] --- skip [linear_2.tmp_1], fetch -> fetch
video fps: 30, frame_count: 160
Thread: 0; frame id: 0
Thread: 0; frame id: 10
Thread: 0; frame id: 20
Thread: 0; frame id: 30
Thread: 0; frame id: 40
W1105 11:49:40.078011 23412 gpu_resources.cc:119] Please NOTE: device: 0, GPU Compute Capability: 8.6, Driver API Version: 12.3, Runtime API Version: 11.8
W1105 11:49:40.078011 23412 gpu_resources.cc:164] device: 0, cuDNN Version: 8.9.
I1105 11:49:40.078011 23412 program_interpreter.cc:243] New Executor is Running.
video_action_res: {'class': 1, 'score': 0.5839483}
Thread: 0; frame id: 50
Thread: 0; frame id: 60
Thread: 0; frame id: 70
Thread: 0; frame id: 80
Thread: 0; frame id: 90
Thread: 0; frame id: 100
video_action_res: {'class': 1, 'score': 0.7190255}
Thread: 0; frame id: 110
Thread: 0; frame id: 120
Thread: 0; frame id: 130
Thread: 0; frame id: 140
Thread: 0; frame id: 150
save result to output\cam1_9.mp4
------------------ Inference Time Info ----------------------
total_time(ms): 216.6, img_num: 109
video_action time(ms): 80.10000000000001; per frame average time(ms): 0.734862385321101
average latency time(ms): 1.99, QPS: 503.231764
```
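On the 8000 MB warning in the log above: the error message itself points at `AnalysisConfig::EnableUseGpu`, and the Python inference API exposes the same knob as `Config.enable_use_gpu(memory_pool_init_size_mb, device_id)`. I have not traced where exactly PaddleDetection's pipeline builds its `Config`, so treat this as the general Paddle Inference mechanism rather than a pipeline patch (the model paths and the 500 MB value are placeholders):

```python
from paddle.inference import Config, create_predictor

# Generic Paddle Inference setup with a smaller initial GPU memory pool.
# Model paths are placeholders; 500 MB is illustrative, not a recommendation.
config = Config("model.pdmodel", "model.pdiparams")
config.enable_use_gpu(500, 0)  # (memory_pool_init_size_mb, gpu_device_id)
predictor = create_predictor(config)
```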
Re 4: I had actually seen the RTSP option before but forgot about it when filing the issue. I just tried it and it errors (connecting to the RTSP stream with OpenCV alone works, and I can see the camera feed). I did not modify anything in `deploy/pipeline/config/examples/infer_cfg_human_attr.yml`. The command was:
```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/examples/infer_cfg_human_attr.yml -o visual=False --rtsp rtsp://abc:abc@192.168.0.1:554/Streaming/Channels/101 --device=gpu
```
```
Traceback (most recent call last):
  File "deploy/pipeline/pipeline.py", line 1321, in <module>
    main()
  File "deploy/pipeline/pipeline.py", line 1303, in main
    cfg = merge_cfg(FLAGS)  # use command params to update config
  File "Z:\PaddleDetection-develop\deploy\pipeline\cfg_utils.py", line 212, in merge_cfg
    pred_config = merge_opt(pred_config, args_dict)
  File "Z:\PaddleDetection-develop\deploy\pipeline\cfg_utils.py", line 202, in merge_opt
    for sub_k, sub_v in value.items():
AttributeError: 'bool' object has no attribute 'items'
```
I also tried:
```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml -o visual=False --rtsp rtsp://abc:abc@192.168.0.1:554/Streaming/Channels/101 --device=gpu
```
which errors as well; in `deploy/pipeline/config/infer_cfg_pphuman.yml` only `VIDEO_ACTION` is True, everything else is False.
```
Traceback (most recent call last):
  File "deploy/pipeline/pipeline.py", line 1321, in <module>
    main()
  File "deploy/pipeline/pipeline.py", line 1303, in main
    cfg = merge_cfg(FLAGS)  # use command params to update config
  File "Z:\PaddleDetection-develop\deploy\pipeline\cfg_utils.py", line 212, in merge_cfg
    pred_config = merge_opt(pred_config, args_dict)
  File "Z:\PaddleDetection-develop\deploy\pipeline\cfg_utils.py", line 202, in merge_opt
    for sub_k, sub_v in value.items():
AttributeError: 'bool' object has no attribute 'items'
```
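Both tracebacks above die in `merge_opt` when `-o visual=False` hands it a plain bool: the loop assumes every override value is a nested dict and calls `.items()` on it. A hedged reconstruction of a scalar-safe merge (a sketch of the idea, not the upstream code — the immediate workaround is probably just to drop `-o visual=False` and set `visual: false` in the YAML instead):

```python
# Sketch of a config merge that tolerates scalar overrides such as
# visual=False. This merge_opt is a hypothetical stand-in, not
# PaddleDetection's implementation.
def merge_opt(config: dict, overrides: dict) -> dict:
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(config.get(key), dict):
            merge_opt(config[key], value)  # recurse into sub-configs
        else:
            config[key] = value            # scalar override: assign directly
    return config
```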
Issue confirmed / Search before asking
Bug Component
Inference
Describe the Bug
Freshly set-up PaddleDetection environment, installed from the zip download of the repo's develop branch. I wanted to try fight detection and had an mp4 file ready locally. Following the usage in the docs, the run fails:
```
python deploy/pipeline/pipeline.py --config deploy/pipeline/config/infer_cfg_pphuman.yml --video_file=z:/cam1_9.mp4 --device=gpu
```