ZoneMinder / zmeventnotification

Machine Learning powered Secure Websocket & MQTT based ZoneMinder event notification server

Monitor overrides for detection_sequence not working #399

Closed by forkineye 3 years ago

forkineye commented 3 years ago

Configuration

I'm running dlandon/zoneminder.machine.learning.
Event Server version: 6.1.23
Hooks version (if you are using Object Detection): app - 6.1.23, pyzm - 0.3.48
Are you using MLAPI? (Y/N): No
The version of ZoneMinder you are using: 1.36.5

What is the nature of your issue

Monitor overrides in objectconfig.ini are being applied but are not taking effect for detection_sequence.

Details

I have detection_sequence overrides for every monitor, and only monitor-3 should have alpr. The log file shows the override being picked up, but in execution an alpr check still runs. I originally had object,face under [general] and object,face,alpr under [monitor-3], but that didn't work either. I tried having nothing under [general] and setting each monitor explicitly, and still no joy. Is this a bug, or am I just doing something wrong?

objectconfig.ini

[general]
version=1.2
cpu_max_processes=3
tpu_max_processes=1
gpu_max_processes=1
cpu_max_lock_wait=100
tpu_max_lock_wait=100
gpu_max_lock_wait=100
pyzm_overrides={'log_level_debug':5}
secrets = /etc/zm/secrets.ini
portal=!ZM_PORTAL
user=!ZM_USER
password=!ZM_PASSWORD
api_portal=!ZM_API_PORTAL
allow_self_signed=yes
match_past_detections=no
past_det_max_diff_area=5%
max_detection_size=90%
detection_mode=all
base_data_path=/var/lib/zmeventnotification
frame_id=bestmatch
resize=800
delete_after_analyze=yes
write_debug_image=no
write_image_to_zm=yes
show_percent=yes
poly_color=(255,255,255)
poly_thickness=2
only_triggered_zm_zones=no
wait=5

[animation]
create_animation=no
animation_types='mp4,gif'
animation_width=640
animation_retry_sleep=15
animation_max_tries=4
fast_gif=no

[remote]
ml_user=!ML_USER
ml_password=!ML_PASSWORD

[object]
object_detection_pattern=(person|car|motorbike|bus|truck|boat)
object_min_confidence=0.3
object_framework=coral_edgetpu
object_processor=tpu
object_weights={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names

tpu_object_weights_mobiledet={{base_data_path}}/models/coral_edgetpu/ssdlite_mobiledet_coco_qat_postprocess_edgetpu.tflite
tpu_object_weights_mobilenet={{base_data_path}}/models/coral_edgetpu/ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite
tpu_object_labels={{base_data_path}}/models/coral_edgetpu/coco_indexed.names
tpu_object_framework=coral_edgetpu
tpu_object_processor=tpu
tpu_min_confidence=0.6

yolo4_object_weights={{base_data_path}}/models/yolov4/yolov4.weights
yolo4_object_labels={{base_data_path}}/models/yolov4/coco.names
yolo4_object_config={{base_data_path}}/models/yolov4/yolov4.cfg
yolo4_object_framework=opencv
yolo4_object_processor=gpu

yolo3_object_weights={{base_data_path}}/models/yolov3/yolov3.weights
yolo3_object_labels={{base_data_path}}/models/yolov3/coco.names
yolo3_object_config={{base_data_path}}/models/yolov3/yolov3.cfg
yolo3_object_framework=opencv
yolo3_object_processor=gpu
tinyyolo_object_config={{base_data_path}}/models/tinyyolov4/yolov4-tiny.cfg
tinyyolo_object_weights={{base_data_path}}/models/tinyyolov4/yolov4-tiny.weights
tinyyolo_object_labels={{base_data_path}}/models/tinyyolov4/coco.names
tinyyolo_object_framework=opencv
tinyyolo_object_processor=gpu

[face]
face_detection_pattern=.*
known_images_path={{base_data_path}}/known_faces
unknown_images_path={{base_data_path}}/unknown_faces
save_unknown_faces=yes
save_unknown_faces_leeway_pixels=100
face_detection_framework=dlib
face_num_jitters=1
face_model=cnn
face_upsample_times=1
face_recog_dist_threshold=0.6
face_train_model=cnn
unknown_face_name=face

[alpr]
alpr_detection_pattern=.*
alpr_use_after_detection_only=yes
alpr_api_type=cloud
alpr_service=plate_recognizer
alpr_key=!PLATEREC_ALPR_KEY
platerec_stats=yes
platerec_min_dscore=0.1
platerec_min_score=0.2
openalpr_cmdline_binary=alpr
openalpr_cmdline_params=-j -d
openalpr_cmdline_min_confidence=0.3

# Driveway
[monitor-1]
detection_sequence=object,face

# Porch
[monitor-2]
detection_sequence=object,face

# Street
[monitor-3]
detection_sequence=object,face,alpr

# Living Room
[monitor-4]
detection_sequence=object,face

[ml]
use_sequence = yes
disable_locks= no
stream_sequence = {
        'frame_strategy': 'most_models',
        'frame_set': 'snapshot,alarm',
        'contig_frames_before_error': 5,
        'max_attempts': 3,
        'sleep_between_attempts': 4,
        'resize':800

    }

ml_sequence= {
        'general': {
            'model_sequence': 'object,face,alpr',
            'disable_locks': '{{disable_locks}}',
            'match_past_detections': '{{match_past_detections}}',
            'past_det_max_diff_area': '5%',
            'car_past_det_max_diff_area': '10%',
            #'ignore_past_detection_labels': ['dog', 'cat']

        },
        'object': {
            'general':{
                'pattern':'{{object_detection_pattern}}',
                'same_model_sequence_strategy': 'first' # also 'most', 'most_unique'
            },
            'sequence': [{
                #First run on TPU with higher confidence
                'name': 'TPU object detection',
                'enabled': 'no',
                'object_weights':'{{tpu_object_weights_mobiledet}}',
                'object_labels': '{{tpu_object_labels}}',
                'object_min_confidence': {{tpu_min_confidence}},
                'object_framework':'{{tpu_object_framework}}',
                'tpu_max_processes': {{tpu_max_processes}},
                'tpu_max_lock_wait': {{tpu_max_lock_wait}},
                'max_detection_size':'{{max_detection_size}}'

            },
            {
                # YoloV4 on GPU if TPU fails (because sequence strategy is 'first')
                'name': 'YoloV4 GPU/CPU',
                'enabled': 'yes', # don't really need to say this explicitly
                'object_config':'{{yolo4_object_config}}',
                'object_weights':'{{yolo4_object_weights}}',
                'object_labels': '{{yolo4_object_labels}}',
                'object_min_confidence': {{object_min_confidence}},
                'object_framework':'{{yolo4_object_framework}}',
                'object_processor': '{{yolo4_object_processor}}',
                'gpu_max_processes': {{gpu_max_processes}},
                'gpu_max_lock_wait': {{gpu_max_lock_wait}},
                'cpu_max_processes': {{cpu_max_processes}},
                'cpu_max_lock_wait': {{cpu_max_lock_wait}},
                'max_detection_size':'{{max_detection_size}}'

            }]
        },
        'face': {
            'general':{
                'pattern': '{{face_detection_pattern}}',                
                'same_model_sequence_strategy': 'union' # combines all outputs of this sequence
            },
            'sequence': [
            {
                'name': 'TPU face detection',
                'enabled': 'no',
                'face_detection_framework': 'tpu',
                'face_weights':'/var/lib/zmeventnotification/models/coral_edgetpu/ssd_mobilenet_v2_face_quant_postprocess_edgetpu.tflite',
                'face_min_confidence': 0.3,

            },
            {
                'name': 'DLIB based face recognition',
                'enabled': 'yes',
                #'pre_existing_labels': ['face'], # If you use TPU detection first, we can run this ONLY if TPU detects a face first
                'save_unknown_faces':'{{save_unknown_faces}}',
                'save_unknown_faces_leeway_pixels':{{save_unknown_faces_leeway_pixels}},
                'face_detection_framework': '{{face_detection_framework}}',
                'known_images_path': '{{known_images_path}}',
                'unknown_images_path': '{{unknown_images_path}}',
                'face_model': '{{face_model}}',
                'face_train_model': '{{face_train_model}}',
                'face_recog_dist_threshold': '{{face_recog_dist_threshold}}',
                'face_num_jitters': '{{face_num_jitters}}',
                'face_upsample_times':'{{face_upsample_times}}',
                'gpu_max_processes': {{gpu_max_processes}},
                'gpu_max_lock_wait': {{gpu_max_lock_wait}},
                'cpu_max_processes': {{cpu_max_processes}},
                'cpu_max_lock_wait': {{cpu_max_lock_wait}},
                'max_size':800
            }]
        },

        'alpr': {
            'general':{
                'same_model_sequence_strategy': 'first',
                'pre_existing_labels':['car', 'motorbike', 'bus', 'truck', 'boat'],
                'pattern': '{{alpr_detection_pattern}}'

            },
            'sequence': [{
                'name': 'Platerecognizer cloud',
                'enabled': 'yes',
                'alpr_api_type': '{{alpr_api_type}}',
                'alpr_service': '{{alpr_service}}',
                'alpr_key': '{{alpr_key}}',
                'platerec_stats': '{{platerec_stats}}',
                'platerec_min_dscore': {{platerec_min_dscore}},
                'platerec_min_score': {{platerec_min_score}},
                'max_size':1600,
                #'platerec_payload': {
                #'regions':['us'],
                #'camera_id':12,
                #},
                #'platerec_config': {
                #    'region':'strict',
                #    'mode': 'fast'
                #}
            }]
        }
    }

Debug Logs

Truncated; I can post the full log if needed.

06/25/21 11:53:17 zmesdetect_m4[83170] DBG4 utils.py:456 [Now checking for monitor overrides]
06/25/21 11:53:17 zmesdetect_m4[83170] DBG4 utils.py:474 [[monitor-4] overrides key:detection_sequence with value:object,face]
...
06/25/21 11:53:23 zmesdetect_m4[83170] DBG1 detect_sequence.py:639 [Sequence of detection types to execute: ['object', 'face', 'alpr']]
...
06/25/21 11:53:36 zmesdetect_m4[83170] DBG1 detect_sequence.py:644 [============ Frame: snapshot Running alpr detection type in sequence ==================]
...
06/25/21 11:53:36 zmesdetect_m4[83170] DBG1 detect_sequence.py:644 [============ Frame: alarm Running object detection type in sequence ==================]
06/25/21 11:53:36 zmesdetect_m4[83170] DBG3 detect_sequence.py:664 [object has a same_model_sequence strategy of first]
06/25/21 11:53:36 zmesdetect_m4[83170] DBG3 detect_sequence.py:676 [--------- Frame:alarm Running variation: #1 -------------]
forkineye commented 3 years ago

Further research led me to the [ml] config section. I was going off the monitor-999 example in the config, which was not applicable since use_sequence was enabled. After creating my own key and referencing it from model_sequence, I resolved my issue. I did notice an error in the following section of the docs, though: https://github.com/pliablepixels/zmeventnotification/blame/master/docs/guides/hooks.rst#L334. If the key value is quoted in the ini file, template substitution generates nested quotes and parsing fails. For example:

06/25/21 14:12:45 zmesdetect_m1[93393] DBG2 zm_detect.py:329 [using ml_sequence]
06/25/21 14:12:45 zmesdetect_m1[93393] FAT zm_detect.py:554 [Unrecoverable error:invalid syntax (<unknown>, line 3) Traceback:Traceback (most recent call last):
  File "/var/lib/zmeventnotification/bin/zm_detect.py", line 549, in <module>
    main_handler()
  File "/var/lib/zmeventnotification/bin/zm_detect.py", line 333, in main_handler
    ml_options = ast.literal_eval(ml_options)
  File "/usr/lib/python3.8/ast.py", line 59, in literal_eval
    node_or_string = parse(node_or_string, mode='eval')
  File "/usr/lib/python3.8/ast.py", line 47, in parse
    return compile(source, filename, mode, flags,
  File "<unknown>", line 3
    'model_sequence': ''object,face'',
                        ^
SyntaxError: invalid syntax
]
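
Here's a minimal standalone reproduction of that failure (my own illustration; the template string and variable names are made up, but the ast.literal_eval call matches the traceback above):

# zm_detect.py substitutes the {{...}} template variables into the
# ml_sequence string and then parses the result with ast.literal_eval,
# so a quoted ini value turns into nested quotes. Illustrative names only.
import ast

template = "{'general': {'model_sequence': '{{model_seq}}'}}"

# Unquoted ini value: model_seq=object,face -> parses fine
good = template.replace("{{model_seq}}", "object,face")
print(ast.literal_eval(good))  # {'general': {'model_sequence': 'object,face'}}

# Quoted ini value: model_seq='object,face' -> nested quotes
bad = template.replace("{{model_seq}}", "'object,face'")
ast.literal_eval(bad)  # raises SyntaxError: invalid syntax, as in the log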

Here's the config that worked for me:

# Driveway
[monitor-1]
model_seq=object,face

# Porch
[monitor-2]
model_seq=object,face

# Street
[monitor-3]
model_seq=object,face,alpr

# Living Room
[monitor-4]
model_seq=object,face

[ml]
...
ml_sequence= {
        'general': {
            'model_sequence': '{{model_seq}}',
...

So, my issue is fixed, but I'll leave this open in case you want to update the example and docs. Thanks for all the work you've put into this project.

pliablepixels commented 3 years ago

Thanks, fixed. I'm curious about your monitor-999 comment and use_sequence. How are they related?

forkineye commented 3 years ago

In the monitor-999 example, there is detection_sequence=object,alpr - https://github.com/pliablepixels/zmeventnotification/blob/master/hook/objectconfig.ini#L368

However, model_sequence is set to 'object,face,alpr', and since use_sequence is enabled, the detection_sequence override for monitor-999 would be ignored - https://github.com/pliablepixels/zmeventnotification/blob/master/hook/objectconfig.ini#L403
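
To put the relationship in rough pseudo-Python (a sketch of how I understand the selection works, not the actual zm_detect.py source; the names are illustrative):

# When use_sequence is yes, ml_sequence/model_sequence drives detection and
# the legacy per-monitor detection_sequence override is never consulted.
config = {
    "use_sequence": "yes",
    "detection_sequence": "object,alpr",  # monitor-999 override (ignored)
}
ml_sequence = {"general": {"model_sequence": "object,face,alpr"}}

if config["use_sequence"] == "yes":
    models = ml_sequence["general"]["model_sequence"].split(",")
else:
    models = config["detection_sequence"].split(",")  # legacy path

print(models)  # ['object', 'face', 'alpr']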