ZoneMinder / zmeventnotification

Machine Learning powered Secure Websocket & MQTT based ZoneMinder event notification server

Error parsing objectconfig.ini file #280

Closed: preeny95 closed this issue 4 years ago

preeny95 commented 4 years ago

Before you create an issue, please make sure you have read the README. If you are asking about the object detection part, I don't provide support for it unless you've tried hard enough.

Event Server version: 5.15.5

Hooks version (if your question is about the machine learning hooks): you can get the version by doing `python -c "import zmes_hook_helpers as h; print (h.version)"`

The version of ZoneMinder you are using: 1.34.16

What is the nature of your issue:

Hello! I am currently running ZoneMinder inside Docker using the dlandon/zoneminder container. However, I have started to get an error whilst parsing the objectconfig.ini file since updating to the latest version of zmeventnotification. I have attached my debug logs below. I have checked inside the container, and the config file is mounted correctly and looks correct to me. Here are the first few lines of my config:
```ini
# Configuration file for object detection
# NOTE: ALL parameters here can be overridden
# on a per monitor basis if you want. Just
# duplicate it inside the correct [monitor-<num>] section
[general]
# This is an optional file
# If specified, you can specify tokens with secret values in that file
# and only refer to the tokens in your main config file
secrets = /etc/zm/secrets.ini
# base data path for various files the ES+OD needs
# we support in-config variable substitution as well
base_data_path=/var/lib/zmeventnotification
```
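
For context: the `!TOKEN` values referenced above (and throughout the full config later in this thread) are placeholders that the hooks resolve against the secrets file at startup; the `Secret token found in config` debug lines below show that happening. A minimal sketch of the idea, assuming `secrets.ini` keeps its values under a `[secrets]` section (illustrative only, not the actual `utils.py` code):

```python
# Minimal sketch of !TOKEN substitution against secrets.ini.
# Illustrative only; not the actual zmes_hook_helpers/utils.py code.
# Assumes secrets.ini stores its values under a [secrets] section.
from configparser import ConfigParser

def resolve_secrets(config_path, secrets_path):
    config = ConfigParser(interpolation=None)
    config.read(config_path)
    secrets = ConfigParser(interpolation=None)
    secrets.read(secrets_path)
    for section in config.sections():
        for key, value in config.items(section):
            if value and value.startswith('!'):
                # '!ZM_PORTAL' in the config maps to 'ZM_PORTAL' in secrets.ini
                config.set(section, key, secrets.get('secrets', value[1:]))
    return config
```
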
I have ensured that my config file is up to date with the current examples as well.

Details: if it's a bug, please describe what is happening, what should happen, and how to reproduce it if it's not obvious.

Debug Logs (if applicable):
```
06/29/20 10:07:01 zmesdetect_m1[3604] INF zm_detect.py:181 [---------| hook version: 5.15.5, ES version: 5.15-Docker , OpenCV version: 4.2.0|------------]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG1 utils.py:284 [secret filename: /etc/zm/secrets.ini]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG2 utils.py:257 [Secret token found in config: !ZM_PORTAL]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG2 utils.py:257 [Secret token found in config: !ZM_API_PORTAL]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG2 utils.py:257 [Secret token found in config: !ZM_USER]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG2 utils.py:257 [Secret token found in config: !ZM_PASSWORD]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG2 utils.py:257 [Secret token found in config: !ZM_USER]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG2 utils.py:257 [Secret token found in config: !ZM_PASSWORD]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG2 utils.py:257 [Secret token found in config: !ML_USER]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG2 utils.py:257 [Secret token found in config: !ML_PASSWORD]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG2 utils.py:257 [Secret token found in config: !PLATEREC_ALPR_KEY]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG1 utils.py:307 [allowing self-signed certs to work...]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG2 utils.py:61 [Getting ZM zones using https://localhost/zm/index.php/api/zones/forMonitor/1.json?user=xxx&pass=yyy]
06/29/20 10:07:01 zmesdetect_m1[3604] DBG2 utils.py:71 [Basic auth config found, associating handlers]
06/29/20 10:07:02 zmesdetect_m1[3604] ERR utils.py:348 [Error parsing config:/etc/zm/objectconfig.ini]
06/29/20 10:07:02 zmesdetect_m1[3604] ERR utils.py:349 [Error was:Expecting value: line 1 column 1 (char 0)]
```

Thanks!
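
One note on the last two log lines: "Expecting value: line 1 column 1 (char 0)" is the stock message Python's JSON decoder raises when fed an empty or non-JSON payload, which suggests the failure is in parsing the response of the zones API call made just before, rather than in the .ini file itself. A quick way to reproduce the exact message (illustrative only):

```python
import json

# Passing an empty (or HTML) response body to the JSON decoder
# reproduces the exact message seen in the logs above.
try:
    json.loads('')
except json.JSONDecodeError as e:
    print(e)  # Expecting value: line 1 column 1 (char 0)
```
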

pliablepixels commented 4 years ago

How did you update to the latest ES/hooks version?

preeny95 commented 4 years ago

https://github.com/dlandon/zoneminder/blob/master/init/40_firstrun.sh#L25 - as part of the container's init script, the author pulls in the latest version as a tgz file.

pliablepixels commented 4 years ago
  1. Can you post your complete objectconfig?
  2. Looks like the error kicks in right after basic auth, which you have enabled. What happens if you disable basic auth?
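
The "Basic auth config found, associating handlers" debug line earlier suggests urllib-style auth handler wiring along these lines (a sketch under that assumption, not the hooks' actual code):

```python
# Sketch of urllib basic-auth handler wiring; an assumption based on the
# "associating handlers" debug line, not the actual utils.py implementation.
import urllib.request

def make_opener(portal_url, user, password):
    mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
    mgr.add_password(None, portal_url, user, password)
    return urllib.request.build_opener(
        urllib.request.HTTPBasicAuthHandler(mgr))

# opener = make_opener('https://localhost/zm', 'basic_user', 'basic_password')
```
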
preeny95 commented 4 years ago

Sure thing :)

```ini
# NOTE: ALL parameters here can be overridden
# on a per monitor basis if you want. Just
# duplicate it inside the correct [monitor-<num>] section

[general]
# This is an optional file
# If specified, you can specify tokens with secret values in that file
# and only refer to the tokens in your main config file
secrets = /etc/zm/secrets.ini

# base data path for various files the ES+OD needs
# we support in-config variable substitution as well
base_data_path=/var/lib/zmeventnotification

# It seems certain systems don't follow regular
# ZM conventions on install paths. This may cause 
# problems with pyzm that the hooks use to do logging
# Look at https://pyzm.readthedocs.io/en/latest/source/pyzm.html#pyzm.ZMLog.init for parameters. Default is "{}"
# You can also use this to control logging irrespective of ZM log settings
#pyzm_overrides = {'conf_path':'/etc/zm'}
pyzm_overrides={'log_level_debug':2}

# base path where ZM config files reside
# this is needed by pyzm especially if your paths are different
# default is /etc/zm
base_zm_conf_path=/etc/zm

# portal/user/password are needed if you plan on using ZM's legacy
# auth mechanism to get images
portal=!ZM_PORTAL
user=!ZM_USER
password=!ZM_PASSWORD

# api portal is needed if you plan to use tokens to get images
# requires ZM 1.33 or above
api_portal=!ZM_API_PORTAL

allow_self_signed=yes
# if yes, the last detection will be stored per monitor,
# and bounding boxes (along with labels) that match it
# will be discarded for new detections. This may be helpful
# in getting rid of static objects that get detected
# due to some motion. 
match_past_detections=yes
# The max difference in area between the objects if match_past_detections is on.
# Can also be specified in px like 300px. Default is 5%. Basically, bounding boxes of the same
# object can differ ever so slightly between detections. Contributor @neillbell put in this PR
# to calculate the difference in areas, and based on his tests, 5% worked well. YMMV. Change it if needed.
past_det_max_diff_area=5%

# sequence of models to run for detection

models=all
# if all, then we will loop through all models
# if first then the first success will break out
detection_mode=first

# If you need basic auth to access ZM 
basic_user=!ZM_USER
basic_password=!ZM_PASSWORD
store_frame_in_zm=yes

# this is the global detection pattern used for all monitors.
# choose any set of classes from here https://github.com/pjreddie/darknet/blob/master/data/coco.names
# for everything, make it .*
detect_pattern=(person|car)
#detect_pattern=.*

# global settings for 
# bestmatch, alarm, snapshot OR a specific frame ID
frame_id=bestmatch

# Typically best match means it will first try alarm 
# and then snapshot. If you want it the reverse way, 
# make the order 's,a'. Don't get imaginative here -
# 's,a' is the only thing it understands. Everything else
# means alarm then snapshot.
#bestmatch_order = 's,a'

# this is the width to resize the image to before analysis is done
resize=1200
# set to yes, if you want to remove images after analysis
# setting to yes is recommended to avoid filling up space
# keep to no while debugging/inspecting masks
# Note this does NOT delete debug images later
delete_after_analyze=no

# If yes, will write an image called <filename>-bbox.jpg as well
# which contains the bounding boxes. This has NO relation to 
# write_image_to_zm 
# Typically, if you enable delete_after_analyze you may
# also want to set  write_debug_image to no. 
write_debug_image=no

# if yes, will write an image with bounding boxes
# this needs to be yes to be able to write a bounding box
# image to ZoneMinder that is visible from its console
write_image_to_zm=yes

# Adds percentage to detections
# hog/face shows 100% always
show_percent=yes

# color to be used to draw the polygons you specified
poly_color=(255,255,255)

# If yes, will import zones automatically from monitors
#import_zm_zones=no

# If yes, will match object detections only in areas
# that ZM recorded motion. Note that the ES will only know
# the initial zones motion was triggered in before an alarm 
# was raised. If ZM adds more zones later in the course of the event,
# the ES will NOT know

#only_triggered_zm_zones=no

# This section gives you an option to get brief animations 
# of the event, delivered as part of the push notification to mobile devices
# Animations are created only if an object is detected

[animation]
# Seems like GIF/MP4 animations only
# work in iOS. Too bad.

# NOTE: Animation ONLY works with ZM 1.35 master as of Mar 16, 2020
# You also require zmNinja 1.3.91 or above
# If you are not running that version, animation will not work
# Animation frames will be created, but they won't be pushed to your device

# If yes, object detection will attempt to create 
# a short GIF file around the object detection frame
# that can be sent via push notifications for instant playback
# Note this requires additional software support. Default:no
create_animation=no

# Format of animation burst
# valid options are "mp4", "gif", "mp4,gif"
# Note that gifs will be of a shorter duration
# as they take up much more disk space than mp4
# Note that if you use mp4, the thumbnail that shows 
# with push notifications may look transparent. My guess
# is this is related to how the video is being formed
# in ZM as it is a partial video when we process it

# Note that if you use mp4, you need to change the picture_url
# in zmeventnotification.ini to objdetect_mp4. When you use objdetect,
# a GIF file is checked and if not, the image is returned. MP4 is not
# returned, as they are not playable inside an HTML img tag

animation_types='gif'

# default width of animation image. Be cautious when you increase this;
# most mobile platforms give a very brief amount of time (in seconds)
# to download the image.
# Given your ZM instance will be serving the image, it will be slow anyway.
# Making the total animation size bigger resulted in the notification not 
# getting an image at all (timed out)
animation_width=640

# animation_retry_sleep refers to how long to wait before trying to grab
# frame information if it failed. animation_max_tries defines how many times it 
# will try and retrieve frames before it gives up
animation_retry_sleep=15
animation_max_tries=3

## Monitor specific settings
#
# - Format:  [monitor-<mid>]
#
# Parameters:
# polygon areas where object detection will be done.
# You can name them anything except the keywords defined in the optional
# params below. You can put as many polygons as you want per [monitor-<mid>]
# (see examples).
#
# detect_pattern: overrides the detection patterns used for this monitor.
#
# Examples:

[monitor-8]
# my driveway
match_past_detections=no
wait=5
detect_pattern=(person|car|motorbike|bus|truck|boat)

#alpr_pattern=^(.*x11)
#delete_after_analyze=no
#detect_pattern=.*
#import_zm_zones=yes
my_driveway_perimeter=306,356 1003,341 1074,683 154,715
# use license plate recognition for my driveway
# see alpr section later for more data needed
resize=no
models=yolo,alpr
# tiny switches to tiny yolo weights, instead of full Yolo. Much faster, but less accurate
#yolo_type=tiny

[monitor-10]
# my front lawn
# here we want anything except potted plant
# exclusion in regular expressions is not
# as straightforward as you may think, so 
# follow this pattern
# detect_pattern = ^(?!object1|object2|objectN)
# the characters in front implement what is 
# called a negative look ahead

detect_pattern=^(?!potted plant|pottedplant|bench|broccoli)
#detect_pattern=.*

# local model overrides global
models=yolo

# setting import_zm_zones to yes will import ZM defined zones
#import_zm_zones=yes

[monitor-5]
# my basement
detect_pattern=(person)
#detect_pattern=.*
#poly_color=(255,0,0)
#detect_pattern=^(?!chair|bed)
param=219,304 1113,278 1066,863 177,852
models=yolo,face

[monitor-6]
# deck
detect_pattern=^(?!chair|table|bench|bird|bicycle|frisbee)
#detect_pattern=^(?!chair|table|bench|bird)
models=yolo
#yolo_type=tiny
boundary=100,100 2988,10 2988,2220 10,2220

[monitor-2]
#doorbell
detect_pattern=(person)
#detect_pattern=.*
# try face, if it works, don't do yolo
detection_mode=first
models=face,yolo
frame_id=bestmatch
# try diff. sizes. In my case, 600 was enough
#resize=600
# My doorbell camera needs more accurate face detection
# cnn did a much better job than HOG, but it's _much_ slower
face_model=cnn
face_train_model=cnn
face_recog_dist_threshold=0.6
match_past_detections=no

#if you hard code a frame, you need to make sure it is created
#before we access it. wait (sec) helps
#frame_id=32
#wait=3

#[monitor-4]
# detect_pattern=(cat|dog)
# kitchen_door=313,221 392,210 418,592 367,659

# No 'detect_pattern', global value would be used.
# [monitor-7]
# entrance_door=313,221 392,210 418,592 367,659

# Machine learning options that are not specific to a model
[ml]

# Starting with version 4.2 of OpenCV, the DNN models support CUDA
# If you have compiled OpenCV 4.2 with CUDA support correctly
# set this to yes. Note that if you have just installed a package,
# chances are it is not properly set up with CUDA. It is much better
# to compile OpenCV from source (and uninstall any opencv packages you
# installed via pip or apt-get)
# Read https://www.pyimagesearch.com/2020/02/03/how-to-use-opencvs-dnn-module-with-nvidia-gpus-cuda-and-cudnn/ on how to do it right.
# Pay special attention to putting in the right CUDA_ARCH_BIN value that
# matches your GPU or you'll face "invalid device errors in make_policy"
# while trying to actually run it (compile will work fine)

#use_opencv_dnn_cuda=yes

# You can now run the machine learning code on a different server
# This frees up your ZM server for other things
# To do this, you need to set up https://github.com/pliablepixels/mlapi
# on your desired server and configure it with a user. See its instructions
# once set up, you can choose to do object/face recognition via that 
# external server

# URL that will be used
#ml_gateway=http://192.168.1.21:5000/api/v1

# If you enable ml_gateway, and it is down
# you can set ml_fallback_local to yes
# if you want to instantiate local object detection
# on gateway failure. Default is no
#ml_fallback_local=yes

# API/password for remote gateway
ml_user=!ML_USER
ml_password=!ML_PASSWORD

# config files for yolo
[yolo]
yolo_type=full
#yolo_type=tiny

#yolo_min_confidence=0.5
yolo_min_confidence=0.3

# For Yolo full
config={{base_data_path}}/models/yolov3/yolov3.cfg
weights={{base_data_path}}/models/yolov3/yolov3.weights
labels={{base_data_path}}/models/yolov3/yolov3_classes.txt

# FOR CSPN. Note that model name is yolo
#config={{base_data_path}}/models/cspn/csresnext50-panet-spp-original-optimal.cfg
#weights={{base_data_path}}/models/cspn/csresnext50-panet-spp-original-optimal_final.weights
#labels={{base_data_path}}/models/cspn/coco.names

# For tiny Yolo
tiny_config={{base_data_path}}/models/tinyyolo/yolov3-tiny.cfg
tiny_weights={{base_data_path}}/models/tinyyolo/yolov3-tiny.weights
tiny_labels={{base_data_path}}/models/tinyyolo/yolov3-tiny.txt

# config params for HOG
[hog]
stride=(4,4)
padding=(8,8)
scale=1.05
mean_shift=-1

[face]
# this directory will be where you store known images on a per-directory basis
known_images_path={{base_data_path}}/known_faces

# if yes, then unknown faces will be stored and you can analyze them later
# and move to known_faces and retrain
save_unknown_faces=yes

# How many pixels to extend beyond the face for a better perspective
save_unknown_faces_leeway_pixels=50

# this directory is where zm_detect will store faces it could not identify
# (if save_unknown_faces is yes). You can then inspect this folder later, 
# and copy unknown faces to the right places in known_faces and retrain
unknown_images_path={{base_data_path}}/unknown_faces

# read https://github.com/ageitgey/face_recognition/wiki/Face-Recognition-Accuracy-Problems
# read https://github.com/ageitgey/face_recognition#automatically-find-all-the-faces-in-an-image
# and play around

# quick overview: 
# num_jitters is how many times to distort images 
# upsample_times is how many times to upsample input images (for small faces, for example)
# model can be hog or cnn. cnn may be more accurate, but I haven't found it to be.

face_num_jitters=1
face_model=hog
face_upsample_times=1

# This is the maximum distance of the face under test to the closest matched
# face cluster. The larger this distance, the larger the chances of misclassification.
#
face_recog_dist_threshold=0.6
# When we are first training the face recognition model with known faces,
# by default we use hog because we assume you will supply well lit, front facing faces
# However, if you are planning to train with profile photos or hard to see faces, you
# may want to change this to cnn. Note that this increases training time, but training only
# happens once, unless you retrain again by removing the training model
face_train_model=hog
#if a face doesn't match known names, we will detect it as 'unknown face'
# you can change that to something that suits your personality better ;-)
#unknown_face_name=invader

[alpr]

# keep this as yes; 'no' mode is not supported today
alpr_use_after_detection_only=yes

# plate_recognizer, open_alpr, open_alpr_cmdline
alpr_service=plate_recognizer

# Many of the ALPR providers offer both a cloud version
# and local SDK version. Sometimes local SDK format differs from
# the cloud instance. Set this to local or cloud. Default cloud
alpr_api_type=cloud

# If you want to host a local SDK https://app.platerecognizer.com/sdk/
#alpr_url=https://localhost:8080/alpr
# Plate Recognizer: replace with your API key
alpr_key=!PLATEREC_ALPR_KEY
# if yes, then it will log usage statistics of the ALPR service
platerec_stats=no
# If you want to specify regions. See http://docs.platerecognizer.com/#regions-supported
#platerec_regions=['us','cn','kr']
# minimal confidence for actually detecting a plate
platerec_min_dscore=0.1
# minimal confidence for the translated text
platerec_min_score=0.2

# ----| If you are using openALPR web service |-----
#alpr_service=open_alpr
#alpr_key=!OPENALPR_ALPR_KEY

# For an explanation of params, see http://doc.openalpr.com/api/?api=cloudapi
#openalpr_recognize_vehicle=1
#openalpr_country=us
#openalpr_state=ca
# openalpr returns percents, but we convert to between 0 and 1
#openalpr_min_confidence=0.3

# ----| If you are using openALPR command line |-----

# Before you do any of this, make sure you have openALPR
# compiled and working properly as per http://doc.openalpr.com/compiling.html
# the alpr binary needs to be operational and capable of detecting plates

# Note this is not really very accurate unless you 
# have a camera with a good, direct view of the plates
# the cloud based API service is far more accurate

#openalpr_cmdline_binary=alpr

# Do an alpr -help to see options, plug them in here
# like say '-j -p ca -c US' etc.
# keep the -j because it's JSON

# Note that alpr_pattern is honored
# For the rest, just stuff them in the cmd line options

#openalpr_cmdline_params=-j
#openalpr_cmdline_min_confidence=0.3
```

I'll disable basic auth now too.
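
A side note on the exclusion patterns used for monitors 10 and 6 above: `^(?!...)` is a negative look-ahead, matching any label that does not begin with one of the listed names. A quick sanity check (illustrative only):

```python
import re

# The [monitor-10] exclusion pattern: matches any label EXCEPT the listed ones.
pattern = re.compile(r'^(?!potted plant|pottedplant|bench|broccoli)')

for label in ('person', 'car', 'potted plant', 'bench'):
    print(label, '->', bool(pattern.match(label)))
# person -> True, car -> True, potted plant -> False, bench -> False
```
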
preeny95 commented 4 years ago

```
Jun 30 13:34:58 84c042b6ed49 /zm_detect.py[9655]: INF [zmesdetect_m1] [---------| hook version: 5.15.5, ES version: 5.15-Docker , OpenCV version: 4.2.0|------------]
Jun 30 13:34:59 84c042b6ed49 /zm_detect.py[9655]: ERR [zmesdetect_m1] [Error parsing config:/etc/zm/objectconfig.ini]
Jun 30 13:34:59 84c042b6ed49 /zm_detect.py[9655]: ERR [zmesdetect_m1] [Error was:Expecting value: line 1 column 1 (char 0)]
```

That's the error with the config set to disable basic auth:

```ini
# If you need basic auth to access ZM 
#basic_user=!ZM_USER
#basic_password=!ZM_PASSWORD
```

pliablepixels commented 4 years ago

So you get this error irrespective of whether basic auth is enabled?

preeny95 commented 4 years ago

Yep, the same error occurs.

pliablepixels commented 4 years ago

Can you manually copy over the master version of utils.py so we get more logging, and then post debug logs? In my case, it is in /usr/local/lib/python3.6/dist-packages/zmes_hook_helpers/
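
If it helps, a one-off fetch along these lines would do that copy; note the raw URL path is an assumption based on the repo layout, and the dist-packages path varies by system:

```python
# Hypothetical one-off fetch of the master utils.py. The raw URL path is an
# assumption based on the repo layout; DST varies by Python version/distro.
import urllib.request

SRC = ("https://raw.githubusercontent.com/pliablepixels/"
       "zmeventnotification/master/hook/zmes_hook_helpers/utils.py")
DST = "/usr/local/lib/python3.6/dist-packages/zmes_hook_helpers/utils.py"

urllib.request.urlretrieve(SRC, DST)
```
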

preeny95 commented 4 years ago

```
Jun 30 13:43:23 84c042b6ed49 /zm_detect.py[10067]: INF [zmesdetect_m1] [Importing local classes for Yolo/Face]
Jun 30 13:43:24 84c042b6ed49 /zm_detect.py[10067]: ERR [zmesdetect_m1] [Invalid model all]
```

It looked to be an issue with my config setting for models. I changed that to yolo and that's working now.

However, I'm now getting an error with ZM not finding objdetect.jpg. A new error, at least!

```
Jun 30 13:47:25 84c042b6ed49 web_php[9359]: FAT [File /var/cache/zoneminder/events/1/2020-06-30/4671/objdetect.jpg does not exist. Please make sure store_frame_in_zm is enabled in the object detection config]
```
pliablepixels commented 4 years ago

I get the feeling dlandon's 15.5 package doesn't include some more recent changes. Let me publish a new release in a bit.

preeny95 commented 4 years ago

I've just restarted the container and it's all working perfectly now. I had to copy the changes from utils.py again as well though, so it does look like something's missing.

stale[bot] commented 4 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.