blakeblackshear / frigate

NVR with realtime local object detection for IP cameras
https://frigate.video
MIT License

[Config Support]: Tuning for v13? #9560

Closed - dopeytree closed this issue 9 months ago

dopeytree commented 9 months ago

Describe the problem you are having

Please could I get some tips on how to tune for v13?

I'm noticing I get about 10 events for the same time frame.

The modelling seems different compared to v12, i.e. birds & people are detected where I wouldn't expect them.

Is there a way to exclude objects from a global list? Or are per-camera settings meant to be a complete override of the global settings? For example, it would be cool to be able to exclude bird on cameras that are inside, etc.

Screenshot 2024-02-01 at 10 35 16

Version

0.13.1-34FB1C2

Frigate config file

mqtt:
  enabled: true
  host: 192.168.22.2
  port: 1883
  # Optional: topic prefix (default: shown below)
  # WARNING: must be unique if you are running multiple instances
  topic_prefix: frigate

ffmpeg:
  # Optional: global ffmpeg args (default: shown below)
  global_args: -hide_banner -loglevel warning -threads 2
  # Optional: global hwaccel args (default: shown below)
  # NOTE: See hardware acceleration docs for your specific device
  hwaccel_args: preset-vaapi
  # Optional: global input args (default: shown below)
  input_args: preset-rtsp-generic
  # Optional: global output args
  output_args:
    # Optional: output args for detect streams (default: shown below)
    detect: -threads 2 -f rawvideo -pix_fmt yuv420p
    # Optional: output args for record streams (default: shown below)
    record: preset-record-generic-audio-aac
    # Optional: output args for rtmp streams (default: shown below)
    #rtmp: preset-rtmp-generic
  # Optional: Time in seconds to wait before ffmpeg retries connecting to the camera. (default: shown below)
  # If set too low, frigate will retry a connection to the camera's stream too frequently, using up the limited streams some cameras can allow at once
  # If set too high, then if a ffmpeg crash or camera stream timeout occurs, you could potentially lose up to a maximum of retry_interval second(s) of footage
  # NOTE: this can be a useful setting for Wireless / Battery cameras to reduce how much footage is potentially lost during a connection timeout.
  retry_interval: 10

detectors:
  coral:
    type: edgetpu
    device: usb  

snapshots:
  enabled: true
  bounding_box: false
  quality: 100

# Optional: Audio Events Configuration
# NOTE: Can be overridden at the camera level
audio:
  # Optional: Enable audio events (default: shown below)
  enabled: true
  # Optional: Configure the amount of seconds without detected audio to end the event (default: shown below)
  max_not_heard: 30
  # Optional: Configure the min rms volume required to run audio detection (default: shown below)
  # As a rule of thumb:
  #  - 200 - high sensitivity
  #  - 500 - medium sensitivity
  #  - 1000 - low sensitivity
  min_volume: 200
  # Optional: Types of audio to listen for (default: shown below)
  listen:
    - fire_alarm
    - scream
    - yell
  # Optional: Filters to configure detection.
  #filters:
    # Label that matches label in listen config.
  #  speech:
      # Minimum score that triggers an audio event (default: shown below)
  #    threshold: 0.8  

# Optional: Object configuration
# NOTE: Can be overridden at the camera level
objects:
  # Optional: list of objects to track from labelmap.txt (default: shown below)
  track:
    - person
    - cat
    - bird
  # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
  # Checks based on the bottom center of the bounding box of the object.
  # NOTE: This mask is COMBINED with the object type specific mask below
  #mask: 0,0,1000,0,1000,200,0,200
  # Optional: filters to reduce false positives for specific object types
  #filters:
  #  person:
      # Optional: minimum width*height of the bounding box for the detected object (default: 0)
  #    min_area: 5000
      # Optional: maximum width*height of the bounding box for the detected object (default: 24000000)
  #    max_area: 100000
      # Optional: minimum width/height of the bounding box for the detected object (default: 0)
  #    min_ratio: 0.5
      # Optional: maximum width/height of the bounding box for the detected object (default: 24000000)
  #    max_ratio: 2.0
      # Optional: minimum score for the object to initiate tracking (default: shown below)
  #    min_score: 0.5
      # Optional: minimum decimal percentage for tracked object's computed score to be considered a true positive (default: shown below)
  #    threshold: 0.7
      # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object
      #mask: 0,0,1000,0,1000,200,0,200      

# Optional: Record configuration
# NOTE: Can be overridden at the camera level
record:
  # Optional: Enable recording (default: shown below)
  # WARNING: If recording is disabled in the config, turning it on via
  #          the UI or MQTT later will have no effect.
  enabled: true
  # Optional: Number of minutes to wait between cleanup runs (default: shown below)
  # This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
  expire_interval: 60
  # Optional: Sync recordings with disk on startup and once a day (default: shown below).
  sync_recordings: false
  # Optional: Retention settings for recording
  retain:
    # Optional: Number of days to retain recordings regardless of events (default: shown below)
    # NOTE: This should be set to 0 and retention should be defined in events section below
    #       if you only want to retain recordings of events.
    days: 7
    # Optional: Mode for retention. Available options are: all, motion, and active_objects
    #   all - save all recording segments regardless of activity
    #   motion - save all recordings segments with any detected motion
    #   active_objects - save all recording segments with active/moving objects
    # NOTE: this mode only applies when the days setting above is greater than 0
    mode: all
  # Optional: Recording Export Settings
  export:
    # Optional: Timelapse Output Args (default: shown below).
    # NOTE: The default args are set to fit 24 hours of recording into 1 hour playback.
    # See https://stackoverflow.com/a/58268695 for more info on how these args work.
    # As an example: if you wanted to go from 24 hours to 30 minutes that would be going
    # from 86400 seconds to 1800 seconds which would be 1800 / 86400 = 0.02.
    # The -r (framerate) dictates how smooth the output video is.
    # So the args would be -vf setpts=0.02*PTS -r 30 in that case.
    timelapse_args: "-vf setpts=0.04*PTS -r 30"
  # Optional: Event recording settings
  events:
    # Optional: Number of seconds before the event to include (default: shown below)
    pre_capture: 5
    # Optional: Number of seconds after the event to include (default: shown below)
    post_capture: 5
    # Optional: Objects to save recordings for. (default: all tracked objects)
    objects:
      - person
      - cat
      - bird
    # Optional: Restrict recordings to objects that entered any of the listed zones (default: no required zones)
    required_zones: []
    # Optional: Retention settings for recordings of events
    retain:
      # Required: Default retention days (default: shown below)
      default: 28
      # Optional: Mode for retention. (default: shown below)
      #   all - save all recording segments for events regardless of activity
      #   motion - save all recordings segments for events with any detected motion
      #   active_objects - save all recording segments for event with active/moving objects
      #
      # NOTE: If the retain mode for the camera is more restrictive than the mode configured
      #       here, the segments will already be gone by the time this mode is applied.
      #       For example, if the camera retain mode is "motion", the segments without motion are
      #       never stored, so setting the mode to "all" here won't bring them back.
      mode: motion
      # Optional: Per object retention days
      objects:
        person: 14
        cat: 28
        bird: 14

# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
motion:
  # Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
  # Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
  # The value should be between 1 and 255.
  threshold: 20
  # Optional: The percentage of the image used to detect lightning or other substantial changes where motion detection
  #           needs to recalibrate. (default: shown below)
  # Increasing this value will make motion detection more likely to consider lightning or ir mode changes as valid motion.
  # Decreasing this value will make motion detection more likely to ignore large amounts of motion such as a person approaching
  # a doorbell camera.
  lightning_threshold: 0.8
  # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: shown below)
  # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will
  # make motion detection more sensitive to smaller moving objects.
  # As a rule of thumb:
  #  - 10 - high sensitivity
  #  - 30 - medium sensitivity
  #  - 50 - low sensitivity
  contour_area: 20
  # Optional: Alpha value passed to cv2.accumulateWeighted when averaging frames to determine the background (default: shown below)
  # Higher values mean the current frame impacts the average a lot, and a new object will be averaged into the background faster.
  # Low values will cause things like moving shadows to be detected as motion for longer.
  # https://www.geeksforgeeks.org/background-subtraction-in-an-image-using-concept-of-running-average/
  frame_alpha: 0.01
  # Optional: Height of the resized motion frame  (default: 100)
  # Higher values will result in more granular motion detection at the expense of higher CPU usage.
  # Lower values result in less CPU, but small changes may not register as motion.
  frame_height: 100
  # Optional: motion mask
  # NOTE: see docs for more detailed info on creating masks
  #mask: 0,900,1080,900,1080,1920,0,1920
  # Optional: improve contrast (default: shown below)
  # Enables dynamic contrast improvement. This should help improve night detections at the cost of making motion detection more sensitive
  # for daytime.
  improve_contrast: true
  # Optional: Delay when updating camera motion through MQTT from ON -> OFF (default: shown below).
  mqtt_off_delay: 30

# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
detect:
  # Optional: width of the frame for the input with the detect role (default: use native stream resolution)
  width: 640
  # Optional: height of the frame for the input with the detect role (default: use native stream resolution)
  height: 360
  # Optional: desired fps for your camera for the input with the detect role (default: shown below)
  # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
  fps: 4
  # Optional: enables detection for the camera (default: True)
  enabled: true
  # Optional: Number of consecutive detection hits required for an object to be initialized in the tracker. (default: 1/2 the frame rate)
  min_initialized: 1
  # Optional: Number of frames without a detection before Frigate considers an object to be gone. (default: 5x the frame rate)
  max_disappeared: 20
  # Optional: Configuration for stationary object tracking
  stationary:
    # Optional: Frequency for confirming stationary objects (default: same as threshold)
    # When set to 1, object detection will run to confirm the object still exists on every frame.
    # If set to 10, object detection will run to confirm the object still exists on every 10th frame.
    interval: 40
    # Optional: Number of frames without a position change for an object to be considered stationary (default: 10x the frame rate or 10s)
    threshold: 40
    # Optional: Define a maximum number of frames for tracking a stationary object (default: not set, track forever)
    # This can help with false positives for objects that should only be stationary for a limited amount of time.
    # It can also be used to disable stationary object tracking. For example, you may want to set a value for person, but leave
    # car at the default.
    # WARNING: Setting these values overrides default behavior and disables stationary object tracking.
    #          There are very few situations where you would want it disabled. It is NOT recommended to
    #          copy these values from the example config into your config unless you know they are needed.
  #  max_frames:
      # Optional: Default for all object types (default: not set, track forever)
    #  default: 3000
      # Optional: Object specific values
  #    objects:
    #    cat: 1000
  #      person: 1000
  # Optional: Milliseconds to offset detect annotations by (default: shown below).
  # There can often be latency between a recording and the detect process,
  # especially when using separate streams for detect and record.
  # Use this setting to make the timeline bounding boxes more closely align
  # with the recording. The value can be positive or negative.
  # TIP: Imagine there is an event clip with a person walking from left to right.
  #      If the event timeline bounding box is consistently to the left of the person
  #      then the value should be decreased. Similarly, if a person is walking from
  #      left to right and the bounding box is consistently ahead of the person
  #      then the value should be increased.
  # TIP: This offset is dynamic so you can change the value and it will update existing
  #      events, this makes it easy to tune.
  # WARNING: Fast moving objects will likely not have the bounding box align.
#  annotation_offset: 0

go2rtc:
  streams:
    barn_catflap HQ: ffmpeg:rtsp://MPG7QbYQ:ICRGvtBU2xo3DxMy@192.168.22.202:554/live/ch0
    road_cam HQ: ffmpeg:rtsp://admin:annke2023@10.0.0.2:554/H265/ch1/main/av_stream
    porch_cam HQ: ffmpeg:rtsp://admin:annke2023@10.0.0.3:554/H265/ch1/main/av_stream
    bird_cam HQ: ffmpeg:rtsp://admin:annke2023@10.0.0.7:554/H265/ch1/main/av_stream
    barn HQ: ffmpeg:rtsp://admin:annke2023@10.0.0.4:554/H265/ch1/main/av_stream
    # kitchen_cam HQ: ffmpeg:rtsp://admin:annke2023@10.0.0.5:554/H265/ch1/main/av_stream
    # house_cam HQ: ffmpeg:rtsp://admin:annke2023@10.0.0.6:554/H265/ch1/main/av_stream

cameras:
  bird_cam: #Cam Name: Bird Cam (kitchen cat flap)   #Brand: Annke c500   #Quality:3k    #Link:POE
    enabled: true
    ffmpeg:
      inputs:
        - path: rtsp://admin:annke2023@10.0.0.7:554/Streaming/Channels/102 # SQ Link
          input_args: preset-rtsp-generic
          roles:
            - detect
            - audio
        - path: rtsp://192.168.22.2:8554/bird_cam%20HQ?mp4  # HQ Link
          input_args: preset-rtsp-restream 
          roles:
            - record
    audio:
      enabled: True # <- enable audio events for this camera
    ui: 
      # Optional: Configuration for how camera is handled in the GUI.
      # Optional: Adjust sort order of cameras in the UI. Larger numbers come later (default: shown below)
      # By default the cameras are sorted alphabetically.
      order: 4
      # Optional: Whether or not to show the camera in the Frigate UI (default: shown below)
      dashboard: True

  barn_catflap: #Cam name: Barn Catflap     #Brand: wansview   #Quality:1080p    #Link:WIFI
    enabled: true 
    detect:
      width: 1920
      height: 1080
      fps: 5
    ffmpeg:
      inputs:
        #- path: rtsp://MPG7QbYQ:ICRGvtBU2xo3DxMy@192.168.22.202:554/live/ch1 # SD Link
        #  input_args: preset-rtsp-generic
        #  roles:
        #    - detect
        #    - audio
        - path: rtsp://192.168.22.2:8554/barn_catflap%20HQ?mp4  # HD Link
          input_args: preset-rtsp-restream 
          roles:
            - record
            #
            - detect
            - audio
    audio:
      enabled: True # <- enable audio events for this camera
    ui: 
      # Optional: Configuration for how camera is handled in the GUI.
      # Optional: Adjust sort order of cameras in the UI. Larger numbers come later (default: shown below)
      # By default the cameras are sorted alphabetically.
      order: 5
      # Optional: Whether or not to show the camera in the Frigate UI (default: shown below)
      dashboard: True

  barn: #Cam name: Barn     #Brand: Annke c500   #Quality:3k    #Link:POE
    enabled: true
    ffmpeg:
      inputs:
        - path: rtsp://admin:annke2023@10.0.0.4:554/Streaming/Channels/102 # SD Link
          input_args: preset-rtsp-generic
          roles:
            - detect
            - audio
        - path: rtsp://192.168.22.2:8554/barn%20HQ?mp4  # HD Link
          input_args: preset-rtsp-restream
          roles:
            - record
    audio:
      enabled: True # <- enable audio events for this camera
    ui: 
      # Optional: Configuration for how camera is handled in the GUI.
      # Optional: Adjust sort order of cameras in the UI. Larger numbers come later (default: shown below)
      # By default the cameras are sorted alphabetically.
      order: 3
      # Optional: Whether or not to show the camera in the Frigate UI (default: shown below)
      dashboard: True

  porch_cam: #Cam Name: Porch     #Brand: Annke c500   #Quality:3k    #Link:POE
    enabled: true
    ffmpeg:
      inputs:
        - path: rtsp://admin:annke2023@10.0.0.3:554/Streaming/Channels/102 # SQ Link
          input_args: preset-rtsp-generic
          roles:
            - detect
            - audio
        - path: rtsp://192.168.22.2:8554/porch_cam%20HQ?mp4  # HQ Link
          input_args: preset-rtsp-restream
          roles:
            - record
    audio:
      enabled: True # <- enable audio events for this camera
    ui: 
      # Optional: Configuration for how camera is handled in the GUI.
      # Optional: Adjust sort order of cameras in the UI. Larger numbers come later (default: shown below)
      # By default the cameras are sorted alphabetically.
      order: 1
      # Optional: Whether or not to show the camera in the Frigate UI (default: shown below)
      dashboard: True

  road_cam: #Cam name: Road     #Brand: Annke c500   #Quality:3k    #Link:POE
    enabled: true
    ffmpeg:
      inputs:
        - path: rtsp://admin:annke2023@10.0.0.2:554/Streaming/Channels/102 # SQ Link
          input_args: preset-rtsp-generic
          roles:
            - detect
            - audio
        - path: rtsp://192.168.22.2:8554/road_cam%20HQ?mp4  # HQ Link
          input_args: preset-rtsp-restream
          roles:
            - record
    audio:
      enabled: True # <- enable audio events for this camera
    ui: 
      # Optional: Configuration for how camera is handled in the GUI.
      # Optional: Adjust sort order of cameras in the UI. Larger numbers come later (default: shown below)
      # By default the cameras are sorted alphabetically.
      order: 2
      # Optional: Whether or not to show the camera in the Frigate UI (default: shown below)
      dashboard: True

# Optional
ui:
  # Optional: Set the default live mode for cameras in the UI (default: shown below)
  live_mode: mse
  # Optional: Set a timezone to use in the UI (default: use browser local time)
  # timezone: America/Denver
  # Optional: Use an experimental recordings / camera view UI (default: shown below)
  use_experimental: false
  # Optional: Set the time format used.
  # Options are browser, 12hour, or 24hour (default: shown below)
  time_format: browser
  # Optional: Set the date style for a specified length.
  # Options are: full, long, medium, short
  # Examples:
  #    short: 2/11/23
  #    medium: Feb 11, 2023
  #    full: Saturday, February 11, 2023
  # (default: shown below).
  date_style: short
  # Optional: Set the time style for a specified length.
  # Options are: full, long, medium, short
  # Examples:
  #    short: 8:14 PM
  #    medium: 8:15:22 PM
  #    full: 8:15:22 PM Mountain Standard Time
  # (default: shown below).
  time_style: medium
  # Optional: Ability to manually override the date / time styling to use strftime format
  # https://www.gnu.org/software/libc/manual/html_node/Formatting-Calendar-Time.html
  # possible values are shown above (default: not set)
  strftime_fmt: "%Y/%m/%d %H:%M"

# Optional: Telemetry configuration
telemetry:
  # Optional: Enabled network interfaces for bandwidth stats monitoring (default: empty list, let nethogs search all)
  network_interfaces:
    - eth
    - enp
    - eno
    - ens
    - wl
    - lo
  # Optional: Configure system stats
  stats:
    # Enable AMD GPU stats (default: shown below)
    amd_gpu_stats: false
    # Enable Intel GPU stats (default: shown below)
    intel_gpu_stats: true
    # Enable network bandwidth stats monitoring for camera ffmpeg processes, go2rtc, and object detectors. (default: shown below)
    # NOTE: The container must either be privileged or have cap_net_admin, cap_net_raw capabilities enabled.
    network_bandwidth: true
  # Optional: Enable the latest version outbound check (default: shown below)
  # NOTE: If you use the HomeAssistant integration, disabling this will prevent it from reporting new versions
  version_check: true

Relevant log output

n/a

Frigate stats

No response

Operating system

UNRAID

Install method

Docker Compose

Coral version

USB

Any other information that may be helpful

No response

dopeytree commented 9 months ago

It might be easier to see what someone else uses for their detect & motion settings.
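
For reference, a minimal sketch (hypothetical camera name and illustrative values only, not a recommendation) of overriding the global detect and motion settings for a single camera, using the same fields already present in the global config above:

cameras:
  noisy_cam:                 # hypothetical camera name
    ffmpeg:
      ...
    detect:
      fps: 5                 # illustrative value
    motion:
      threshold: 30          # higher = less sensitive to pixel changes (illustrative)
      contour_area: 30       # larger = ignore smaller areas of motion (illustrative)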

NickM-27 commented 9 months ago

The modelling seems different compared to v12, i.e. birds & people are detected where I wouldn't expect them.

The model used is exactly the same as in 0.12; there was no change.

Is there a way to exclude objects from a global list?

Just set the list of objects you want to detect on the cameras where you want to override the global list, for example:

objects:
  track:
    - person
    - bird

cameras:
  indoor_cam:
    ffmpeg:
      ...
    objects:
      track:
        - person

dopeytree commented 9 months ago

How does the global list function vs. the per-camera settings?

If I put cat in both does it do double detection?

NickM-27 commented 9 months ago

If I put cat in both does it do double detection?

No, it is just a list. If you set the list at the camera level, then it ignores what is at the global level.
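
For example, a minimal sketch (hypothetical camera names): cat appears in both lists, but the camera with its own objects list uses only that list, so nothing is detected twice.

objects:
  track:
    - person
    - cat

cameras:
  outdoor_cam:          # hypothetical camera name
    ffmpeg:
      ...
    objects:
      track:
        - cat           # this list replaces the global list for outdoor_cam
  indoor_cam:           # hypothetical camera name; no objects override,
    ffmpeg:             # so the global list (person, cat) applies
      ...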

dopeytree commented 9 months ago

OK, ace. Thanks @NickM-27

NickM-27 commented 9 months ago

Feel free to create a new issue if something else comes up.