blakeblackshear / frigate

NVR with realtime local object detection for IP cameras
https://frigate.video
MIT License

[Support]: 11th camera stops showing live view or detections #4756

Closed protocol6v closed 1 year ago

protocol6v commented 1 year ago

Describe the problem you are having

Added an eleventh camera, and the preview box, live view, and birdseye view all are blank/black. It does not create new detections/events, but does still perform full recordings.

Version

0.11.1-2eada21

Frigate config file

#ui:
#  use_experimental: true
mqtt:
  # Required: host name
  host: 10.0.220.20
  # Optional: port (default: shown below)
  port: 1883
  # Optional: topic prefix (default: shown below)
  # NOTE: must be unique if you are running multiple instances
  topic_prefix: frigate
  # Optional: client id (default: shown below)
  # NOTE: must be unique if you are running multiple instances
  client_id: frigate
  # Optional: user
  user: frigate
  # Optional: password
  # NOTE: MQTT password can be specified with an environment variable that must begin with 'FRIGATE_'.
  #       e.g. password: '{FRIGATE_MQTT_PASSWORD}'
  password: frigatepassword
  # Optional: tls_ca_certs for enabling TLS using self-signed certs (default: None)
  #tls_ca_certs: /path/to/ca.crt
  # Optional: tls_client_cert and tls_client key in order to use self-signed client
  # certificates (default: None)
  # NOTE: certificate must not be password-protected
  #       do not set user and password when using a client certificate
  #tls_client_cert: /path/to/client.crt
  #tls_client_key: /path/to/client.key
  # Optional: tls_insecure (true/false) for enabling TLS verification of
  # the server hostname in the server certificate (default: None)
  #tls_insecure: false
  # Optional: interval in seconds for publishing stats (default: shown below)
  stats_interval: 60

# Optional: Detectors configuration. Defaults to a single CPU detector
detectors:
  # Required: name of the detector
  coral:
    # Required: type of the detector
    # Valid values are 'edgetpu' (requires device property below) and 'cpu'.
    type: edgetpu
    # Optional: device name as defined here: https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api
    device: usb
    # Optional: num_threads value passed to the tflite.Interpreter (default: shown below)
    # This value is only used for CPU types
    num_threads: 4

# Optional: Database configuration
#database:
  # The path to store the SQLite DB (default: shown below)
  #path: /media/frigate/frigate.db

# Optional: model modifications
#model:
  # Optional: path to the model (default: automatic based on detector)
  #path: /edgetpu_model.tflite
  # Optional: path to the labelmap (default: shown below)
  #labelmap_path: /labelmap.txt
  # Required: Object detection model input width (default: shown below)
  #width: 320
  # Required: Object detection model input height (default: shown below)
  #height: 320
  # Optional: Label name modifications. These are merged into the standard labelmap.
  #labelmap:
  #  2: vehicle

# Optional: logger verbosity settings
logger:
  # Optional: Default log verbosity (default: shown below)
  default: info
  # Optional: Component specific logger overrides
#  logs:
#    frigate.event: debug

# Optional: set environment variables
#environment_vars:
 # EXAMPLE_VAR: value

# Optional: birdseye configuration
birdseye:
  # Optional: Enable birdseye view (default: shown below)
  enabled: True
  # Optional: Width of the output resolution (default: shown below)
  width: 1920
  # Optional: Height of the output resolution (default: shown below)
  height: 1080
  # Optional: Encoding quality of the mpeg1 feed (default: shown below)
  # 1 is the highest quality, and 31 is the lowest. Lower quality feeds utilize less CPU resources.
  quality: 5
  # Optional: Mode of the view. Available options are: objects, motion, and continuous
  #   objects - cameras are included if they have had a tracked object within the last 30 seconds
  #   motion - cameras are included if motion was detected in the last 30 seconds
  #   continuous - all cameras are included always
  mode: continuous

# Optional: ffmpeg configuration
ffmpeg:
  # Optional: global ffmpeg args (default: shown below)
  global_args: -hide_banner -loglevel warning
  # Optional: global hwaccel args (default: shown below)
  # NOTE: See hardware acceleration docs for your specific device
  hwaccel_args: -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format yuv420p
  # Optional: global input args (default: shown below)
  input_args: -avoid_negative_ts make_zero -fflags +genpts+discardcorrupt -rtsp_transport udp -use_wallclock_as_timestamps 1
  # Optional: global output args
  output_args:
    # Optional: output args for detect streams (default: shown below)
    detect: -f rawvideo -pix_fmt yuv420p
    # Optional: output args for record streams (default: shown below)
    record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -c:a aac
    # Optional: output args for rtmp streams (default: shown below)
    rtmp: -c copy -f flv

# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
detect:
  # Optional: width of the frame for the input with the detect role (default: shown below)
  #width: 1280
  # Optional: height of the frame for the input with the detect role (default: shown below)
  #height: 720
  # Optional: desired fps for your camera for the input with the detect role (default: shown below)
  # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
  #fps: 5
  # Optional: enables detection for the camera (default: True)
  # This value can be set via MQTT and will be updated in startup based on retained value
  enabled: True
  # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
  max_disappeared: 50
  # Optional: Configuration for stationary object tracking
  stationary:
    # Optional: Frequency for running detection on stationary objects (default: shown below)
    # When set to 0, object detection will never be run on stationary objects. If set to 10, it will be run on every 10th frame.
    interval: 0
    # Optional: Number of frames without a position change for an object to be considered stationary (default: 10x the frame rate or 10s)
    threshold: 50
    # Optional: Define a maximum number of frames for tracking a stationary object (default: not set, track forever)
    # This can help with false positives for objects that should only be stationary for a limited amount of time.
    # It can also be used to disable stationary object tracking. For example, you may want to set a value for person, but leave
    # car at the default.
    # WARNING: Setting these values overrides default behavior and disables stationary object tracking.
    #          There are very few situations where you would want it disabled. It is NOT recommended to
    #          copy these values from the example config into your config unless you know they are needed.
    max_frames:
      # Optional: Default for all object types (default: not set, track forever)
      default: 300
      # Optional: Object specific values
      objects:
        person: 1000

# Optional: Object configuration
# NOTE: Can be overridden at the camera level
objects:
  # Optional: list of objects to track from labelmap.txt (default: shown below)
  track:
    - person
    - cat
    - dog
    - car
  # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
  # Checks based on the bottom center of the bounding box of the object.
  # NOTE: This mask is COMBINED with the object type specific mask below
  #mask: 0,0,1000,0,1000,200,0,200
  # Optional: filters to reduce false positives for specific object types
#  filters:
#    person:
      # Optional: minimum width*height of the bounding box for the detected object (default: 0)
   #   min_area: 5000
      # Optional: maximum width*height of the bounding box for the detected object (default: 24000000)
   #   max_area: 100000
      # Optional: minimum score for the object to initiate tracking (default: shown below)
   #   min_score: 0.5
      # Optional: minimum decimal percentage for tracked object's computed score to be considered a true positive (default: shown below)
   #   threshold: 0.7
      # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object
   #   mask: 0,0,1000,0,1000,200,0,200

# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
motion:
  # Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
  # Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
  # The value should be between 1 and 255.
  threshold: 50
  # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: 30)
  # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will
  # make motion detection more sensitive to smaller moving objects.
  # As a rule of thumb:
  #  - 15 - high sensitivity
  #  - 30 - medium sensitivity
  #  - 50 - low sensitivity
  contour_area: 20
  # Optional: Alpha value passed to cv2.accumulateWeighted when averaging the motion delta across multiple frames (default: shown below)
  # Higher values mean the current frame impacts the delta a lot, and a single raindrop may register as motion.
  # Too low and a fast moving person wont be detected as motion.
  delta_alpha: 0.2
  # Optional: Alpha value passed to cv2.accumulateWeighted when averaging frames to determine the background (default: shown below)
  # Higher values mean the current frame impacts the average a lot, and a new object will be averaged into the background faster.
  # Low values will cause things like moving shadows to be detected as motion for longer.
  # https://www.geeksforgeeks.org/background-subtraction-in-an-image-using-concept-of-running-average/
  frame_alpha: 0.2
  # Optional: Height of the resized motion frame  (default: 50)
  # This operates as an efficient blur alternative. Higher values will result in more granular motion detection at the expense
  # of higher CPU usage. Lower values result in less CPU, but small changes may not register as motion.
  frame_height: 50
  # Optional: motion mask
  # NOTE: see docs for more detailed info on creating masks
 # mask: 0,900,1080,900,1080,1920,0,1920
  # Optional: improve contrast (default: shown below)
  # Enables dynamic contrast improvement. This should help improve night detections at the cost of making motion detection more sensitive
  # for daytime.
  improve_contrast: False

# Optional: Record configuration
# NOTE: Can be overridden at the camera level
record:
  # Optional: Enable recording (default: shown below)
  # WARNING: Frigate does not currently support limiting recordings based
  #          on available disk space automatically. If using recordings,
  #          you must specify retention settings for a number of days that
  #          will fit within the available disk space of your drive or Frigate
  #          will crash.
  enabled: True
  # Optional: Number of minutes to wait between cleanup runs (default: shown below)
  # This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
  expire_interval: 300
  # Optional: Retention settings for recording
  retain:
    # Optional: Number of days to retain recordings regardless of events (default: shown below)
    # NOTE: This should be set to 0 and retention should be defined in events section below
    #       if you only want to retain recordings of events.
    days: 14
    # Optional: Mode for retention. Available options are: all, motion, and active_objects
    #   all - save all recording segments regardless of activity
    #   motion - save all recordings segments with any detected motion
    #   active_objects - save all recording segments with active/moving objects
    # NOTE: this mode only applies when the days setting above is greater than 0
    mode: all
  # Optional: Event recording settings
  events:
    # Optional: Maximum length of time to retain video during long events. (default: shown below)
    # NOTE: If an object is being tracked for longer than this amount of time, the retained recordings
    #       will be the last x seconds of the event unless retain->days under record is > 0.
    #max_seconds: 300
    # Optional: Number of seconds before the event to include (default: shown below)
    pre_capture: 10
    # Optional: Number of seconds after the event to include (default: shown below)
    post_capture: 10
    # Optional: Objects to save recordings for. (default: all tracked objects)
    objects:
      - person
      - cat
      - dog
      - car
    # Optional: Restrict recordings to objects that entered any of the listed zones (default: no required zones)
    required_zones: []
    # Optional: Retention settings for recordings of events
    retain:
      # Required: Default retention days (default: shown below)
      default: 30
      # Optional: Mode for retention. (default: shown below)
      #   all - save all recording segments for events regardless of activity
      #   motion - save all recordings segments for events with any detected motion
      #   active_objects - save all recording segments for event with active/moving objects
      #
      # NOTE: If the retain mode for the camera is more restrictive than the mode configured
      #       here, the segments will already be gone by the time this mode is applied.
      #       For example, if the camera retain mode is "motion", the segments without motion are
      #       never stored, so setting the mode to "all" here won't bring them back.
      mode: motion
      # Optional: Per object retention days
#      objects:
#        person: 15

# Optional: Configuration for the jpg snapshots written to the clips directory for each event
# NOTE: Can be overridden at the camera level
snapshots:
  # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
  # This value can be set via MQTT and will be updated in startup based on retained value
  enabled: True
  # Optional: print a timestamp on the snapshots (default: shown below)
  timestamp: True
  # Optional: draw bounding box on the snapshots (default: shown below)
  bounding_box: True
  # Optional: crop the snapshot (default: shown below)
  crop: False
  # Optional: height to resize the snapshot to (default: original size)
#  height: 175
  # Optional: Restrict snapshots to objects that entered any of the listed zones (default: no required zones)
  required_zones: []
  # Optional: Camera override for retention settings (default: global values)
  retain:
    # Required: Default retention days (default: shown below)
    default: 10
    # Optional: Per object retention days
    objects:
      person: 15

# Optional: RTMP configuration
# NOTE: Can be overridden at the camera level
rtmp:
  # Optional: Enable the RTMP stream (default: True)
  enabled: True

# Optional: Live stream configuration for WebUI
# NOTE: Can be overridden at the camera level
live:
  # Optional: Set the height of the live stream. (default: 720)
  # This must be less than or equal to the height of the detect stream. Lower resolutions
  # reduce bandwidth required for viewing the live stream. Width is computed to match known aspect ratio.
  height: 704
  # Optional: Set the encode quality of the live stream (default: shown below)
  # 1 is the highest quality, and 31 is the lowest. Lower quality feeds utilize less CPU resources.
  quality: 5

# Optional: in-feed timestamp style configuration
# NOTE: Can be overridden at the camera level
timestamp_style:
  # Optional: Position of the timestamp (default: shown below)
  #           "tl" (top left), "tr" (top right), "bl" (bottom left), "br" (bottom right)
  position: "tl"
  # Optional: Format specifier conform to the Python package "datetime" (default: shown below)
  #           Additional Examples:
  #             german: "%d.%m.%Y %H:%M:%S"
  format: "%m/%d/%Y %H:%M:%S"
  # Optional: Color of font
  color:
    # All Required when color is specified (default: shown below)
    red: 255
    green: 255
    blue: 255
  # Optional: Line thickness of font (default: shown below)
  thickness: 2
  # Optional: Effect of lettering (default: shown below)
  #           None (No effect),
  #           "solid" (solid background in inverse color of font)
  #           "shadow" (shadow for font)
#  effect: None

# Required
cameras:
  # Required: name of the camera
  01_HouseFront:
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: A list of input streams for the camera. See documentation for more information.
      inputs:
       # Required: the path to the stream
       # NOTE: path may include environment variables, which must begin with 'FRIGATE_' and be referenced in {}
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.45:554/cam/realmonitor?channel=1&subtype=1
          # Required: list of roles for this stream. valid values are: detect,record,rtmp
          # NOTICE: In addition to assigning the record, and rtmp roles,
          # they must also be enabled in the camera config.
          roles:
            - detect
            - rtmp
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.45:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
          # Optional: stream specific global args (default: inherit)
          # global_args:
          # Optional: stream specific hwaccel args (default: inherit)
          # hwaccel_args:
          # Optional: stream specific input args (default: inherit)
          # input_args:
      # Optional: camera specific global args (default: inherit)
      # global_args:
      # Optional: camera specific hwaccel args (default: inherit)
      # hwaccel_args:
      # Optional: camera specific input args (default: inherit)
      # input_args:
      # Optional: camera specific output args (default: inherit)
      # output_args:

    # Optional: timeout for highest scoring image before allowing it
    # to be replaced by a newer image. (default: shown below)
 #   best_image_timeout: 60

    # Optional: zones for this camera
 #   zones:
      # Required: name of the zone
      # NOTE: This must be different than any camera names, but can match with another zone on another
      #       camera.
 #     front_steps:
        # Required: List of x,y coordinates to define the polygon of the zone.
        # NOTE: Presence in a zone is evaluated only based on the bottom center of the objects bounding box.
 #       coordinates: 545,1077,747,939,788,805
        # Optional: List of objects that can trigger this zone (default: all tracked objects)
 #       objects:
 #         - person
        # Optional: Zone level object filters.
        # NOTE: The global and camera filters are applied upstream.
 #       filters:
 #         person:
 #           min_area: 5000
 #           max_area: 100000
 #           threshold: 0.7
  02_GarageFront:
    ffmpeg:
      inputs:
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.44:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.44:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
            - rtmp
  03_HouseLeft:
    ffmpeg:
      inputs:
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.49:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.49:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
            - rtmp
  04_HouseRear:
    ffmpeg:
      inputs:
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.43:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
            - rtmp
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.43:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
  05_GarageRear:
    ffmpeg:
      inputs:
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.42:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
            - rtmp
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.42:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
  06_YardNorthWest:
    ffmpeg:
      inputs:
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.50:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
            - rtmp
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.50:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
  07_YardSouthEast:
    ffmpeg:
      inputs:
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.41:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
            - rtmp
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.41:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
  08_MudRoom:
    ffmpeg:
      inputs:
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.48:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
            - rtmp
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.48:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
  09_MaxRoom:
    ffmpeg:
      inputs:
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.51:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
            - rtmp
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.51:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
  10_Basement:
    ffmpeg:
      inputs:
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.46:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
            - rtmp
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.46:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
  11_Office:
    ffmpeg:
      inputs:
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.47:554/cam/realmonitor?channel=1&subtype=1
          roles:
            - detect
            - rtmp
        - path: rtsp://frigate:camerafrigatepassword@10.0.230.47:554/cam/realmonitor?channel=1&subtype=0
          roles:
            - record
    # Optional: Configuration for the jpg snapshots published via MQTT
    mqtt:
      # Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
      # NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
      # All other messages will still be published.
      enabled: True
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: True
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: True
      # Optional: crop the snapshot (default: shown below)
      crop: True
      # Optional: height to resize the snapshot to (default: shown below)
      height: 270
      # Optional: jpeg encode quality (default: shown below)
      quality: 70
      # Optional: Restrict mqtt messages to objects that entered any of the listed zones (default: no required zones)
      required_zones: []

Relevant log output

[2022-12-20 09:56:50] frigate.app                    INFO    : Starting Frigate (0.11.1-2eada21)
Starting migrations
[2022-12-20 09:56:50] peewee_migrate                 INFO    : Starting migrations
There is nothing to migrate
[2022-12-20 09:56:50] peewee_migrate                 INFO    : There is nothing to migrate
[2022-12-20 09:56:51] detector.coral                 INFO    : Starting detection process: 217
[2022-12-20 09:56:51] frigate.app                    INFO    : Output process started: 219
[2022-12-20 09:56:51] ws4py                          INFO    : Using epoll
[2022-12-20 09:56:51] frigate.edgetpu                INFO    : Attempting to load TPU as usb
[2022-12-20 09:56:51] frigate.app                    INFO    : Camera processor started for 01_HouseFront: 223
[2022-12-20 09:56:54] frigate.edgetpu                INFO    : TPU found
[2022-12-20 09:56:51] frigate.app                    INFO    : Camera processor started for 02_GarageFront: 226
[2022-12-20 09:56:51] frigate.app                    INFO    : Camera processor started for 03_HouseLeft: 228
[2022-12-20 09:56:51] frigate.app                    INFO    : Camera processor started for 04_HouseRear: 231
[2022-12-20 09:56:51] frigate.app                    INFO    : Camera processor started for 05_GarageRear: 232
[2022-12-20 09:56:51] frigate.app                    INFO    : Camera processor started for 06_YardNorthWest: 233
[2022-12-20 09:56:51] frigate.app                    INFO    : Camera processor started for 07_YardSouthEast: 235
[2022-12-20 09:56:51] frigate.app                    INFO    : Camera processor started for 08_MudRoom: 236
[2022-12-20 09:56:51] frigate.app                    INFO    : Camera processor started for 09_MaxRoom: 237
[2022-12-20 09:56:51] frigate.app                    INFO    : Camera processor started for 10_Basement: 239
[2022-12-20 09:56:51] frigate.app                    INFO    : Camera processor started for 11_Office: 240
[2022-12-20 09:56:51] frigate.app                    INFO    : Capture process started for 01_HouseFront: 242
[2022-12-20 09:56:51] frigate.app                    INFO    : Capture process started for 02_GarageFront: 245
[2022-12-20 09:56:51] frigate.app                    INFO    : Capture process started for 03_HouseLeft: 248
[2022-12-20 09:56:51] frigate.app                    INFO    : Capture process started for 04_HouseRear: 261
[2022-12-20 09:56:51] frigate.app                    INFO    : Capture process started for 05_GarageRear: 264
[2022-12-20 09:56:51] frigate.mqtt                   INFO    : Turning off recordings for 09_MaxRoom via mqtt
[2022-12-20 09:56:51] frigate.mqtt                   INFO    : Turning off snapshots for 09_MaxRoom via mqtt
[2022-12-20 09:56:51] frigate.mqtt                   INFO    : Turning off detection for 09_MaxRoom via mqtt
[2022-12-20 09:56:51] frigate.app                    INFO    : Capture process started for 06_YardNorthWest: 269
[2022-12-20 09:56:51] frigate.app                    INFO    : Capture process started for 07_YardSouthEast: 278
[2022-12-20 09:56:51] frigate.app                    INFO    : Capture process started for 08_MudRoom: 281
[2022-12-20 09:56:51] frigate.app                    INFO    : Capture process started for 09_MaxRoom: 285
[2022-12-20 09:56:51] frigate.app                    INFO    : Capture process started for 10_Basement: 291
[2022-12-20 09:56:51] frigate.app                    INFO    : Capture process started for 11_Office: 296
[2022-12-20 09:56:51] ws4py                          INFO    : Using epoll
[2022-12-20 09:56:58] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:46486]

FFprobe output from your camera

Can't get output from ffprobe for any camera. Help me understand how to run this; I keep getting `method DESCRIBE failed: 404`.
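A way to run it (a sketch: the container name `frigate` is an assumption based on a typical compose setup, and the URL is the first camera's detect stream from the config above):

```shell
# Probe the detect stream from inside the Frigate container (adjust the
# container name to your compose service name). A "method DESCRIBE
# failed: 404" usually means the RTSP path is wrong for that camera, so
# double-check the URL; forcing TCP transport can also help rule out
# UDP packet issues. Quote the URL because of the '&'.
docker exec -it frigate ffprobe -rtsp_transport tcp \
  "rtsp://frigate:camerafrigatepassword@10.0.230.45:554/cam/realmonitor?channel=1&subtype=1"
```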

Frigate stats

No response

Operating system

Other Linux

Install method

Docker Compose

Coral version

USB

Network connection

Wired

Camera make and model

Amcrest IP4M

Any other information that may be helpful

Ubuntu 22.04, HW accel with AMD W4100

NickM-27 commented 1 year ago

A few things:

detectors:
  # Required: name of the detector
  coral:
    # Required: type of the detector
    # Valid values are 'edgetpu' (requires device property below) and 'cpu'.
    type: edgetpu
    # Optional: device name as defined here: https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api
    device: usb
    # Optional: num_threads value passed to the tflite.Interpreter (default: shown below)
    # This value is only used for CPU types
    num_threads: 4

there is no need for you to set num_threads as that option is not used for the USB coral.

I also noticed you haven't set detect -> height / width for any of your cameras, which means that if those streams don't have a native size of 1280x720 (the default), Frigate is spending CPU resources resizing every stream to that size. I'd recommend setting the actual size of each camera's detect stream.
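For reference, per-camera sizes go under each camera's `detect` key; a sketch (the 704x480 values here are placeholders and should be whatever each detect-role substream actually outputs):

```yaml
cameras:
  01_HouseFront:
    detect:
      width: 704   # native width of the substream with the detect role
      height: 480  # native height of that substream
```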

As far as the actual issue goes, what does the debug page show for camera, process, and detection fps for that camera? What happens when you view the debug live view with bounding boxes enabled — do you see objects being tagged?

protocol6v commented 1 year ago

Thanks for the input! I have made those adjustments as recommended. The only downside now is that the preview/live views for some cameras are stretched. Is there any way to set the aspect ratio to fix this?

Back on the original issue: the debug page shows nearly identical statistics for the non-functioning camera as for all the other cameras (the camera in question is 10_Basement)...

[screenshot]

In the debug live view with bounding boxes, it says it is displaying at 5.0 fps but does not show anything: no boxes are drawn when objects enter view, and no new detection events are logged.

If I remove any one camera, 10_Basement starts working fine again. It seems to happen only after adding an eleventh.

NickM-27 commented 1 year ago

Only downside now is the preview/live views for some cameras as stretched. Is there any fix for this to set the aspect ratio?

If you set the size the same as the native stream, it won't be stretched or resized at all. An example of what you're talking about, along with the stream/config, would be helpful.

What is your CPU usage at? It sounds like the CPU or your hardware acceleration might be maxed out and not working for the extra camera. You might try setting

11_Office:
  ffmpeg:
    hwaccel_args: []

for that camera and see if it works.

protocol6v commented 1 year ago

That did the trick, but I'm not seeing very high CPU usage (45% at the highest) or GPU usage (radeontop shows spikes of ~10% but generally around 5%). Any ideas on what may be wrong there? It is an AMD W4100 for the hwaccel.

For the stretching, I changed the global detect config to this, since these are the dimensions of the majority of the cameras:

detect:
  # Optional: width of the frame for the input with the detect role (default: shown below)
  width: 704
  # Optional: height of the frame for the input with the detect role (default: shown below)
  height: 480
  # Optional: desired fps for your camera for the input with the detect role (default: shown below)
  # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
  fps: 15

and for the other cameras, I specified their dimensions in their individual configs, such as:

11_Office:
  detect:
    width: 640
    height: 480
  ffmpeg:
    hwaccel_args: []

Did I do this wrong?

NickM-27 commented 1 year ago

An example of where you see the stretching might be helpful. You should also set live -> height individually for the cameras that don't match the global size.

That did the trick, but I'm not seeing very high CPU usage (45% at the highest) or GPU usage (radeontop shows spikes of ~10% but generally around 5%). Any ideas on what may be wrong there? It is an AMD W4100 for the hwaccel.

A GPU has only so many stream processors; it is not the case that it will climb to 100% usage given enough streams. It also depends on where you're looking in radeontop, since that aggregates different parts of the GPU, while stream decoding is handled by a very specific hardware component.
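One way to inspect the fixed-function decoder directly (separate from the aggregated load radeontop shows) is `vainfo` from the libva-utils package; this is a sketch that assumes the same render node as the `hwaccel_args` in the config above:

```shell
# List the VA-API profiles/entrypoints exposed by the render node; each
# VAProfile.../VAEntrypointVLD line is a codec the hardware decoder
# supports. (Device path taken from the hwaccel config above.)
vainfo --display drm --device /dev/dri/renderD128
```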

protocol6v commented 1 year ago

Makes sense, thanks for explaining. I guess I will limit HW accel to 10 cameras until I get a better card!

When not stretched (but yes, otherwise terrible quality):

image

When stretched:

image

And also birdseye has black bars on all camera sides now:

What's throwing me off is that the height is the same on all cameras; it's the width that varies (either 704 or 640). I tried setting the live dimensions on them, but it didn't help.

NickM-27 commented 1 year ago

I think you might be misunderstanding. 640x480 is not 16:9, it is 4:3, so actually the top image is the one that is stretched and the second image is what the camera is actually presenting. This is very common with Dahua, Amcrest, etc. cameras, where the main stream is 16:9 and the sub stream is 4:3.
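
The arithmetic behind this, as a quick illustrative sketch (nothing here is Frigate code; the resolutions are the ones discussed above):

```python
# Compare the aspect ratios of the stream sizes discussed in this thread.
def aspect_ratio(width, height):
    """Return width/height as a plain float for easy comparison."""
    return width / height

print(round(aspect_ratio(640, 480), 3))    # 4:3 sub stream  -> 1.333
print(round(aspect_ratio(704, 480), 3))    # 704x480 stream  -> 1.467
print(round(aspect_ratio(1920, 1080), 3))  # 16:9 main stream -> 1.778
```

Since 1.333 (4:3) is noticeably narrower than 1.778 (16:9), displaying a 16:9 scene in a 640x480 frame squishes it horizontally.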

NickM-27 commented 1 year ago

Assuming that the detect -> width / height are correct, Frigate will not adjust the aspect ratio of the camera feed anywhere in the UI.

protocol6v commented 1 year ago

Oh, I'm sure I am the problem here. So is there any way to "fix" this, or should I just revert back to how it was? It seems like detect is set correctly for each camera's sub stream resolution.

NickM-27 commented 1 year ago

I mean, if you prefer them all to be 16:9, you are more than welcome to set the resolution to a 16:9 value such as 854x480 (I'd still recommend setting it explicitly so it is clear how things are working when you view it in the future). You only need to be aware that doing so will increase the CPU usage somewhat.

protocol6v commented 1 year ago

What I am not understanding is why, when the resolution is set to the "correct" sub stream resolution (let's use the yardnortheast camera as an example), the image appears to be squished width-wise or stretched vertically, but when the resolutions are not set, it displays properly? Even for cameras whose sub stream is 4:3, they display properly when the resolution is not explicitly set.

NickM-27 commented 1 year ago

> the image appears to be squished width-wise or stretched vertically, but when the resolutions are not set, it displays properly? Even for cameras whose sub stream is 4:3, they display properly when the resolution is not explicitly set.

Your definition of "properly" is incorrect in this context. If a stream is 4:3 and the detect resolution is set to the native value, then it is showing exactly as the stream is coming from the camera. I'd encourage you to use VLC or some other RTSP viewer to view the sub stream and main stream directly; you'll notice the difference in aspect ratio because the camera is giving the sub stream to Frigate with that squished view, and Frigate is not introducing it.
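
If you prefer a command line check over VLC, `ffprobe` (which ships with ffmpeg) can report the encoded frame size and aspect ratio directly. The RTSP path below is the common Dahua/Amcrest sub stream URL and is only a placeholder; substitute your camera's actual credentials and address:

```shell
# Print the coded resolution and display aspect ratio of the sub stream.
ffprobe -v error -select_streams v:0 \
  -show_entries stream=width,height,display_aspect_ratio \
  -of default=noprint_wrappers=1 \
  "rtsp://user:pass@camera-ip:554/cam/realmonitor?channel=1&subtype=1"
```

If this prints `width=640` and `height=480`, the camera really is encoding a 4:3 frame, and any 16:9 display is the player stretching it back out.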

NickM-27 commented 1 year ago

This of course makes sense, because the camera is capturing a 16:9 scene but presenting the sub stream in a frame that is 4:3 (in this case 640x480).

NickM-27 commented 1 year ago

Here is an example directly from my amcrest camera

Screen Shot 2022-12-20 at 10 21 42 AM

protocol6v commented 1 year ago

It seems like the issue is in what the camera is sending: in the Amcrest settings page, the sub stream is set to 640x480, but when I connected to it with VLC it displayed as 16:9, not 4:3.

EDIT: yep, exactly what you showed.

NickM-27 commented 1 year ago

So you can stretch the sub stream back out by setting the resolution to something that is 16:9 (returning it to looking normal), but it will just use more CPU. Unfortunately, that is a camera issue if it doesn't offer a sub stream with a 16:9 aspect ratio.
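
A sketch of that workaround for one camera (the camera name follows the user's naming; 854x480 is one hedged choice of 16:9 size, and any 16:9 pair works):

```yaml
cameras:
  11_office:
    detect:
      # Upscale the 640x480 (4:3) sub stream to a 16:9 frame.
      # This restores the natural-looking aspect ratio at the
      # cost of some extra CPU for the resize.
      width: 854
      height: 480
```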

protocol6v commented 1 year ago

This definitely seems to be a camera issue; I'm just not wrapping my head around why it's acting this way, as when I connect to either the main stream or sub stream with VLC, they both display (what I call "properly") in 16:9, even though the sub stream settings on the camera are supposed to be 640x480.

image

NickM-27 commented 1 year ago

That's fine, but what does the media info in VLC show the resolution as for the sub stream? Odds are VLC is just stretching it back out for you.

protocol6v commented 1 year ago

It does show as 640x480, actually. I'm going to keep digging into this to try to understand. VLC does seem to be "fixing" it, and if I scale the window up to the size of the main stream, it matches up perfectly. If I set it to 4:3, it is squished.

Thank you for all your help!

NickM-27 commented 1 year ago

Will go ahead and close this then, as the Frigate side seems to be resolved. Feel free to create a new issue if something else comes up.