blakeblackshear / frigate

NVR with realtime local object detection for IP cameras
https://frigate.video
MIT License

[EdgeTPU Support]: #4887

Closed Richfa closed 1 year ago

Richfa commented 1 year ago

Describe the problem you are having

The Coral accelerator isn't being detected. I had Frigate running on my Pi 4, and my new Home Assistant Yellow has arrived, happy days. But the Coral device doesn't seem to work with it at the moment.

[Screenshot 2023-01-03 at 23:03:30]

Version

0.11.1

Frigate config file

mqtt:
  # Required: host name
  host: 
  # Optional: port (default: shown below)
  port: 1883
  # Optional: topic prefix (default: shown below)
  # NOTE: must be unique if you are running multiple instances
  topic_prefix: frigate
  # Optional: client id (default: shown below)
  # NOTE: must be unique if you are running multiple instances
  client_id: frigate
  # Optional: user
  user: mqtt_user
  # Optional: password
  # NOTE: MQTT password can be specified with an environment variable that must begin with 'FRIGATE_'.
  #       e.g. password: '{FRIGATE_MQTT_PASSWORD}'
  password: 
  # Optional: interval in seconds for publishing stats (default: shown below)
  stats_interval: 60

# Optional: Detectors configuration. Defaults to a single CPU detector
detectors:
  # Required: name of the detector
  coral:
    # Required: type of the detector
    # Valid values are 'edgetpu' (requires device property below) and 'cpu'.
    type: edgetpu
    # Optional: device name as defined here: https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api
    device: usb
    # Optional: num_threads value passed to the tflite.Interpreter (default: shown below)
    # This value is only used for CPU types
    num_threads: 3

# Optional: Database configuration
database:
  # The path to store the SQLite DB (default: shown below)
  path: /media/frigate/frigate.db

# Optional: logger verbosity settings
logger:
  # Optional: Default log verbosity (default: shown below)
  default: info
  # Optional: Component specific logger overrides
  logs:
    frigate.event: debug

# Optional: set environment variables
environment_vars:
  EXAMPLE_VAR: value

# Optional: birdseye configuration
# NOTE: Can (enabled, mode) be overridden at the camera level
birdseye:
  # Optional: Enable birdseye view (default: shown below)
  enabled: True
  # Optional: Width of the output resolution (default: shown below)
  width: 1280
  # Optional: Height of the output resolution (default: shown below)
  height: 720
  # Optional: Encoding quality of the mpeg1 feed (default: shown below)
  # 1 is the highest quality, and 31 is the lowest. Lower quality feeds utilize less CPU resources.
  quality: 8
  # Optional: Mode of the view. Available options are: objects, motion, and continuous
  #   objects - cameras are included if they have had a tracked object within the last 30 seconds
  #   motion - cameras are included if motion was detected in the last 30 seconds
  #   continuous - all cameras are included always
  mode: objects

# Optional: ffmpeg configuration

# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
detect:
  # Optional: width of the frame for the input with the detect role (default: shown below)
  width: 1280
  # Optional: height of the frame for the input with the detect role (default: shown below)
  height: 720
  # Optional: desired fps for your camera for the input with the detect role (default: shown below)
  # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
  fps: 5
  # Optional: enables detection for the camera (default: True)
  # This value can be set via MQTT and will be updated in startup based on retained value
  enabled: True
  # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
  max_disappeared: 25
  # Optional: Configuration for stationary object tracking
  stationary:
    # Optional: Frequency for confirming stationary objects (default: shown below)
    # When set to 0, object detection will not confirm stationary objects until movement is detected.
    # If set to 10, object detection will run to confirm the object still exists on every 10th frame.
    interval: 0
    # Optional: Number of frames without a position change for an object to be considered stationary (default: 10x the frame rate or 10s)
    threshold: 50
    # Optional: Define a maximum number of frames for tracking a stationary object (default: not set, track forever)
    # This can help with false positives for objects that should only be stationary for a limited amount of time.
    # It can also be used to disable stationary object tracking. For example, you may want to set a value for person, but leave
    # car at the default.
    # WARNING: Setting these values overrides default behavior and disables stationary object tracking.
    #          There are very few situations where you would want it disabled. It is NOT recommended to
    #          copy these values from the example config into your config unless you know they are needed.
    max_frames:
      # Optional: Default for all object types (default: not set, track forever)
      default: 3000
      # Optional: Object specific values
      objects:
        person: 1000

# Optional: Object configuration
# NOTE: Can be overridden at the camera level
objects:
  # Optional: list of objects to track from labelmap.txt (default: shown below)
  track:
    - person
  # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
  # Checks based on the bottom center of the bounding box of the object.
  # NOTE: This mask is COMBINED with the object type specific mask below
  # Optional: filters to reduce false positives for specific object types
  filters:
    person:
      # Optional: minimum width*height of the bounding box for the detected object (default: 0)
      min_area: 5000
      # Optional: maximum width*height of the bounding box for the detected object (default: 24000000)
      max_area: 100000
      # Optional: minimum width/height of the bounding box for the detected object (default: 0)
      min_ratio: 0.5
      # Optional: maximum width/height of the bounding box for the detected object (default: 24000000)
      max_ratio: 2.0
      # Optional: minimum score for the object to initiate tracking (default: shown below)
      min_score: 0.5
      # Optional: minimum decimal percentage for tracked object's computed score to be considered a true positive (default: shown below)
      threshold: 0.7
      # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object

# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
motion:
  # Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
  # Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
  # The value should be between 1 and 255.
  threshold: 25
  # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: 30)
  # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will
  # make motion detection more sensitive to smaller moving objects.
  # As a rule of thumb:
  #  - 15 - high sensitivity
  #  - 30 - medium sensitivity
  #  - 50 - low sensitivity
  contour_area: 30
  # Optional: Alpha value passed to cv2.accumulateWeighted when averaging the motion delta across multiple frames (default: shown below)
  # Higher values mean the current frame impacts the delta a lot, and a single raindrop may register as motion.
  # Too low and a fast-moving person won't be detected as motion.
  delta_alpha: 0.2
  # Optional: Alpha value passed to cv2.accumulateWeighted when averaging frames to determine the background (default: shown below)
  # Higher values mean the current frame impacts the average a lot, and a new object will be averaged into the background faster.
  # Low values will cause things like moving shadows to be detected as motion for longer.
  # https://www.geeksforgeeks.org/background-subtraction-in-an-image-using-concept-of-running-average/
  frame_alpha: 0.2
  # Optional: Height of the resized motion frame  (default: 50)
  # This operates as an efficient blur alternative. Higher values will result in more granular motion detection at the expense
  # of higher CPU usage. Lower values result in less CPU, but small changes may not register as motion.
  frame_height: 50
  # Optional: motion mask
  # NOTE: see docs for more detailed info on creating masks
  # Optional: improve contrast (default: shown below)
  # Enables dynamic contrast improvement. This should help improve night detections at the cost of making motion detection more sensitive
  # for daytime.
  improve_contrast: True
  # Optional: Delay when updating camera motion through MQTT from ON -> OFF (default: shown below).
  mqtt_off_delay: 30

# Optional: Record configuration
# NOTE: Can be overridden at the camera level
record:
  # Optional: Enable recording (default: shown below)
  # WARNING: If recording is disabled in the config, turning it on via
  #          the UI or MQTT later will have no effect.
  # WARNING: Frigate does not currently support limiting recordings based
  #          on available disk space automatically. If using recordings,
  #          you must specify retention settings for a number of days that
  #          will fit within the available disk space of your drive or Frigate
  #          will crash.
  enabled: False
  # Optional: Number of minutes to wait between cleanup runs (default: shown below)
  # This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
  expire_interval: 60
  # Optional: Retention settings for recording
  retain:
    # Optional: Number of days to retain recordings regardless of events (default: shown below)
    # NOTE: This should be set to 0 and retention should be defined in events section below
    #       if you only want to retain recordings of events.
    days: 0
    # Optional: Mode for retention. Available options are: all, motion, and active_objects
    #   all - save all recording segments regardless of activity
    #   motion - save all recording segments with any detected motion
    #   active_objects - save all recording segments with active/moving objects
    # NOTE: this mode only applies when the days setting above is greater than 0
    mode: all
  # Optional: Event recording settings
  events:
    # Optional: Number of seconds before the event to include (default: shown below)
    pre_capture: 5
    # Optional: Number of seconds after the event to include (default: shown below)
    post_capture: 5
    # Optional: Objects to save recordings for. (default: all tracked objects)
    objects:
      - person
    # Optional: Restrict recordings to objects that entered any of the listed zones (default: no required zones)
    required_zones: []
    # Optional: Retention settings for recordings of events
    retain:
      # Required: Default retention days (default: shown below)
      default: 10
      # Optional: Mode for retention. (default: shown below)
      #   all - save all recording segments for events regardless of activity
      #   motion - save all recording segments for events with any detected motion
      #   active_objects - save all recording segments for events with active/moving objects
      #
      # NOTE: If the retain mode for the camera is more restrictive than the mode configured
      #       here, the segments will already be gone by the time this mode is applied.
      #       For example, if the camera retain mode is "motion", the segments without motion are
      #       never stored, so setting the mode to "all" here won't bring them back.
      mode: motion
      # Optional: Per object retention days
      objects:
        person: 15

# Optional: Configuration for the jpg snapshots written to the clips directory for each event
# NOTE: Can be overridden at the camera level
snapshots:
  # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
  # This value can be set via MQTT and will be updated in startup based on retained value
  enabled: True
  # Optional: save a clean PNG copy of the snapshot image (default: shown below)
  clean_copy: True
  # Optional: print a timestamp on the snapshots (default: shown below)
  timestamp: False
  # Optional: draw bounding box on the snapshots (default: shown below)
  bounding_box: False
  # Optional: crop the snapshot (default: shown below)
  crop: False
  # Optional: height to resize the snapshot to (default: original size)
  height: 175
  # Optional: Restrict snapshots to objects that entered any of the listed zones (default: no required zones)
  required_zones: []
  # Optional: Camera override for retention settings (default: global values)
  retain:
    # Required: Default retention days (default: shown below)
    default: 10
    # Optional: Per object retention days
    objects:
      person: 15

# Optional: RTMP configuration
# NOTE: Can be overridden at the camera level
rtmp:
  # Optional: Enable the RTMP stream (default: True)
  enabled: True

# Optional: Live stream configuration for WebUI
# NOTE: Can be overridden at the camera level
live:
  # Optional: Set the height of the live stream. (default: 720)
  # This must be less than or equal to the height of the detect stream. Lower resolutions
  # reduce bandwidth required for viewing the live stream. Width is computed to match known aspect ratio.
  height: 720
  # Optional: Set the encode quality of the live stream (default: shown below)
  # 1 is the highest quality, and 31 is the lowest. Lower quality feeds utilize less CPU resources.
  quality: 8

# Required
cameras:
  # Required: name of the camera
  garage_n_gate_cam:
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: A list of input streams for the camera. See documentation for more information.
      inputs:
        # Required: the path to the stream
        # NOTE: path may include environment variables, which must begin with 'FRIGATE_' and be referenced in {}
        - path: 
          # Required: list of roles for this stream. valid values are: detect,record,rtmp
          # NOTICE: In addition to assigning the record and rtmp roles,
          # they must also be enabled in the camera config (see the sketch after this config).
          roles:
            - detect
            - rtmp
          # Optional: stream specific global args (default: inherit)
          # global_args:
          # Optional: stream specific hwaccel args (default: inherit)
          # hwaccel_args:
          # Optional: stream specific input args (default: inherit)
          # input_args:
      # Optional: camera specific global args (default: inherit)
      # global_args:
      # Optional: camera specific hwaccel args (default: inherit)
      # hwaccel_args:
      # Optional: camera specific input args (default: inherit)
      # input_args:
      # Optional: camera specific output args (default: inherit)
      # output_args:

    # Optional: timeout for highest scoring image before allowing it
    # to be replaced by a newer image. (default: shown below)
    best_image_timeout: 60

    # Optional: Configuration for the jpg snapshots published via MQTT
    mqtt:
      # Optional: Enable publishing snapshot via mqtt for camera (default: shown below)
      # NOTE: Only applies to publishing image data to MQTT via 'frigate/<camera_name>/<object_name>/snapshot'.
      # All other messages will still be published.
      enabled: True
      # Optional: print a timestamp on the snapshots (default: shown below)
      timestamp: True
      # Optional: draw bounding box on the snapshots (default: shown below)
      bounding_box: True
      # Optional: crop the snapshot (default: shown below)
      crop: True
      # Optional: height to resize the snapshot to (default: shown below)
      height: 270
      # Optional: jpeg encode quality (default: shown below)
      quality: 70
      # Optional: Restrict mqtt messages to objects that entered any of the listed zones (default: no required zones)
      required_zones: []

    # Optional: Configuration for how camera is handled in the GUI.
    ui:
      # Optional: Adjust sort order of cameras in the UI. Larger numbers come later (default: shown below)
      # By default the cameras are sorted alphabetically.
      order: 0
      # Optional: Whether or not to show the camera in the Frigate UI (default: shown below)
      dashboard: True
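
Side note on the roles comment in the ffmpeg inputs section above: a role such as record only takes effect if the matching feature is also enabled on the camera. A minimal sketch of that pairing, using a hypothetical camera name and a hypothetical RTSP path (not something this issue requires, just an illustration of the schema already shown in this config):

cameras:
  example_cam:
    ffmpeg:
      inputs:
        # Hypothetical RTSP URL; substitute the camera's real stream
        - path: rtsp://user:pass@192.0.2.10:554/stream
          roles:
            - detect
            - record
    record:
      # The record role above only produces recordings when this is enabled
      enabled: True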

docker-compose file or Docker CLI command

n/a

Relevant log output

[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[cont-init.d] executing container initialization scripts...
[cont-init.d] done.
[services.d] starting services
[services.d] done.
[2023-01-03 23:00:13] frigate.app                    INFO    : Starting Frigate (0.11.1-2eada21)
[2023-01-03 23:00:14] frigate.app                    INFO    : Creating directory: /tmp/cache
Starting migrations
[2023-01-03 23:00:14] peewee_migrate                 INFO    : Starting migrations
There is nothing to migrate
[2023-01-03 23:00:14] peewee_migrate                 INFO    : There is nothing to migrate
[2023-01-03 23:00:14] frigate.app                    INFO    : Output process started: 216
[2023-01-03 23:00:14] frigate.app                    INFO    : Camera processor started for garage_n_gate_cam: 219
[2023-01-03 23:00:14] detector.coral                 INFO    : Starting detection process: 215
[2023-01-03 23:00:14] frigate.edgetpu                INFO    : Attempting to load TPU as usb
[2023-01-03 23:00:14] frigate.app                    INFO    : Capture process started for garage_n_gate_cam: 220
[2023-01-03 23:00:14] ws4py                          INFO    : Using epoll
[2023-01-03 23:00:14] ws4py                          INFO    : Using epoll
Process detector:coral:
[2023-01-03 23:00:39] frigate.edgetpu                ERROR   : No EdgeTPU was detected. If you do not have a Coral device yet, you must configure CPU detectors.
Traceback (most recent call last):
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 160, in load_delegate
    delegate = Delegate(library, options)
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 119, in __init__
    raise ValueError(capture.message)
ValueError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/frigate/frigate/edgetpu.py", line 135, in run_detector
    object_detector = LocalObjectDetector(
  File "/opt/frigate/frigate/edgetpu.py", line 43, in __init__
    edge_tpu_delegate = load_delegate("libedgetpu.so.1.0", device_config)
  File "/usr/lib/python3/dist-packages/tflite_runtime/interpreter.py", line 162, in load_delegate
    raise ValueError('Failed to load delegate from {}\n{}'.format(
ValueError: Failed to load delegate from libedgetpu.so.1.0
[2023-01-03 23:00:44] frigate.watchdog               INFO    : Detection appears to have stopped. Exiting frigate...
[2023-01-03 23:00:44] frigate.app                    INFO    : Stopping...
[2023-01-03 23:00:44] ws4py                          INFO    : Closing all websockets with [1001] 'Server is shutting down'
[2023-01-03 23:00:44] frigate.record                 INFO    : Exiting recording cleanup...
[2023-01-03 23:00:44] frigate.events                 INFO    : Exiting event cleanup...
[2023-01-03 23:00:44] frigate.stats                  INFO    : Exiting watchdog...
[2023-01-03 23:00:44] frigate.watchdog               INFO    : Exiting watchdog...
[2023-01-03 23:00:44] frigate.record                 INFO    : Exiting recording maintenance...
[2023-01-03 23:00:44] frigate.events                 INFO    : Exiting event processor...
[2023-01-03 23:00:47] frigate.object_processing      INFO    : Exiting object processor...
[2023-01-03 23:00:47] peewee.sqliteq                 INFO    : writer received shutdown request, exiting.
/usr/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 5 leaked shared_memory objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d '
[cmd] python3 exited 0
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
[s6-finish] sending all processes the TERM signal.
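
For reference, the "No EdgeTPU was detected" error in the log above suggests configuring a CPU detector. As a temporary stopgap while the Coral power issue is sorted out, the detectors section could be switched to CPU. A minimal sketch following the detector schema shown in the config above (the detector name is arbitrary; CPU detection is much slower than the Coral):

detectors:
  cpu1:
    # 'cpu' runs inference on the CPU instead of the EdgeTPU
    type: cpu
    # Optional: number of threads passed to the tflite.Interpreter
    num_threads: 3

Once the Coral is working again (for example, on an externally powered hub), switching back to the edgetpu detector restores hardware acceleration.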

Operating system

HassOS

Install method

HassOS Addon

Coral version

USB

Any other information that may be helpful

Thank you, and happy new year.

NickM-27 commented 1 year ago

We've seen the Yellow having issues; it is usually power delivery related. Are you using an externally powered hub?

Richfa commented 1 year ago

Thank you for the rapid response. I just plugged the USB into the Home Assistant Yellow; is there another way to do it? Sorry, I am still learning.

NickM-27 commented 1 year ago

We highly recommend getting a USB hub that has its own power source; the Coral uses a lot of power and can be too much for low-power devices like a Pi.

Richfa commented 1 year ago

How will the USB hub connect to the Home Assistant Yellow?

Richfa commented 1 year ago

Is there a way to increase the voltage? The Pi 4 was running beautifully before.

NickM-27 commented 1 year ago

A USB hub has a dedicated port that connects to the host device.

That would be a question for the manufacturers of the Yellow.

Richfa commented 1 year ago

Thank you, I will get a USB hub. Thanks for your help.

github-actions[bot] commented 1 year ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.