blakeblackshear / frigate

NVR with realtime local object detection for IP cameras
https://frigate.video
MIT License

[HW Accel Support]: Unable to pass gpu through virtualbox on windows #3736

Closed malosaa closed 2 years ago

malosaa commented 2 years ago

Describe the problem you are having

I tried all the options in the documentation and none of them work.

Version

2022.8.7

Frigate config file

mqtt:

  host: 192.168.1.12
  port: 1883
  user: XXXX
  password: XXXXXXXXXXXXXX
  topic_prefix: frigate3
  client_id: frigate3
  stats_interval: 60

# Optional: Detectors configuration. Defaults to a single CPU detector
detectors:
  coral:
    # Required: type of the detector
    # Valid values are 'edgetpu' (requires device property below) and 'cpu'.
    type: edgetpu
    # Optional: device name as defined here: https://coral.ai/docs/edgetpu/multiple-edgetpu/#using-the-tensorflow-lite-python-api
    device: usb
    # Optional: num_threads value passed to the tflite.Interpreter (default: shown below)
    # This value is only used for CPU types
    # num_threads: 3
#  cpu1:
#    type: cpu
#    num_threads: 3
#  cpu2:
#    type: cpu
#    num_threads: 3
rtmp:
  enabled: false

# Optional: Database configuration
database:
  # The path to store the SQLite DB (default: shown below)
  path: /db/frigate.db

# Optional: birdseye configuration
birdseye:
  # Optional: Enable birdseye view (default: shown below)
  enabled: True
  # Optional: Width of the output resolution (default: shown below)
  width: 1280
  # Optional: Height of the output resolution (default: shown below)
  height: 720
  # Optional: Encoding quality of the mpeg1 feed (default: shown below)
  # 1 is the highest quality, and 31 is the lowest. Lower quality feeds utilize less CPU resources.
  quality: 8
  # Optional: Mode of the view. Available options are: objects, motion, and continuous
  #   objects - cameras are included if they have had a tracked object within the last 30 seconds
  #   motion - cameras are included if motion was detected in the last 30 seconds
  #   continuous - all cameras are included always
  mode: objects

# Optional: ffmpeg configuration
ffmpeg:
  hwaccel_args: []
  # Optional: global input args (default: shown below)
  output_args:
    # Optional: output args for record streams (default: shown below)
    record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -c:a aac

# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
detect:
  # Optional: width of the frame for the input with the detect role (default: shown below)
  width: 1024
  # Optional: height of the frame for the input with the detect role (default: shown below)
  height: 720
  # Optional: desired fps for your camera for the input with the detect role (default: shown below)
  # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
  fps: 5
  # Optional: enables detection for the camera (default: True)
  # This value can be set via MQTT and will be updated in startup based on retained value
  enabled: True
  # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
  max_disappeared: 25
  # Optional: Configuration for stationary object tracking
objects:
  # Optional: list of objects to track from labelmap.txt (default: shown below)
  track:
    - person
    - dog
    - bicycle
    - bird
    - sheep
  # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
  # Checks based on the bottom center of the bounding box of the object.
  # NOTE: This mask is COMBINED with the object type specific mask below
  # mask: 0,0,1000,0,1000,200,0,200
  # Optional: filters to reduce false positives for specific object types
  filters:
    person:
      # Optional: minimum width*height of the bounding box for the detected object (default: 0)
      min_area: 5000
      # Optional: maximum width*height of the bounding box for the detected object (default: 24000000)
      max_area: 100000
      # Optional: minimum score for the object to initiate tracking (default: shown below)
      min_score: 0.5
      # Optional: minimum decimal percentage for tracked object's computed score to be considered a true positive (default: shown below)
      threshold: 0.7
      # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object
      mask: 0,0,1000,0,1000,200,0,200

# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
motion:
  # Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
  # Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
  # The value should be between 1 and 255.
  threshold: 15
  # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: 30)
  # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will
  # make motion detection more sensitive to smaller moving objects.
  # As a rule of thumb:
  #  - 15 - high sensitivity
  #  - 30 - medium sensitivity
  #  - 50 - low sensitivity
  contour_area: 30
  # Optional: Alpha value passed to cv2.accumulateWeighted when averaging the motion delta across multiple frames (default: shown below)
  # Higher values mean the current frame impacts the delta a lot, and a single raindrop may register as motion.
  # Too low and a fast moving person wont be detected as motion.
#  delta_alpha: 0.2
  # Optional: Alpha value passed to cv2.accumulateWeighted when averaging frames to determine the background (default: shown below)
  # Higher values mean the current frame impacts the average a lot, and a new object will be averaged into the background faster.
  # Low values will cause things like moving shadows to be detected as motion for longer.
  # https://www.geeksforgeeks.org/background-subtraction-in-an-image-using-concept-of-running-average/
  frame_alpha: 0.2
  # Optional: Height of the resized motion frame  (default: 50)
  # This operates as an efficient blur alternative. Higher values will result in more granular motion detection at the expense
  # of higher CPU usage. Lower values result in less CPU, but small changes may not register as motion.
  frame_height: 50
  # Optional: motion mask
  # NOTE: see docs for more detailed info on creating masks
  mask: -0,653,0,662,0,698,0,720,147,720,335,720,0,481
  # Optional: improve contrast (default: shown below)
  # Enables dynamic contrast improvement. This should help improve night detections at the cost of making motion detection more sensitive
  # for daytime.
  improve_contrast: False

# Optional: Record configuration
# NOTE: Can be overridden at the camera level
record:
  # Optional: Enable recording (default: shown below)
  # WARNING: If recording is disabled in the config, turning it on via
  #          the UI or MQTT later will have no effect.
  # WARNING: Frigate does not currently support limiting recordings based
  #          on available disk space automatically. If using recordings,
  #          you must specify retention settings for a number of days that
  #          will fit within the available disk space of your drive or Frigate
  #          will crash.
  enabled: True
  # Optional: Number of minutes to wait between cleanup runs (default: shown below)
  # This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
  expire_interval: 15
  # Optional: Retention settings for recording
  retain:
    # Optional: Number of days to retain recordings regardless of events (default: shown below)
    # NOTE: This should be set to 0 and retention should be defined in events section below
    #       if you only want to retain recordings of events.
    days: 0
    # Optional: Mode for retention. Available options are: all, motion, and active_objects
    #   all - save all recording segments regardless of activity
    #   motion - save all recordings segments with any detected motion
    #   active_objects - save all recording segments with active/moving objects
    # NOTE: this mode only applies when the days setting above is greater than 0
    mode: all
  # Optional: Event recording settings
  events:
    # Optional: Number of seconds before the event to include (default: shown below)
    pre_capture: 10
    # Optional: Number of seconds after the event to include (default: shown below)
    post_capture: 10
    # Optional: Objects to save recordings for. (default: all tracked objects)
    objects:
      - person
    # Optional: Restrict recordings to objects that entered any of the listed zones (default: no required zones)
    required_zones: []
    # Optional: Retention settings for recordings of events
    retain:
      # Required: Default retention days (default: shown below)
      default: 10
      # Optional: Mode for retention. (default: shown below)
      #   all - save all recording segments for events regardless of activity
      #   motion - save all recordings segments for events with any detected motion
      #   active_objects - save all recording segments for event with active/moving objects
      #
      # NOTE: If the retain mode for the camera is more restrictive than the mode configured
      #       here, the segments will already be gone by the time this mode is applied.
      #       For example, if the camera retain mode is "motion", the segments without motion are
      #       never stored, so setting the mode to "all" here won't bring them back.
      mode: motion
      # Optional: Per object retention days
      objects:
        person: 15

# Optional: Configuration for the jpg snapshots written to the clips directory for each event
# NOTE: Can be overridden at the camera level
snapshots:
  # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
  # This value can be set via MQTT and will be updated in startup based on retained value
  enabled: True
  # Optional: print a timestamp on the snapshots (default: shown below)
  timestamp: False
  # Optional: draw bounding box on the snapshots (default: shown below)
  bounding_box: True
  # Optional: crop the snapshot (default: shown below)
  retain:
    # Required: Default retention days (default: shown below)
    default: 10

# Optional: RTMP configuration
# NOTE: Can be overridden at the camera level
rtmp:
  # Optional: Enable the RTMP stream (default: True)
  enabled: True

# Optional: Live stream configuration for WebUI
# NOTE: Can be overridden at the camera level
live:
  # Optional: Set the height of the live stream. (default: 720)
  # This must be less than or equal to the height of the detect stream. Lower resolutions
  # reduce bandwidth required for viewing the live stream. Width is computed to match known aspect ratio.
  height: 720
  # Optional: Set the encode quality of the live stream (default: shown below)
  # 1 is the highest quality, and 31 is the lowest. Lower quality feeds utilize less CPU resources.
  quality: 8

# Optional: in-feed timestamp style configuration
# NOTE: Can be overridden at the camera level
timestamp_style:
  # Optional: Position of the timestamp (default: shown below)
  #           "tl" (top left), "tr" (top right), "bl" (bottom left), "br" (bottom right)
  position: "tl"
  # Optional: Format specifier conform to the Python package "datetime" (default: shown below)
  #           Additional Examples:
  #             german: "%d.%m.%Y %H:%M:%S"
  format: "%m/%d/%Y %H:%M:%S"
  # Optional: Color of font
  color:
    # All Required when color is specified (default: shown below)
    red: 255
    green: 255
    blue: 255
  # Optional: Line thickness of font (default: shown below)
  thickness: 2
  # Optional: Effect of lettering (default: shown below)
  #           None (No effect),
  #           "solid" (solid background in inverse color of font)
  #           "shadow" (shadow for font)
  effect: solid

# Required
cameras:
  # Required: name of the camera
  driveway:
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: A list of input streams for the camera. See documentation for more information.
      inputs:
        - path: rtsp://192.168.1.110:554/XXXXXXXXXXXXXXXstream=1.sdp?real_stream
          roles:
            - detect
            - rtmp
        - path: rtsp://192.168.1.110:554/XXXXXXXXXXXXXXXXXXstream=0.sdp?real_stream
          roles:
            - record
            # - clips
    best_image_timeout: 60

    # Optional: zones for this camera
    zones:
      # Required: name of the zone
      # NOTE: This must be different than any camera names, but can match with another zone on another
      #       camera.
      drivewayclose_in:
        coordinates: 164,708,761,720,825,354,443,283,29,587,54,595
      peoplezone:
        coordinates: 965,336,0,200,0,0,835,0,1024,0,1023,138,985,366
      Front_stones:
        coordinates: 426,280,29,565,0,261,86,226
        # Optional: List of objects that can trigger this zone (default: all tracked objects)
        # Optional: Zone level object filters.
        # NOTE: The global and camera filters are applied upstream.

docker-compose file or Docker CLI command

nope

Relevant log output

nope

FFprobe output from your camera

nope

Operating system

Windows

Install method

HassOS Addon

Network connection

Wired

Camera make and model

BESDER ONVIF

Any other information that may be helpful

image

NickM-27 commented 2 years ago

What CPU do you have? Odds are it didn't work because you didn't pass the iGPU through to the VirtualBox VM.

malosaa commented 2 years ago

iGPU through to the VirtualBox VM

How do I do that?

This is my CPU: image

I'm on Windows 11, and it probably doesn't support iGPU passthrough.

NickM-27 commented 2 years ago

I do not know how to pass it through to VirtualBox.

But another thing that likely tripped you up is that you have an AMD GPU. Did you set the LIBVA_DRIVER_NAME environment variable?

NickM-27 commented 2 years ago

From some googling, the passthrough may not be required; odds are you didn't set the environment variable.

malosaa commented 2 years ago

From some googling, the passthrough may not be required; odds are you didn't set the environment variable.

I did check; for Windows 11 it's not possible to do a passthrough.

And the environment variable doesn't work; it throws an error saying the config is not valid.

malosaa commented 2 years ago

Config Validation Errors


1 validation error for FrigateConfig
environment_vars
  value is not a valid dict (type=type_error.dict)

NickM-27 commented 2 years ago

And the environment variable doesn't work; it throws an error saying the config is not valid.

Then you likely defined it incorrectly, as it does work; that is what it is designed to do.

NickM-27 commented 2 years ago

Config Validation Errors

1 validation error for FrigateConfig
environment_vars
  value is not a valid dict (type=type_error.dict)

Please include your config showing how you defined it.

malosaa commented 2 years ago
mqtt:

  host: 192.168.1.12
  port: 1883
  user: XXXXXX
  password: XXXXX
  topic_prefix: frigate3
  client_id: frigate3
  stats_interval: 60

detectors:
  # Required: name of the detector
  coral:
    type: edgetpu
    device: usb
    num_threads: 3
rtmp:
  enabled: true

# Optional: Database configuration
database:
  # The path to store the SQLite DB (default: shown below)
 # path: /db/frigate.db
  path: /media/frigate/frigate.db

# Optional: ffmpeg configuration
ffmpeg:
  hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128
  output_args:
    # Optional: output args for record streams (default: shown below)
    record: -f segment -segment_time 10 -segment_format mp4 -reset_timestamps 1 -strftime 1 -c:v copy -c:a aac

# Optional: Detect configuration
# NOTE: Can be overridden at the camera level
detect:
  # Optional: width of the frame for the input with the detect role (default: shown below)
 # width: 1024
  # Optional: height of the frame for the input with the detect role (default: shown below)
  #height: 720
  # Optional: desired fps for your camera for the input with the detect role (default: shown below)
  # NOTE: Recommended value of 5. Ideally, try and reduce your FPS on the camera.
  # fps: 5
  # Optional: enables detection for the camera (default: True)
  # This value can be set via MQTT and will be updated in startup based on retained value
  enabled: True
  # Optional: Number of frames without a detection before frigate considers an object to be gone. (default: 5x the frame rate)
  max_disappeared: 25
  # Optional: Configuration for stationary object tracking
objects:
  # Optional: list of objects to track from labelmap.txt (default: shown below)
  track:
    - person
    - dog
    - bicycle
    - bird
    - sheep
  # Optional: mask to prevent all object types from being detected in certain areas (default: no mask)
  # Checks based on the bottom center of the bounding box of the object.
  # NOTE: This mask is COMBINED with the object type specific mask below
  # mask: 0,0,1000,0,1000,200,0,200
  # Optional: filters to reduce false positives for specific object types
  filters:
    person:
      # Optional: minimum width*height of the bounding box for the detected object (default: 0)
      min_area: 5000
      # Optional: maximum width*height of the bounding box for the detected object (default: 24000000)
      max_area: 100000
      # Optional: minimum score for the object to initiate tracking (default: shown below)
      min_score: 0.5
      # Optional: minimum decimal percentage for tracked object's computed score to be considered a true positive (default: shown below)
      threshold: 0.7
      # Optional: mask to prevent this object type from being detected in certain areas (default: no mask)
      # Checks based on the bottom center of the bounding box of the object
      mask: 0,0,1000,0,1000,200,0,200

# Optional: Motion configuration
# NOTE: Can be overridden at the camera level
motion:
  # Optional: The threshold passed to cv2.threshold to determine if a pixel is different enough to be counted as motion. (default: shown below)
  # Increasing this value will make motion detection less sensitive and decreasing it will make motion detection more sensitive.
  # The value should be between 1 and 255.
  threshold: 15
  # Optional: Minimum size in pixels in the resized motion image that counts as motion (default: 30)
  # Increasing this value will prevent smaller areas of motion from being detected. Decreasing will
  # make motion detection more sensitive to smaller moving objects.
  # As a rule of thumb:
  #  - 15 - high sensitivity
  #  - 30 - medium sensitivity
  #  - 50 - low sensitivity
  contour_area: 30
  # Optional: Alpha value passed to cv2.accumulateWeighted when averaging the motion delta across multiple frames (default: shown below)
  # Higher values mean the current frame impacts the delta a lot, and a single raindrop may register as motion.
  # Too low and a fast moving person wont be detected as motion.
#  delta_alpha: 0.2
  # Optional: Alpha value passed to cv2.accumulateWeighted when averaging frames to determine the background (default: shown below)
  # Higher values mean the current frame impacts the average a lot, and a new object will be averaged into the background faster.
  # Low values will cause things like moving shadows to be detected as motion for longer.
  # https://www.geeksforgeeks.org/background-subtraction-in-an-image-using-concept-of-running-average/
  frame_alpha: 0.2
  # Optional: Height of the resized motion frame  (default: 50)
  # This operates as an efficient blur alternative. Higher values will result in more granular motion detection at the expense
  # of higher CPU usage. Lower values result in less CPU, but small changes may not register as motion.
  frame_height: 50
  # Optional: motion mask
  # NOTE: see docs for more detailed info on creating masks
  mask: -0,653,0,662,0,698,0,720,147,720,335,720,0,481
  # Optional: improve contrast (default: shown below)
  # Enables dynamic contrast improvement. This should help improve night detections at the cost of making motion detection more sensitive
  # for daytime.
  improve_contrast: False

# Optional: Record configuration
# NOTE: Can be overridden at the camera level
record:
  # Optional: Enable recording (default: shown below)
  # WARNING: If recording is disabled in the config, turning it on via
  #          the UI or MQTT later will have no effect.
  # WARNING: Frigate does not currently support limiting recordings based
  #          on available disk space automatically. If using recordings,
  #          you must specify retention settings for a number of days that
  #          will fit within the available disk space of your drive or Frigate
  #          will crash.
  enabled: True
  # Optional: Number of minutes to wait between cleanup runs (default: shown below)
  # This can be used to reduce the frequency of deleting recording segments from disk if you want to minimize i/o
  expire_interval: 15
  # Optional: Retention settings for recording
  retain:
    # Optional: Number of days to retain recordings regardless of events (default: shown below)
    # NOTE: This should be set to 0 and retention should be defined in events section below
    #       if you only want to retain recordings of events.
    days: 0
    # Optional: Mode for retention. Available options are: all, motion, and active_objects
    #   all - save all recording segments regardless of activity
    #   motion - save all recordings segments with any detected motion
    #   active_objects - save all recording segments with active/moving objects
    # NOTE: this mode only applies when the days setting above is greater than 0
    mode: all
  # Optional: Event recording settings
  events:
    # Optional: Number of seconds before the event to include (default: shown below)
    pre_capture: 10
    # Optional: Number of seconds after the event to include (default: shown below)
#    post_capture: 10
    # Optional: Objects to save recordings for. (default: all tracked objects)
    objects:
      - person
    # Optional: Restrict recordings to objects that entered any of the listed zones (default: no required zones)
    required_zones: []
    # Optional: Retention settings for recordings of events
    retain:
      # Required: Default retention days (default: shown below)
      default: 10
      mode: motion
      # Optional: Per object retention days
      objects:
        person: 15

# Optional: Configuration for the jpg snapshots written to the clips directory for each event
# NOTE: Can be overridden at the camera level
snapshots:
  # Optional: Enable writing jpg snapshot to /media/frigate/clips (default: shown below)
  # This value can be set via MQTT and will be updated in startup based on retained value
  enabled: True
  # Optional: print a timestamp on the snapshots (default: shown below)
  timestamp: False
  # Optional: draw bounding box on the snapshots (default: shown below)
  bounding_box: True
  # Optional: crop the snapshot (default: shown below)
  retain:
    # Required: Default retention days (default: shown below)
    default: 10

# Optional: RTMP configuration
# NOTE: Can be overridden at the camera level
rtmp:
  # Optional: Enable the RTMP stream (default: True)
  enabled: True

# Optional: Live stream configuration for WebUI
# NOTE: Can be overridden at the camera level
live:
  # Optional: Set the height of the live stream. (default: 720)
  # This must be less than or equal to the height of the detect stream. Lower resolutions
  # reduce bandwidth required for viewing the live stream. Width is computed to match known aspect ratio.
  height: 720
  # Optional: Set the encode quality of the live stream (default: shown below)
  # 1 is the highest quality, and 31 is the lowest. Lower quality feeds utilize less CPU resources.
  quality: 8

# Optional: in-feed timestamp style configuration
# NOTE: Can be overridden at the camera level
timestamp_style:
  # Optional: Position of the timestamp (default: shown below)
  #           "tl" (top left), "tr" (top right), "bl" (bottom left), "br" (bottom right)
  position: "tl"
  # Optional: Format specifier conform to the Python package "datetime" (default: shown below)
  #           Additional Examples:
  #             german: "%d.%m.%Y %H:%M:%S"
  format: "%m/%d/%Y %H:%M:%S"
  # Optional: Color of font
  color:
    # All Required when color is specified (default: shown below)
    red: 255
    green: 255
    blue: 255
  # Optional: Line thickness of font (default: shown below)
  thickness: 2
  # Optional: Effect of lettering (default: shown below)
  #           None (No effect),
  #           "solid" (solid background in inverse color of font)
  #           "shadow" (shadow for font)
  effect: solid

# Required
cameras:
  # Required: name of the camera
  driveway:
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: A list of input streams for the camera. See documentation for more information.
      inputs:
        - path: rtsp://192.168.1.110:554/XXXXX
          roles:
            - detect
            - rtmp
        - path: rtsp://192.168.1.110:554/XXXXXX
          roles:
            - record
            # - clips
    best_image_timeout: 60

    # Optional: zones for this camera
    zones:
      # Required: name of the zone
      # NOTE: This must be different than any camera names, but can match with another zone on another
      #       camera.
      drivewayclose_in:
        coordinates: 150,720,761,720,830,280,424,212,29,517,0,547
      peoplezone:
        coordinates: 965,336,0,200,0,0,835,0,1024,0,1023,138,985,366
      Front_stones:
        coordinates: 414,216,32,534,0,544,0,172,60,150
        # Optional: List of objects that can trigger this zone (default: all tracked objects)
        # Optional: Zone level object filters.
        # NOTE: The global and camera filters are applied upstream.

  back_garden:
    # Required: ffmpeg settings for the camera
    ffmpeg:
      # Required: A list of input streams for the camera. See documentation for more information.
      inputs:
        - path: rtsp://192.168.1.112:554/XXXXXXX
          roles:
            - detect
            - rtmp
        - path: rtsp://192.168.1.112:554/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
          roles:
            - record
            # - clips
    best_image_timeout: 60

    # Optional: zones for this camera
    zones:
      # Required: name of the zone
      # NOTE: This must be different than any camera names, but can match with another zone on another
      #       camera.
      drivewayclose_in:
        coordinates: 150,720,761,720,830,280,424,212,29,517,0,547
      peoplezone:
        coordinates: 965,336,0,200,0,0,835,0,1024,0,1023,138,985,366
      Front_stones:
        coordinates: 414,216,32,534,0,544,0,172,60,150
        # Optional: List of objects that can trigger this zone (default: all tracked objects)
        # Optional: Zone level object filters.
        # NOTE: The global and camera filters are applied upstream.

environment_vars:
  LIBVA_DRIVER_NAME=radeonsi

NickM-27 commented 2 years ago

environment_vars is a map, so each entry needs to be a key: value pair:

environment_vars:
  LIBVA_DRIVER_NAME: radeonsi
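
For reference, here is a minimal sketch of how that block fits together with the VAAPI hwaccel settings already shown in the config above; the radeonsi driver name follows the AMD iGPU discussed in this thread, so adjust it if your hardware uses a different VA driver:

ffmpeg:
  hwaccel_args:
    - -hwaccel
    - vaapi
    - -hwaccel_device
    - /dev/dri/renderD128

environment_vars:
  LIBVA_DRIVER_NAME: radeonsi
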
malosaa commented 2 years ago

environment_vars is a map, so each entry needs to be a key: value pair

environment_vars:
  LIBVA_DRIVER_NAME: radeonsi

Getting the error again:

*************************************************************
***    Config Validation Errors                           ***
*************************************************************
1 validation error for FrigateConfig
environment_vars
  value is not a valid dict (type=type_error.dict)
Traceback (most recent call last):
  File "/opt/frigate/frigate/app.py", line 312, in start
    self.init_config()
  File "/opt/frigate/frigate/app.py", line 77, in init_config
    user_config = FrigateConfig.parse_file(config_file)
  File "/opt/frigate/frigate/config.py", line 904, in parse_file
    return cls.parse_obj(config)
  File "pydantic/main.py", line 511, in pydantic.main.BaseModel.parse_obj
  File "pydantic/main.py", line 331, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for FrigateConfig
environment_vars
  value is not a valid dict (type=type_error.dict)
*************************************************************
***    End Config Validation Errors                       ***
*************************************************************
[cmd] python3 exited 1
[cont-finish.d] executing container finish scripts...
[cont-finish.d] done.
[s6-finish] waiting for services.
NickM-27 commented 2 years ago

I updated my comment a couple times, did you enter it as I have it now?

malosaa commented 2 years ago

Yep, that did fix the error, but I get green screens now.

[2022-08-30 15:51:02] frigate.video ERROR : back_garden: Unable to read frames from ffmpeg process.

NickM-27 commented 2 years ago

Yep, that did fix the error, but I get green screens now.

Okay, please paste the entire section of logs

malosaa commented 2 years ago
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:02] frigate.video                  ERROR   : back_garden: ffmpeg process is not running. exiting capture thread...
[2022-08-30 15:51:03] frigate.video                  ERROR   : driveway: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:03] frigate.video                  ERROR   : driveway: ffmpeg process is not running. exiting capture thread...
[2022-08-30 15:51:10] watchdog.driveway              ERROR   : Ffmpeg process crashed unexpectedly for driveway.
[2022-08-30 15:51:10] watchdog.driveway              ERROR   : The following ffmpeg logs include the last 100 lines prior to exit.
[2022-08-30 15:51:10] ffmpeg.driveway.detect         ERROR   : Guessed Channel Layout for Input Stream #0.1 : mono
[2022-08-30 15:51:10] ffmpeg.driveway.detect         ERROR   : [AVHWDeviceContext @ 0x55c8dd6c44c0] No VA display found for device /dev/dri/renderD128.
[2022-08-30 15:51:10] ffmpeg.driveway.detect         ERROR   : Device creation failed: -22.
[2022-08-30 15:51:10] ffmpeg.driveway.detect         ERROR   : [h264 @ 0x55c8dd6c7a80] No device available for decoder: device type vaapi needed for codec h264.
[2022-08-30 15:51:10] ffmpeg.driveway.detect         ERROR   : Device setup failed for decoder on input stream #0:0 : Invalid argument
[2022-08-30 15:51:11] watchdog.back_garden           ERROR   : Ffmpeg process crashed unexpectedly for back_garden.
[2022-08-30 15:51:11] watchdog.back_garden           ERROR   : The following ffmpeg logs include the last 100 lines prior to exit.
[2022-08-30 15:51:11] ffmpeg.back_garden.detect      ERROR   : Guessed Channel Layout for Input Stream #0.1 : mono
[2022-08-30 15:51:11] ffmpeg.back_garden.detect      ERROR   : [AVHWDeviceContext @ 0x564aafac0680] No VA display found for device /dev/dri/renderD128.
[2022-08-30 15:51:11] ffmpeg.back_garden.detect      ERROR   : Device creation failed: -22.
[2022-08-30 15:51:11] ffmpeg.back_garden.detect      ERROR   : [h264 @ 0x564aafaba600] No device available for decoder: device type vaapi needed for codec h264.
[2022-08-30 15:51:11] ffmpeg.back_garden.detect      ERROR   : Device setup failed for decoder on input stream #0:0 : Invalid argument
[2022-08-30 15:51:12] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:12] frigate.video                  ERROR   : back_garden: ffmpeg process is not running. exiting capture thread...
[2022-08-30 15:51:13] frigate.video                  ERROR   : driveway: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:13] frigate.video                  ERROR   : driveway: ffmpeg process is not running. exiting capture thread...
[2022-08-30 15:51:14] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:45880]
[2022-08-30 15:51:21] watchdog.driveway              ERROR   : Ffmpeg process crashed unexpectedly for driveway.
[2022-08-30 15:51:21] watchdog.driveway              ERROR   : The following ffmpeg logs include the last 100 lines prior to exit.
[2022-08-30 15:51:21] ffmpeg.driveway.detect         ERROR   : Guessed Channel Layout for Input Stream #0.1 : mono
[2022-08-30 15:51:21] ffmpeg.driveway.detect         ERROR   : [AVHWDeviceContext @ 0x55d890fafe40] No VA display found for device /dev/dri/renderD128.
[2022-08-30 15:51:21] ffmpeg.driveway.detect         ERROR   : Device creation failed: -22.
[2022-08-30 15:51:21] ffmpeg.driveway.detect         ERROR   : [h264 @ 0x55d890e136c0] No device available for decoder: device type vaapi needed for codec h264.
[2022-08-30 15:51:21] ffmpeg.driveway.detect         ERROR   : Device setup failed for decoder on input stream #0:0 : Invalid argument
[2022-08-30 15:51:21] watchdog.back_garden           ERROR   : Ffmpeg process crashed unexpectedly for back_garden.
[2022-08-30 15:51:21] watchdog.back_garden           ERROR   : The following ffmpeg logs include the last 100 lines prior to exit.
[2022-08-30 15:51:21] ffmpeg.back_garden.detect      ERROR   : Guessed Channel Layout for Input Stream #0.1 : mono
[2022-08-30 15:51:21] ffmpeg.back_garden.detect      ERROR   : [AVHWDeviceContext @ 0x55f94f868680] No VA display found for device /dev/dri/renderD128.
[2022-08-30 15:51:21] ffmpeg.back_garden.detect      ERROR   : Device creation failed: -22.
[2022-08-30 15:51:21] ffmpeg.back_garden.detect      ERROR   : [h264 @ 0x55f94f860240] No device available for decoder: device type vaapi needed for codec h264.
[2022-08-30 15:51:21] ffmpeg.back_garden.detect      ERROR   : Device setup failed for decoder on input stream #0:0 : Invalid argument
[2022-08-30 15:51:22] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:22] frigate.video                  ERROR   : back_garden: ffmpeg process is not running. exiting capture thread...
[2022-08-30 15:51:23] frigate.video                  ERROR   : driveway: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:23] frigate.video                  ERROR   : driveway: ffmpeg process is not running. exiting capture thread...
[2022-08-30 15:51:31] watchdog.driveway              ERROR   : Ffmpeg process crashed unexpectedly for driveway.
[2022-08-30 15:51:31] watchdog.driveway              ERROR   : The following ffmpeg logs include the last 100 lines prior to exit.
[2022-08-30 15:51:31] ffmpeg.driveway.detect         ERROR   : Guessed Channel Layout for Input Stream #0.1 : mono
[2022-08-30 15:51:31] ffmpeg.driveway.detect         ERROR   : [AVHWDeviceContext @ 0x561b1e7cde40] No VA display found for device /dev/dri/renderD128.
[2022-08-30 15:51:31] ffmpeg.driveway.detect         ERROR   : Device creation failed: -22.
[2022-08-30 15:51:31] ffmpeg.driveway.detect         ERROR   : [h264 @ 0x561b1e7b1a80] No device available for decoder: device type vaapi needed for codec h264.
[2022-08-30 15:51:31] ffmpeg.driveway.detect         ERROR   : Device setup failed for decoder on input stream #0:0 : Invalid argument
[2022-08-30 15:51:31] watchdog.back_garden           ERROR   : Ffmpeg process crashed unexpectedly for back_garden.
[2022-08-30 15:51:31] watchdog.back_garden           ERROR   : The following ffmpeg logs include the last 100 lines prior to exit.
[2022-08-30 15:51:31] ffmpeg.back_garden.detect      ERROR   : Guessed Channel Layout for Input Stream #0.1 : mono
[2022-08-30 15:51:31] ffmpeg.back_garden.detect      ERROR   : [AVHWDeviceContext @ 0x563849707e40] No VA display found for device /dev/dri/renderD128.
[2022-08-30 15:51:31] ffmpeg.back_garden.detect      ERROR   : Device creation failed: -22.
[2022-08-30 15:51:31] ffmpeg.back_garden.detect      ERROR   : [h264 @ 0x5638496ea600] No device available for decoder: device type vaapi needed for codec h264.
[2022-08-30 15:51:31] ffmpeg.back_garden.detect      ERROR   : Device setup failed for decoder on input stream #0:0 : Invalid argument
[2022-08-30 15:51:32] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:32] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:32] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:32] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:32] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:32] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:32] frigate.video                  ERROR   : back_garden: ffmpeg process is not running. exiting capture thread...
[2022-08-30 15:51:33] frigate.video                  ERROR   : driveway: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:33] frigate.video                  ERROR   : driveway: ffmpeg process is not running. exiting capture thread...
[2022-08-30 15:51:40] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:45880]
[2022-08-30 15:51:41] watchdog.driveway              ERROR   : Ffmpeg process crashed unexpectedly for driveway.
[2022-08-30 15:51:41] watchdog.driveway              ERROR   : The following ffmpeg logs include the last 100 lines prior to exit.
[2022-08-30 15:51:41] ffmpeg.driveway.detect         ERROR   : Guessed Channel Layout for Input Stream #0.1 : mono
[2022-08-30 15:51:41] ffmpeg.driveway.detect         ERROR   : [AVHWDeviceContext @ 0x56159b1e7100] No VA display found for device /dev/dri/renderD128.
[2022-08-30 15:51:41] ffmpeg.driveway.detect         ERROR   : Device creation failed: -22.
[2022-08-30 15:51:41] ffmpeg.driveway.detect         ERROR   : [h264 @ 0x56159b041bc0] No device available for decoder: device type vaapi needed for codec h264.
[2022-08-30 15:51:41] ffmpeg.driveway.detect         ERROR   : Device setup failed for decoder on input stream #0:0 : Invalid argument
[2022-08-30 15:51:41] watchdog.back_garden           ERROR   : Ffmpeg process crashed unexpectedly for back_garden.
[2022-08-30 15:51:41] watchdog.back_garden           ERROR   : The following ffmpeg logs include the last 100 lines prior to exit.
[2022-08-30 15:51:41] ffmpeg.back_garden.detect      ERROR   : Guessed Channel Layout for Input Stream #0.1 : mono
[2022-08-30 15:51:41] ffmpeg.back_garden.detect      ERROR   : [AVHWDeviceContext @ 0x55664262bd40] No VA display found for device /dev/dri/renderD128.
[2022-08-30 15:51:41] ffmpeg.back_garden.detect      ERROR   : Device creation failed: -22.
[2022-08-30 15:51:41] ffmpeg.back_garden.detect      ERROR   : [h264 @ 0x5566424f8600] No device available for decoder: device type vaapi needed for codec h264.
[2022-08-30 15:51:41] ffmpeg.back_garden.detect      ERROR   : Device setup failed for decoder on input stream #0:0 : Invalid argument
[2022-08-30 15:51:42] frigate.video                  ERROR   : back_garden: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:42] frigate.video                  ERROR   : back_garden: ffmpeg process is not running. exiting capture thread...
[2022-08-30 15:51:43] frigate.video                  ERROR   : driveway: Unable to read frames from ffmpeg process.
[2022-08-30 15:51:43] frigate.video                  ERROR   : driveway: ffmpeg process is not running. exiting capture thread...
[2022-08-30 15:51:45] ws4py                          INFO    : Managing websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:53072]
[2022-08-30 15:51:45] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:8082 | Remote => 127.0.0.1:53072]
[2022-08-30 15:51:48] ws4py                          INFO    : Terminating websocket [Local => 127.0.0.1:5002 | Remote => 127.0.0.1:35678]
NickM-27 commented 2 years ago

Alright, thanks, this does confirm that it can't see the iGPU.

Upon further looking, the passthrough is not supported, which is unfortunate: https://linustechtips.com/topic/1349409-virtualbox-gpu-passthrough/

At this point I don't think your setup will support hwaccel unfortunately :/
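
If hwaccel can't be used, a minimal fallback sketch (matching the first config posted above) is to leave hwaccel_args empty so ffmpeg decodes on the CPU; the environment_vars section can then be dropped, since LIBVA_DRIVER_NAME is only needed for VAAPI:

ffmpeg:
  # no hardware acceleration; decoding falls back to the CPU
  hwaccel_args: []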

malosaa commented 2 years ago

Alright, thanks, this does confirm that it can't see the iGPU.

Upon further looking, the passthrough is not supported, which is unfortunate: https://linustechtips.com/topic/1349409-virtualbox-gpu-passthrough/

At this point I don't think your setup will support hwaccel unfortunately :/

Ah, that's sad, so I can't lower my CPU usage. Then I will need to go back to Blue Iris.

I thought the Coral TPU would reduce the CPU usage?

Regards

NickM-27 commented 2 years ago

I thought the Coral TPU would reduce the CPU usage?

Regards

The Coral TPU greatly reduces the CPU used for running object detection. The docs explain that it doesn't help decode video streams, which is why hwaccel is 100% recommended along with a TPU.

Also, the docs are clear that Frigate is very much encouraged to be run outside of the HassOS addon, in Docker on a Linux install, due to issues like this.
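
For illustration, a minimal docker-compose sketch in the spirit of the Frigate docs, assuming a native Linux host with a USB Coral and a VAAPI-capable GPU; the image tag, host paths, and shm size here are assumptions rather than values taken from this thread:

version: "3.9"
services:
  frigate:
    image: blakeblackshear/frigate:stable
    restart: unless-stopped
    shm_size: "256mb"
    devices:
      - /dev/bus/usb:/dev/bus/usb    # USB Coral TPU
      - /dev/dri/renderD128          # VAAPI hardware decoding
    volumes:
      - ./config.yml:/config/config.yml:ro
      - ./media:/media/frigate
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "5000:5000"

With a setup like this, docker-compose up -d starts Frigate and the web UI is reachable on port 5000.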

malosaa commented 2 years ago

I thought the Coral TPU would reduce the CPU usage? Regards

The Coral TPU greatly reduces the CPU used for running object detection. The docs explain that it doesn't help decode video streams, which is why hwaccel is 100% recommended along with a TPU.

Also, the docs are clear that Frigate is very much encouraged to be run outside of the HassOS addon, in Docker on a Linux install, due to issues like this.

I had run Frigate in Docker, but sadly I can't use the passthrough on Windows 11 while using VirtualBox.

Anyway, many thanks for all the help. I will probably sell my TPU, as I just bought it yesterday.

NickM-27 commented 2 years ago

I'll go ahead and close this issue, as there is nothing for us to fix since it is an outside limitation.