blakeblackshear / frigate

NVR with realtime local object detection for IP cameras
https://frigate.video
MIT License
18.88k stars 1.72k forks

[FR] support for nVidia accelerated detection (CUDA/TensorRT) #659

Closed speedst3r closed 1 year ago

speedst3r commented 3 years ago

TensorRT 7.2.2 (released December 2020) supports Python 3.8.

As per previous comments, this was the blocker to integrating frigate with TensorRT. As it is now supported, could we get an image that uses NVDEC and TensorRT?

blakeblackshear commented 3 years ago

Finally. I will look into it again.

speedst3r commented 3 years ago

Great, thanks for the quick response.

Release notes for reference: https://docs.nvidia.com/deeplearning/tensorrt/release-notes/tensorrt-7.html#rel_7-2-2

ril3y commented 3 years ago

Does this apply to the blocking issue for the Jetson's NVIDIA hardware too?


MEntOMANdo commented 3 years ago

Would be great if we had cuDNN support!

jaburges commented 3 years ago

Not sure if this helps, but this is a script from Zoneminder that builds OpenCV with CUDA support and installs cuDNN and CUDA (the user has to download the runtimes themselves because of the NVIDIA license, blah blah blah):


#!/bin/bash
#
#
# Script to compile opencv with CUDA support.
#
#############################################################################################################################
#
# You need to prepare for compiling the opencv with CUDA support.
#
# You need to start with a clean docker image if you are going to recompile opencv.
# This can be done by switching to "Advanced View" and clicking "Force Update", 
# or remove the Docker image then reinstall it.
# Hook processing has to be enabled to run this script.
#
# Install the Unraid Nvidia plugin and be sure your graphics card can be seen in the
# Zoneminder Docker.  This will also be checked as part of the compile process.
# You will not get a working compile if your graphics card is not seen.  It may appear
# to compile properly but will not work.
#
# The GPU architectures supported with cuda version 10.2 are all >= 3.0.
#
# Download the cuDNN run time and dev packages for your GPU configuration.  You want the deb packages for Ubuntu 18.04.
# You will need to have an account with Nvidia to download these packages.
# https://developer.nvidia.com/rdp/form/cudnn-download-survey
# Place them in the /config/opencv/ folder.
#
CUDNN_RUN=libcudnn7_7.6.5.32-1+cuda10.2_amd64.deb
CUDNN_DEV=libcudnn7-dev_7.6.5.32-1+cuda10.2_amd64.deb
#
# Download the cuda tools package.  Unraid uses 10.2.  You want the deb package for Ubuntu 18.04.
# https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804&target_type=deblocal
# Place the download in the /config/opencv/ folder.
#
CUDA_TOOL=cuda-repo-ubuntu1804-10-2-local-10.2.89-440.33.01_1.0-1_amd64.deb
CUDA_PIN=cuda-ubuntu1804.pin
CUDA_KEY=/var/cuda-repo-10-2-local-10.2.89-440.33.01/7fa2af80.pub
CUDA_VER=10.2
#
#
# Github URL for opencv zip file download.
# Current default is to pull the version 4.2.0 release.
#   Note: You shouldn't need to change these.
#
OPENCV_URL=https://github.com/opencv/opencv/archive/282fcb90dce76a55dc5f31246355fce2761a9eff.zip
OPENCV_CONTRIB_URL=https://github.com/opencv/opencv_contrib/archive/4.2.0.zip
#
# You can run this script in a quiet mode so it will run without any user interaction.
#
# Once you are satisfied that the compile is working, run the following command:
#   echo "yes" > opencv_ok
# 
# The opencv.sh script will run when the Docker is updated so you won't have to do it manually.
#
#############################################################################################################################

QUIET_MODE=$1
if [[ $QUIET_MODE == 'quiet' ]]; then
    QUIET_MODE='yes'
    echo "Running in quiet mode."
    sleep 10
else
    QUIET_MODE='no'
fi

#
# Display warning.
#
if [ $QUIET_MODE != 'yes' ];then
    echo "##################################################################################"
    echo
    echo "This script will compile 'opencv' with GPU support."
    echo
    echo "WARNING:"
    echo "The compile process needs 15GB of disk (Docker image) free space, at least 4GB of"
    echo "memory, and will generate a huge Zoneminder Docker that is 10GB in size!  The apt"
    echo "update will be disabled so you won't get Linux updates.  Zoneminder will no"
    echo "longer update.  In order to get updates you will have to force update, or remove"
    echo "and re-install the Zoneminder Docker and then re-compile 'opencv'."
    echo
    echo "There are several stopping points to give you a chance to see if the process is"
    echo "progressing without errors."
    echo
    echo "The compile script can take an hour or more to complete!"
    echo "Press any key to continue, or ctrl-C to stop."
    echo
    echo "##################################################################################"
    read -n 1 -s
fi

#
# Remove log files.
#
rm -f /config/opencv/*.log

#
# Be sure we have enough disk space to compile opencv.
#
SPACE_AVAIL=`/bin/df / | /usr/bin/awk '{print $4}' | grep -v 'Available'`
if [[ $((SPACE_AVAIL/1000)) -lt 15360 ]];then
    if [ $QUIET_MODE != 'yes' ];then
        echo
        echo "Not enough disk space to compile opencv!"
        echo "Expand your Docker image to leave 15GB of free space."
        echo "Force update or remove and re-install Zoneminder to allow more space if your compile did not complete."
    fi
    logger "Not enough disk space to compile opencv!" -tEventServer
    exit
fi

#
# Check for enough memory to compile opencv.
#
MEM_AVAILABLE=`cat /proc/meminfo | grep MemAvailable | /usr/bin/awk '{print $2}'`
if [[ $((MEM_AVAILABLE/1000)) -lt 4096 ]];then
    if [ $QUIET_MODE != 'yes' ];then
        echo
        echo "Not enough memory available to compile opencv!"
        echo "You should have at least 4GB available."
        echo "Check that you have not over committed SHM."
        echo "You can also stop Zoneminder to free up memory while you compile."
        echo "  service zoneminder stop"
    fi
    logger "Not enough memory available to compile opencv!" -tEventServer
    exit
fi

#
# Ensure hook processing has been installed.
#
if [ "$INSTALL_HOOK" != "1" ]; then
    echo "Hook processing has to be installed before you can compile opencv!"
    exit
fi

#
# Remove hook installed opencv module and face-recognition module
#
pip3 uninstall -y opencv-contrib-python
if [ "$INSTALL_FACE" == "1" ]; then
    pip3 uninstall -y face-recognition
fi

logger "Compiling opencv with GPU Support" -tEventServer

#
# Install cuda toolkit
#
logger "Installing cuda toolkit..." -tEventServer
cd ~
if [ -f  /config/opencv/$CUDA_PIN ]; then
    cp /config/opencv/$CUDA_PIN /etc/apt/preferences.d/cuda-repository-pin-600
else
    echo "Please download CUDA_PIN."
    logger "CUDA_PIN not downloaded!" -tEventServer
    exit
fi

if [ -f /config/opencv/$CUDA_TOOL ];then
    dpkg -i /config/opencv/$CUDA_TOOL
else
    echo "Please download CUDA_TOOL package."
    logger "CUDA_TOOL package not downloaded!" -tEventServer
    exit
fi

apt-key add $CUDA_KEY >/dev/null
apt-get update
apt-get -y upgrade -o Dpkg::Options::="--force-confold"
apt-get -y install cuda-toolkit-$CUDA_VER

echo "export PATH=/usr/local/cuda/bin:$PATH" >/etc/profile.d/cuda.sh
echo "export LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/lib:$LD_LIBRARY_PATH" >> /etc/profile.d/cuda.sh
echo "export CUDADIR=/usr/local/cuda" >> /etc/profile.d/cuda.sh
echo "export CUDA_HOME=/usr/local/cuda" >> /etc/profile.d/cuda.sh
echo "/usr/local/cuda/lib64" > /etc/ld.so.conf.d/cuda.conf
ldconfig

#
# check for expected install location
#
CUDADIR=/usr/local/cuda-$CUDA_VER
if [ ! -d "$CUDADIR" ]; then
    echo "Failed to install cuda toolkit!"
    logger "Failed to install cuda toolkit!" -tEventServer
    exit
elif [ ! -L "/usr/local/cuda" ]; then
    ln -s $CUDADIR /usr/local/cuda
fi

logger "Cuda toolkit installed" -tEventServer

#
# Ask user to check that the GPU is seen.
#
if [ -x /usr/bin/nvidia-smi ]; then
    /usr/bin/nvidia-smi >/config/opencv/nvidia-smi.log
    if [ $QUIET_MODE != 'yes' ];then
            echo "##################################################################################"
            echo
            cat /config/opencv/nvidia-smi.log
            echo "##################################################################################"
            echo "Verify your Nvidia GPU is seen and the driver is loaded."
            echo "If not, stop the script and fix the problem."
            echo "Press any key to continue, or ctrl-C to stop."
            read -n 1 -s
    fi
else
    echo "'nvidia-smi' not found!  Check that the Nvidia drivers are installed."
    logger "'nvidia-smi' not found!  Check that the Nvidia drivers are installed." -tEventServer
fi
#
# Install cuDNN run time and dev packages
#
logger "Installing cuDNN Package..." -tEventServer
#
if [ -f /config/opencv/$CUDNN_RUN ];then
    dpkg -i /config/opencv/$CUDNN_RUN
else
    echo "Please download CUDNN_RUN package."
    logger "CUDNN_RUN package not downloaded!" -tEventServer
    exit
fi
if [ -f /config/opencv/$CUDNN_DEV ];then
    dpkg -i /config/opencv/$CUDNN_DEV
else
    echo "Please download CUDNN_DEV package."
    logger "CUDNN_DEV package not downloaded!" -tEventServer
    exit
fi
logger "cuDNN Package installed" -tEventServer

#
# Compile opencv with cuda support
#
logger "Installing cuda support packages..." -tEventServer
apt-get -y install libjpeg-dev libpng-dev libtiff-dev libavcodec-dev libavformat-dev libswscale-dev
apt-get -y install libv4l-dev libxvidcore-dev libx264-dev libgtk-3-dev libatlas-base-dev gfortran
logger "Cuda support packages installed" -tEventServer

#
# Get opencv source
#
logger "Downloading opencv source..." -tEventServer
wget -q -O opencv.zip $OPENCV_URL
wget -q -O opencv_contrib.zip $OPENCV_CONTRIB_URL
unzip opencv.zip
unzip opencv_contrib.zip
mv $(ls -d opencv-*) opencv
mv opencv_contrib-4.2.0 opencv_contrib
rm *.zip

cd ~/opencv
mkdir build
cd build
logger "Opencv source downloaded" -tEventServer

#
# Make opencv
#
logger "Compiling opencv..." -tEventServer

#
# Have user confirm that cuda and cudnn are enabled by the cmake.
#
cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D INSTALL_PYTHON_EXAMPLES=OFF \
    -D INSTALL_C_EXAMPLES=OFF \
    -D OPENCV_ENABLE_NONFREE=ON \
    -D WITH_CUDA=ON \
    -D WITH_CUDNN=ON \
    -D OPENCV_DNN_CUDA=ON \
    -D ENABLE_FAST_MATH=1 \
    -D CUDA_FAST_MATH=1 \
    -D WITH_CUBLAS=1 \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
    -D HAVE_opencv_python3=ON \
    -D PYTHON_EXECUTABLE=/usr/bin/python3 \
    -D PYTHON2_EXECUTABLE=/usr/bin/python2 \
    -D BUILD_EXAMPLES=OFF .. >/config/opencv/cmake.log

if [ $QUIET_MODE != 'yes' ];then
    echo "######################################################################################"
    echo
    cat /config/opencv/cmake.log
    echo
    echo "######################################################################################"
    echo "Verify that CUDA and cuDNN are both enabled in the cmake output above."
    echo "Look for the lines with CUDA and cuDNN." 
    echo "You may have to scroll up the page to see them."
    echo "If those lines don't show 'YES', then stop the script and fix the problem."
    echo "Check that you have the correct versions of CUDA ond cuDNN for your GPU."
    echo "Press any key to continue, or ctrl-C to stop."
    read -n 1 -s
fi

make -j$(nproc)

logger "Installing opencv..." -tEventServer
make install
ldconfig

#
# Now reinstall face-recognition package to ensure it detects GPU.
#
if [ "$INSTALL_FACE" == "1" ]; then
    pip3 install face-recognition
fi

#
# Clean up/remove unnecessary packages
#
logger "Cleaning up..." -tEventServer

cd ~
rm -r opencv*
rm /etc/my_init.d/20_apt_update.sh

logger "Opencv compile completed" -tEventServer

if [ $QUIET_MODE != 'yes' ];then
    echo "Compile is complete."
    echo "Now check that the cv2 module in python is working."
    echo "Execute the following commands:"
    echo "  python3"
    echo "  import cv2"
    echo "  Ctrl-D to exit"
    echo
    echo "Verify that the import does not show errors."
    echo "If you don't see any errors, then you have successfully compiled opencv."
    echo
    echo "Once you are satisfied that the compile is working, run the following"
    echo "command:"
    echo "  echo "yes" > opencv_ok"
    echo
    echo "The opencv.sh script will run when the Docker is updated so you won't"
    echo "have to do it manually."
fi
MEntOMANdo commented 3 years ago

Yes, I have cuDNN + OpenCV/CUDA running on another machine, and it works well. It's going to be a challenge to package that up as a Docker container, though, because when building OpenCV from source as shown above you need to provide the graphics-card-specific architecture number (-D CUDA_ARCH_BIN=x.y). I suppose you could script it; the user would otherwise need to look up their number first on the nVidia site. In addition, there's a conflict between the proprietary drivers and the default nouveau driver that has to be dealt with as well. Not sure how that would work within Docker.
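If you wanted to script the lookup, something like this might work (an untested sketch: the compute_cap query field only exists in newer nvidia-smi releases, so older drivers would need a lookup table keyed on the GPU name instead):

#!/bin/bash
# Hypothetical helper: detect GPU 0's compute capability so the user
# doesn't have to look it up on the NVIDIA site before building OpenCV.
ARCH=$(nvidia-smi --query-gpu=compute_cap --format=csv,noheader -i 0)
if [ -z "$ARCH" ]; then
    echo "Could not detect the compute capability; look it up at"
    echo "https://developer.nvidia.com/cuda-gpus and set ARCH by hand."
    exit 1
fi
echo "Building OpenCV with CUDA_ARCH_BIN=$ARCH"
cmake -D WITH_CUDA=ON \
      -D WITH_CUDNN=ON \
      -D OPENCV_DNN_CUDA=ON \
      -D CUDA_ARCH_BIN="$ARCH" \
      .. # plus the rest of the flags from the script above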

stale[bot] commented 3 years ago

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

blakeblackshear commented 3 years ago

Not stale

ril3y commented 3 years ago

Thanks Blake! A few of us lurkers are still here!


NicolaiOksen commented 3 years ago

Really hoping for a solution to be able to use an Nvidia GPU instead of a Coral. Fingers crossed that you'll succeed, Blake :)

guix77 commented 3 years ago

If I understand correctly, we need TensorRT, and TensorRT needs cuDNN.

To elaborate on https://github.com/blakeblackshear/frigate/issues/659#issuecomment-776409336, a few days ago I tested this approach on https://github.com/dlandon/zoneminder.machine.learning, which has almost the same OpenCV build script: https://github.com/dlandon/zoneminder.machine.learning/blob/master/zmeventnotification/opencv.sh. It works, but if you ever remove your container, or have another reason to create a new container, then you have to do everything again, because the cuDNN installation and the OpenCV compilation are done in the container, not in the image build.

It's not clear to me whether OpenCV having cuDNN support would benefit this project. If so, we could build our own image, using blakeblackshear/frigate:stable-amd64nvidia as a base image, with our own nVidia files adapted to our GPU, and provide the files downloaded from nVidia ourselves.

Something like:

./docker-compose.yml:

services:
  frigate:
    build: ./frigate/images/frigate
    container_name: frigate
    ...
    deploy:
      resources:
        reservations:
          devices:
          - capabilities: [compute, gpu, utility, video] # for ffmpeg + opencv
    ...
    volumes:
      - ./frigate/images/provisioning/libcudnn.deb:/provisioning/libcudnn.deb
      ...

./frigate/images/frigate/Dockerfile:

FROM blakeblackshear/frigate:stable-amd64nvidia
...
COPY provisioning/ /tmp
RUN dpkg -i /tmp/libcudnn.deb # and the rest of the cuDNN installation, and then compile OpenCV if necessary
...

./frigate/images/frigate/provisioning/libcudnn.deb: actually my particular libcudnn8_8.0.5.39-1+cuda11.1_amd64.deb, but renamed.

More or less the same logic could be applied to the TensorRT installation...

Wish it was easier!

guix77 commented 3 years ago

It seems that there could be a much easier way; look at https://github.com/DeepQuestAI/DeepStack-Base/blob/master/cuda/Dockerfile. Basically, the .deb files for cuDNN are in fact publicly available! For Ubuntu 20.04: https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/

Also, there is https://hub.docker.com/r/nvidia/cuda that could be used, like Doods does.
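Something along these lines, maybe (a sketch only; which tag to pick from https://hub.docker.com/r/nvidia/cuda depends on your driver, and I haven't verified that Frigate builds on top of it):

# Sketch: the nvidia/cuda images with a -cudnn8 tag already ship cuDNN,
# so no manual .deb downloads from the NVIDIA developer portal are needed.
FROM nvidia/cuda:11.1.1-cudnn8-runtime-ubuntu20.04

# ...then install ffmpeg, Python, and the Frigate pieces on top,
# similar to what the existing amd64nvidia image does.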

gurkburk76 commented 3 years ago

Is this alive? :) I'd like to replace my Coral, which seems to drop off the bus more than I'd like, with a GPU :)

ozett commented 3 years ago

The Jetson Nano heavily uses GStreamer instead of ffmpeg, but AI should be perfect on the Jetson Nano... and on NVIDIA cards. The Nvidia container is already done... "only" TensorRT is missing, I guess.

pokemane commented 3 years ago

Now that Corals are almost impossible to get in the US and elsewhere (with either gigantic lead times or just straight up "out of stock"), and since TensorRT is up to Python 3.8, is this back on the table at all? #145 seemed to almost get across the finish line if not for the version issues.

ozett commented 3 years ago

Thanks for that #145 link.

It's a great source for more links on the Jetson Nano, CUDA, TensorRT, and approaches to "frigate" all of this:

DOODS: https://github.com/snowzach/doods
WATSOR: https://github.com/asmirnou/watsor
AI-PERSON-DETECTOR: https://github.com/wb666greene/AI-Person-Detector

> Now that Corals are almost impossible to get in the US and elsewhere

Mouser says the M.2 is going out of production (EOL). Maybe it's time to also try beefier models with full TensorFlow/TensorRT on the Jetson Nano, or to empower ffmpeg with CUDA decoding?


jasonmhite commented 3 years ago

@ozett Didn't they just release the dual M.2 one not long ago? I doubt they're taking the regular M.2 one out of production, maybe just up for a refresh.

ozett commented 3 years ago

@jasonmhite The Mouser page is linked from the Google Coral page when you click the buy button. I was wondering about the EOL information, but it seems reasonable when you see that the M.2 is mostly out of stock. Let's wait and see what comes.

strarsis commented 2 years ago

So TensorFlow, which Frigate uses, can already use the NVIDIA GPU in my PC? So the CPU is only used by Frigate for all the non-video/non-AI stuff? I want to load the CPU as little as possible and use the powerful GPU instead.

Ignorant bonus question here: NVIDIA Jetson embedded GPU solutions can now also be used by TensorFlow? If that is the case, why can't Frigate just be used on these? Or is there more to it than changing some TensorFlow libraries/config to let it use the GPU/CUDA?

And what about OpenCL and Vulkan? And OpenMP?

ozett commented 2 years ago

I would love to use an Nvidia GPU for TensorFlow and other models, but it's not implemented yet.



I would also love to test Jetson Nano performance for RTSP decoding and TensorRT, but besides heavy use of GStreamer and some ffmpeg optimizations it is not fully supported yet: https://github.com/blakeblackshear/frigate/issues/1175#issuecomment-944991978. There is also an overall comparison between the Coral and the Nano: https://github.com/blakeblackshear/frigate/issues/2179#issuecomment-964581717

It would be great if someone experienced enough jumps in to help here 👍

slackr31337 commented 2 years ago

This project, Watsor, might be good to check out: https://github.com/asmirnou/watsor

It supports Nvidia GPUs (via TensorRT) and Coral devices for object detection:

https://github.com/asmirnou/watsor/blob/master/watsor/detection/tensorrt_gpu.py https://github.com/asmirnou/watsor/blob/master/watsor/detection/edge_tpu.py

The object detection would have to be reworked into a more general class to support different devices, as sketched below.
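For illustration, a minimal sketch of what such a device-agnostic interface could look like (all names here are hypothetical, not Frigate's or Watsor's actual API):

from abc import ABC, abstractmethod

import numpy as np


class ObjectDetector(ABC):
    """Common interface so the detection loop doesn't care what hardware runs inference."""

    @abstractmethod
    def detect_raw(self, tensor_input: np.ndarray) -> np.ndarray:
        """Run inference and return an (N, 6) array of [label_id, score, y1, x1, y2, x2]."""


class EdgeTpuDetector(ObjectDetector):
    def __init__(self, model_path: str):
        self.model_path = model_path  # would load the compiled TFLite model onto the Coral

    def detect_raw(self, tensor_input: np.ndarray) -> np.ndarray:
        raise NotImplementedError  # invoke the TPU interpreter and normalize its output


class TensorRtDetector(ObjectDetector):
    def __init__(self, engine_path: str):
        self.engine_path = engine_path  # would deserialize the engine and allocate CUDA buffers

    def detect_raw(self, tensor_input: np.ndarray) -> np.ndarray:
        raise NotImplementedError  # copy input to the GPU, execute, copy detections back


# The detector then becomes a config option instead of a hard-coded import:
DETECTORS = {"edgetpu": EdgeTpuDetector, "tensorrt": TensorRtDetector}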

ozett commented 2 years ago

I will also look at Watsor. I tried Shinobi and Viseron today on the Nano, but in most projects ffmpeg has no hardware-decoding support for RTSP streams, and without that the rest is no fun.

DeepStream on the Nano looks great, but it also looks like it depends heavily on GStreamer and TensorRT.


blakeblackshear commented 2 years ago

I have seen watsor, and that's similar to the planned (eventually) approach.

LordNex commented 2 years ago

So should I hold off on trying to install Frigate on my Nano? I guess for now I could just use the Home Assistant add-on and pass person detection to the Nano for DeepStack facial recognition.

supernovae commented 2 years ago

Lead times on Coral are 1+ year now. Any update on GPU support?

blakeblackshear commented 2 years ago

There is an outstanding PR specific to the Nano that may make it possible sooner. I have heard of some users finding m.2 coral versions in stock recently on various sites.

LordNex commented 2 years ago

There is a guy on eBay and Amazon selling them for about $170. That's how I got mine, and it was here in a few days. It's the same guy in both places. If you need it I can find the Amazon link, but you shouldn't have any trouble finding it.

I'm running Frigate on an RPi4 8GB (64-bit aarch64), then running DeepStack on a Nano and CompreFace and Double Take on my Home Assistant cluster. It works fairly well, but I still run into encoding and decoding issues. I'm in the process of separating the video traffic onto its own VLAN and then trunking that over to the main network through NAT. I'm hoping that will work better, especially once I VLAN off my IoT as well. And a suggestion if anyone's looking into a new firewall: the Firewalla Gold is just a beast and has already cleaned up my network a ton. It's a little pricey, but there's no monthly charge.

All in all, the design of the network, especially if you're using wireless cameras, really has a huge impact on the kind of performance you're going to get. Get a few 4K cameras and you can easily overload most consumer-grade network equipment.


toxuin commented 2 years ago

Coral is out of stock everywhere - any variation of it, except very expensive dev boards/SoMs that are basically less useful than a standalone TPU.

It's all gone.

Any supported alternative to Coral would be a great blessing.

$170 for a single accelerator does not scale well when you need even 2 of them - worse if more, and becomes prohibitive if you need multiple local installs in multiple places.

jmorris644 commented 2 years ago

We have been having success with a Jetson Nano.

discussion

LordNex commented 2 years ago

Sorry if I’m not understanding. But why would you need 2 TPUs? I have 1 USB running with Frigate and it’s handling multiple streams without barely touching it.

Also, there are some catered around eBay but yes they are expensive. The only option that looks interesting is the RockPI

https://www.ebay.com/itm/ROCK-PI-3A-2-4-8GB-SBC-Rockchip-RK3568-Single-Board-Computer-Support-Coral-TPU-/284609958572?mkcid=16&mkevt=1&_trksid=p2349624.m46890.l49286&mkrid=711-127632-2357-0

It has an integrated GPU, TPU, VPU, and NPU for running TensorFlow or other AI stacks.

My current setup is Frigate on an RPi4 8GB with a USB Coral doing the main feeds from the cameras and storing the footage on an OpenMediaVault NAS. I then have Frigate set to send its event triggers and image to an MQTT topic, with the image cropped and resized to the face. I then have Double Take as an add-on inside Home Assistant pick that up and send it over MQTT to DeepStack and CompreFace running on a Jetson Nano 4GB. If both detectors come back with a score over 70%, it sends actionable notifications to our phones with various buttons to unlock doors, turn on lights, or trigger the alarm. So far the setup is working very well. I'm just trying to get all these crappy Wyze cameras out and good PoE or 5GHz wireless cameras in.

But as you can see, there are some options out there, but none are cheap or easy, and this isn't something most people could set up. If I didn't have 25+ years working in IT I would have been lost. But keep at it and RTFM and you'll get there.


jmorris644 commented 2 years ago

Pardon my ignorance, but I was looking at the RockPI board you mentioned and I cannot tell from the description on eBay: does it HAVE a Coral TPU on board, or is it just READY for a Coral TPU?

BTW, your current setup is exactly what I plan to move to. With the exception of the Wyze cameras of course. :)

LordNex commented 2 years ago

From what I can tell, it has an onboard Coral TPU.


LordNex commented 2 years ago

“INTEGRATED GPU, VPU, NPU CO-PROCESSOR: Integrated dual-core architecture GPU, high-performance VPU and high-performance NPU. GPU supports OpenGL ES3.2/2.0/1.1, Vulkan 1.1; VPU can achieve 4K 60fps H.265/H.264/VP9 video decoding and 1080p 100fps H.265/H.264/VP9 video encoding; NPU supports one-click switching of mainstream architecture models such as Caffe/TensorFlow”

From the description.


jmorris644 commented 2 years ago

@LordNex Mind if I ask some questions about your setup?

  1. "storing the footage on an OpenMediaVault NAS." Are you doing this straight from the Frigate config file? Or sending it to MQTT and doing it there
  2. "I then have Frigate set to send its event triggers and image to a MQTT topic with a the image cropped and resized to the face." - Does Frigate do this? Or are you doing it post MQTT? Are you using node-red by any chance?
  3. "I then have DoubleTake as an add on inside Home Assistant pick that up and MQTT that to DeepStack and CompreFace running on a Jetson Nano 4gig." - I am currently not using HA, I am familiar with DeepStack but will ahve to do a little research on the other two.
  4. "If both detectors come back with a score over 70% accurate, it sends actionable notifications to our phone with various buttons to unlock doors, turn on lights, or trigger the alarm." I like the double-check process, IN my scenario I will send to the phone as well as have Alexa announce who is being seen.
  5. "So far the setup is working very well. Just trying to get all these crappy Wyze cameras out and good PoE or 5ghz wireless cameras." - I have one Wyze left. Unplugged in a box.

Thanks so much.

LordNex commented 2 years ago

Sure, no problem. The main thing that pulls all these pieces together is Double Take:

https://github.com/jakowenko/double-take

There are several setup videos, and I don't think you have to use Home Assistant, but if you're getting into anything home automation I highly suggest looking into Home Assistant. It's free, open source, and integrates with thousands of devices and pieces of software on the market. Here are a few how-tos on setting up facial recognition with Frigate and Double Take, and I'll try to answer your questions.

https://community.home-assistant.io/t/facial-recognition-room-presence-using-double-take-frigate/290943

https://everythingsmarthome.co.uk/howto/face-recognition-just-got-easier-home-assistant-double-take-guide/

https://m.youtube.com/watch?v=5OPOAusvo8I

  1. I have all the streams being monitored by Frigate and use its recording and clips feature to keep a rolling 7 days of continuous recording on the NAS. I also have it record specific clips when an event occurs, so I don't have to go through each hour of footage if I don't want to, yet it's there in case something gets missed.

  2. Again, this is a combo of Frigate, Double Take, and the detectors I have set up. Frigate waits until it sees movement and then checks whether it looks like a person; if it does, it puts a bounding box around the face, crops the image, and posts it to an MQTT topic. I then have DT monitor that topic, and when it gets a post there it takes that image and runs it against the images I have trained. If a detector gets a hit over a certain threshold, it triggers an automation I have built that figures out who was detected, where they were detected, and whether they just got home or have been home for a while, and then does various things based on the time of day, how much illumination is currently in the house, and several other factors. And yes, I also have it announce who is at the door, and it's smart enough to recognize multiple people in a group. You can even have it check whether they are wearing masks or not. I'm not using Node-RED for any of this, as I prefer to code my automations in YAML, but it could be done that way if you're more comfortable with Node-RED.

3 and 4. I'm using 2 detectors: DeepStack running via Docker Compose on the Jetson Nano, and CompreFace running as an add-on in Home Assistant. You don't have to run more than one detector, but I wanted the redundancy in case one is down or busy. Not to mention it uses a combined score between the two, which makes it more accurate.

  5. Sell it, or install it at the in-laws' or on your fish tank; just nowhere you want reliability. I started with them, and almost every time I needed to look at a camera it would never come up and just plain didn't work. Not to mention the company has pushed back against Home Assistant and others but given the keys to the kingdom to companies like IFTTT. I don't know how many issues I've posted that they honestly just don't care about. I could go on and on about why Wyze is a bad choice; just trust me on this one. After trial and error I went with Amcrest AD410 doorbell units, which run at 5GHz for less interference and more reliability. Plus these are not battery fed; they run on the same power a normal doorbell does, and you can use your original chime. And then ReoLink RLC-520A PoE cameras anywhere else. These are directly linked gigabit Power over Ethernet, so you do have to run a single network cable, and you'll need either a PoE switch or a PoE injector. I picked up a Cisco 3750 48-port PoE gigabit switch with 10GbE uplinks for $150, refurbished with a 1-year warranty. If you can go that route it's the best, as Cisco just flat out works, and works forever. But they are not consumer equipment and you do need to know a bit to set them up. Netgear or TP-Link also do an OK job with managed switches.

Hope that clears a few things up. There are some really good resources out there to walk you through it, but you'll have to adjust for your individual setup. Hit me back if you need any help or have any questions.


jmorris644 commented 2 years ago

@LordNex Thanks for the great response.

I used to use Home Assistant. Found Node-RED and have not gone back. Admittedly, it is a totally different environment, but it fits my skill set better.

I have two spare Jetson Nanos, so I am going to throw another one up and put DeepStack on it along with Node-RED. There are DeepStack Node-RED nodes available; they just make all of the connections really easy. Low code. I will throw Double Take on it too. I will do some more research on CompreFace, but I do like the double verification.

I do a lot verbally in the house; not much touch-screen stuff going on. Currently I am using Alexa, as it is handy and works, but I have been researching and testing other non-cloud voice response systems. Eventually I will get rid of Alexa for the home automation.

I also have an AD410 for my doorbell. I have Node-RED watching the API for messages, and then I have Alexa announce when someone is at the door. I have the AD410 working with Frigate now. I also have 7 Reolink 420-5MP PoE cameras, just like you mentioned. I have a 48-port Aruba managed PoE switch at the core of everything.

I will let you know how everything goes over the next few days.

NicolaiOksen commented 2 years ago

Is there any update on when we might expect GPU support? I've tried for more than a month to get a Coral in Europe but no luck. What I do have is a GTX 1080 Ti just lying around. @blakeblackshear you are doing an amazing job and I'm sorry all I can do is ask "when is it ready".

blakeblackshear commented 2 years ago

I'm hoping that the work done in another branch to add support for the Jetson Nano will help lay the foundation to make this simpler. I don't know that for sure, but if so, it might make the next release.

bwmcin commented 2 years ago

I have been using yury-sannikov's Frigate fork, gstreamer branch (not to use GStreamer, but to use YOLOv4 with TensorRT), and have had some success with that, but I'm still experimenting to verify. I'm running on an AGX, so I don't know how this would pan out for x86. There is an error in the detection/__init__.py file that presents itself if the height and width are different. I would comment on his fork, but haven't figured out how to do that.

NickM-27 commented 2 years ago

@bwmcin You can comment on the PR here: https://github.com/blakeblackshear/frigate/pull/2548

That being said, what model are you using? The yolov4 model is square like the Coral model, so height and width should be the same:

model:
  path: /yolo4/yolov4-tiny-416.trt
  labelmap_path: /labelmap.txt
  width: 416
  height: 416

bwmcin commented 2 years ago

I am working on a custom model, 640x352, experimenting with detection for my house cameras based on person/head detection trained on CrowdHuman. Thanks for the reference to the PR; I'll see if I can figure out how to comment there.

ghzgod commented 2 years ago

If we are not sure about GPU support for detection, could we outsource the detection to DeepStack for an answer and return the answer to Frigate? Currently I run BlueIris in a VM, and it uses a DeepStack Docker powered by my Nvidia GPU for detection. The VM is massive overhead compared to Frigate, and I would like to get away from it. However, as discussed previously, the USB detectors are sold out everywhere. Why not allow another detector service like DeepStack to do the heavy lifting (detection/image processing) for Frigate, since it can use the GPU... for now?

Thank you!

NickM-27 commented 2 years ago

> If we are not sure about GPU support for detection, could we outsource the detection to DeepStack for an answer and return the answer to Frigate? Currently I run BlueIris in a VM, and it uses a DeepStack Docker powered by my Nvidia GPU for detection. The VM is massive overhead compared to Frigate, and I would like to get away from it. However, as discussed previously, the USB detectors are sold out everywhere. Why not allow another detector service like DeepStack to do the heavy lifting (detection/image processing) for Frigate, since it can use the GPU... for now?
>
> Thank you!

Frigate is designed from the ground up to do live detection and logic. Even with a CPU it's ~100-200ms to get a detection result. Offloading to DeepStack would be quite a bit slower (due to communication latency; I'm also not sure how fast DeepStack really is anyway).

#2548 has laid the groundwork for using Nvidia GPUs.

micogg commented 2 years ago

Thanks all who have contributed to this project!

Is there an update on the NVIDIA GPU as an option for detection? I see the note about the Jetson thread laying groundwork, but not sure if there has been tangible movement on the GPU detector.

I wish I had the talent to contribute to this! ha.

NickM-27 commented 2 years ago

> Thanks all who have contributed to this project!
>
> Is there an update on the NVIDIA GPU as an option for detection? I see the note about the Jetson thread laying groundwork, but not sure if there has been tangible movement on the GPU detector.
>
> I wish I had the talent to contribute to this! ha.

A contributor has it working, see https://github.com/blakeblackshear/frigate/discussions/3016

NickM-27 commented 1 year ago

Closing this as support has been added in 0.12.
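For reference, the 0.12 detector config looks roughly like this (a sketch; see the official docs for the exact model-generation steps, and the model path here is just an example):

detectors:
  tensorrt:
    type: tensorrt
    device: 0 # which GPU to run on

model:
  path: /trt-models/yolov7-tiny-416.trt # example model name
  input_tensor: nchw
  input_pixel_format: rgb
  width: 416
  height: 416

The container also needs the GPU passed through, e.g. --gpus all with the NVIDIA container runtime.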

strarsis commented 1 year ago

@NickM-27: This is so awesome! Could I theoretically use a Coral TPU on an Nvidia Jetson board? Nvidia Jetson generally supports Coral TPUs. But can Frigate use both? Or use the Coral for inference and the Jetson for decoding/encoding the video streams?

NickM-27 commented 1 year ago

The Jetson platform is not supported; it should be in the future. Currently TensorRT is only supported on amd64 platforms.

LordNex commented 1 year ago

> The Jetson platform is not supported; it should be in the future. Currently TensorRT is only supported on amd64 platforms.

Can't wait for that. It would be perfect to have Frigate and a USB Coral on a Jetson Nano: let the Coral handle the detections and the GPU on the Nano handle encoding and decoding of the streams. Keeping my fingers crossed anyway. 🤞