rsteckler opened this issue 2 years ago
The ONVIF snapshot will only work if you set it up, like I described
Sorry, I missed where it was described. Does that go into the Frigate config?
You set it up in go2rtc. Right now, what you have is essentially taking the RTSP stream, pulling a key frame, and encoding it as a JPEG, which uses CPU. Since Double Take hits this often, I'm just using ONVIF to request a snapshot from the camera instead.
Hi everyone,
sorry if my post doesn't belong here; I don't know where else to ask. I use Frigate on my Unraid server, and I managed to use Node-RED and Frigate's MQTT connection to send snapshots of events to my Telegram bot (with poor image quality).
I configured go2rtc and I am able to access the webUI.
I can reach the address http://my_local_ip:1984/api/frame.mp4?src=CAM1 to get a snapshot of my high-resolution stream.
Now I am not able to use this snapshot in Node-RED to send this image instead of the original one (the one from MQTT events). Could anyone help me with this, please?
Here is the flow I use:
I can share the config if needed
Dang Nick, I'll bet that's exactly what I did and why it spiked my CPU usage! Just to clarify before I dig any further: this solution requires ONVIF to get the high-resolution image without the CPU usage you described earlier?
Thanks again!
If you work out how to do it, please post for the rest of us in the simple version …
Yes, your camera needs to support ONVIF. It really is quite simple: open the Add page in the go2rtc dashboard, then open the ONVIF tab and wait for it to find your camera. Copy and paste that camera entry, change user:password to the actual user:password, then press test. It will show all streams and snapshots that the camera offers. Copy the snapshot one and create a new stream; then you can use that stream with the frame.jpeg API endpoint.
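For reference, a minimal sketch of what the resulting go2rtc config might look like (the stream names, credentials, addresses, and ONVIF port here are made-up examples; use the exact URL the test button reports for your camera):

```yaml
go2rtc:
  streams:
    # normal RTSP stream (example address)
    front_cam: rtsp://user:pass@192.168.1.10:554/main
    # snapshot URL discovered via the ONVIF tab (example address)
    front_cam_snapshot: onvif://user:pass@192.168.1.10:8000?subtype=000&snapshot
```

The snapshot can then be fetched from the go2rtc API at http://<go2rtc_host>:1984/api/frame.jpeg?src=front_cam_snapshot.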
Unfortunately, most of my cameras don't support ONVIF. Two out of six do, but they're not the ones I'm interested in getting a high-resolution snapshot from. :/
So I don't think that solution will work for me, sadly.
Going back to the original suggestions, I wish I could just decode a single frame from the incident timeline in high resolution, and then maybe input a time offset if the high-res/low-res streams aren't synced up.
Definitely not a priority or anything, though. Frigate is great for the NVR stuff. I just wish there was an easier way to get high-quality snaps!
This endpoint already exists in the upcoming 0.13 release, but it can only be used on recording segments that have been moved out of the cache
Oh awesome! Thanks, I can definitely just wait for the 0.13 release. Not sure about how to move from the cache, but once the release is live I'll do some tinkering.
Thanks, you all are great!
This is not something you as the user control; basically, it means the segment has to finish recording and be moved to storage, so you'd need to wait at most 10 seconds from when the recording took place before being able to call the API
Where's the ONVIF tab? I have ANNKE C500s which support ONVIF
it is not available in go2rtc 1.2.0
Ah, that'll be it. OK, maybe I'll fire up 0.13 again. It kept crashing after a few minutes when I tried last week on Unraid, but I think an update has shipped since.
You can install another version of go2rtc with Frigate 0.12
This is my go2rtc dashboard, running with Frigate 0.12 on Unraid server
Unfortunately, my Reolink cameras don't support ONVIF :(
So I've manually upgraded go2rtc. Then I found I had to enable ONVIF on all my cameras, as it's disabled by default.
I have ONVIF feeds detected in go2rtc (I have to add them manually as they're on a different subnet), but it won't let me save the new config in go2rtc; it says 'open /dev/shm/go2rtc.yaml: permission denied'.
I'm on Unraid; how can I fix the permission?
You don't fix the permission; you should not be overwriting that file. You should be adding the stream to the main Frigate config file.
OK, still looking for the full context. How do you trigger the snapshot?
Do you change the feed for the snapshot, like you can have different feeds for detect & record? Should it produce a snapshot if I click the link to frame.jpeg?
At present I get this: Not Found
The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.
You hit the links and then use the frame.jpg link. For example, Double Take has a config where you can change the default link for the latest type, and you can set it to use that link for latest snapshots. The same goes for any other automation or logic that uses a link.
If I put that frame.jpeg link into my web browser, should it load a snapshot to verify the link is working? I have tested that ONVIF is working via another app ('IPcams'), but I need to double-check that go2rtc is implementing it correctly.
If I have the ONVIF link from go2rtc, where do I add it? -> onvif://admin:PW@192.168.68.50:8000?subtype=000&snapshot
I don't understand this part: "Copy the snapshot one and create a new stream, then you can use that stream with the frame.jpeg API endpoint"
You add it to the go2rtc section of the Frigate config:
go2rtc:
streams:
front_cam: rtsp://...
front_cam_snapshot: onvif://...
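If you want to hit such a snapshot stream from a script or automation, the URL is just the go2rtc frame.jpeg endpoint with the stream name as the src parameter. A small Python sketch (the host, port, and stream name are hypothetical examples):

```python
from urllib.parse import urlencode

def go2rtc_frame_url(host: str, stream: str, port: int = 1984) -> str:
    """Build the go2rtc frame.jpeg URL for a named stream."""
    return f"http://{host}:{port}/api/frame.jpeg?" + urlencode({"src": stream})

# Example with a hypothetical snapshot stream name:
url = go2rtc_frame_url("192.168.68.59", "front_cam_snapshot")
# -> http://192.168.68.59:1984/api/frame.jpeg?src=front_cam_snapshot
```

Fetching that URL (with curl, a browser, or a Node-RED http request node) should return a JPEG if the stream is set up correctly.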
Thanks! I did this, but I am still unsure how to finally connect it so that it takes the snapshots via ONVIF. This is my Frigate config:
# Setup guide: https://www.youtube.com/watch?v=gQdtGLRzKRI&t=550s
# https://github.com/blakeblackshear/frigate/issues/2199
mqtt:
enabled: True
host: 192.168.68.59
#port: 1883
user: mqtt-user
password: Poelich123
topic_prefix: frigate
client_id: frigate
go2rtc:
streams:
username: "admin"
password: "PW"
Cam_Outdoor_Side_snapshot_FG: onvif://admin:PW@192.168.68.50:8000?subtype=000&snapshot
#Cam_Outdoor_Side_FG_main: rtsp://admin:PW@192.168.68.50:554/h264Preview_01_main
#Cam_Outdoor_Side_FG_sub: rtsp://admin:PW@192.168.68.50:554/h264Preview_01_sub
ffmpeg:
hwaccel_args: preset-vaapi
# Optional: logger verbosity settings
logger:
# Optional: Default log verbosity (default: shown below)
default: info
# Optional: Component specific logger overrides
logs:
frigate.event: debug
birdseye:
enabled: True
mode: continuous
restream: False
#width: 640
#height: 480
quality: 10
detectors:
coral_pci:
type: edgetpu
device: pci
cameras:
Cam_Outdoor_Side_FG:
ffmpeg:
inputs:
- path: rtsp://admin:PW@192.168.68.50:554/h264Preview_01_main
roles:
- record
- rtmp
- path: rtsp://admin:PW@192.168.68.50:554/h264Preview_01_sub
roles:
- detect
detect:
width: 640
height: 480
fps: 10
snapshots:
enabled: True
mqtt:
crop: True
timestamp: False
bounding_box: False
height: 500
quality: 100
record:
enabled: True
retain:
days: 7
mode: motion
events:
retain:
default: 14
mode: active_objects
objects:
person: 14
dog: 2
car: 14
Cam_Hof_FG:
ffmpeg:
inputs:
- path: rtsp://admin:PW@192.168.68.51:554/h264Preview_01_main
roles:
- record
- rtmp
- path: rtsp://admin:PW@192.168.68.51:554/h264Preview_01_sub
roles:
- detect
detect:
width: 640
height: 480
fps: 10
mqtt:
crop: True
timestamp: False
bounding_box: False
height: 500
quality: 100
snapshots:
enabled: True
record:
enabled: True
retain:
days: 7
mode: motion
events:
retain:
default: 14
mode: active_objects
objects:
person: 14
dog: 2
car: 14
Cam_Seite_Band_FG:
ffmpeg:
inputs:
- path: rtsp://admin:PW@192.168.68.54:554/h264Preview_01_main
roles:
- record
- rtmp
- path: rtsp://admin:PW@192.168.68.54:554/h264Preview_01_sub
roles:
- detect
detect:
width: 640
height: 480
fps: 10
snapshots:
enabled: True
record:
enabled: True
retain:
days: 7
mode: motion
events:
retain:
default: 14
mode: active_objects
objects:
person: 14
dog: 2
car: 14
Cam_Front_Band_FG:
ffmpeg:
inputs:
- path: rtsp://admin:PW@192.168.68.69:554/h264Preview_01_main
roles:
- record
- rtmp
- path: rtsp://admin:PW@192.168.68.69:554/h264Preview_01_sub
roles:
- detect
detect:
width: 640
height: 480
fps: 10
snapshots:
enabled: True
mqtt:
crop: True
timestamp: False
bounding_box: False
height: 500
quality: 100
record:
enabled: True
retain:
days: 7
mode: motion
events:
retain:
default: 14
mode: active_objects
objects:
person: 14
dog: 2
car: 14
Cam_Gastank_FG:
ffmpeg:
inputs:
- path: rtsp://admin:PW@192.168.68.56:554/h264Preview_01_main
roles:
- record
- rtmp
- path: rtsp://admin:PW@192.168.68.56:554/h264Preview_01_sub
roles:
- detect
detect:
width: 640
height: 480
fps: 10
snapshots:
enabled: True
mqtt:
crop: True
timestamp: False
bounding_box: False
height: 500
quality: 100
record:
enabled: True
retain:
days: 7
mode: motion
events:
retain:
default: 14
mode: active_objects
objects:
person: 14
dog: 2
car: 14
Cam_Outdoor_Haupteingang_FG:
ffmpeg:
inputs:
- path: rtsp://admin:PW@192.168.68.57:554/h264Preview_01_main
roles:
- record
- rtmp
- path: rtsp://admin:PW@192.168.68.57:554/h264Preview_01_sub
roles:
- detect
detect:
width: 640
height: 480
fps: 10
snapshots:
enabled: True
mqtt:
crop: True
timestamp: False
bounding_box: False
height: 500
quality: 100
record:
enabled: True
retain:
days: 7
mode: motion
events:
retain:
default: 14
mode: active_objects
objects:
person: 14
dog: 2
car: 14
Cam4K_Tor_FG:
ffmpeg:
inputs:
- path: rtsp://admin:PW@192.168.68.53:554/h264Preview_01_main
roles:
- record
- rtmp
- path: rtsp://admin:PW@192.168.68.53:554/h264Preview_01_sub
roles:
- detect
detect:
width: 640
height: 360
fps: 10
snapshots:
enabled: True
mqtt:
crop: True
timestamp: False
bounding_box: False
height: 500
quality: 100
record:
enabled: True
retain:
days: 7
mode: motion
events:
retain:
default: 14
mode: active_objects
objects:
person: 14
dog: 2
car: 14
Cam4K_Regal_FG:
ffmpeg:
inputs:
- path: rtsp://admin:PW@192.168.68.63:554/h264Preview_01_main
roles:
- record
- rtmp
- path: rtsp://admin:PW@192.168.68.63:554/h264Preview_01_sub
roles:
- detect
detect:
width: 640
height: 360
fps: 10
snapshots:
enabled: True
mqtt:
crop: True
timestamp: False
bounding_box: False
height: 500
quality: 100
record:
enabled: True
retain:
days: 7
mode: motion
events:
retain:
default: 14
mode: active_objects
objects:
person: 14
dog: 2
car: 14
I think there is some confusion; you cannot configure Frigate to do this itself (otherwise this issue would be closed). You CAN configure Double Take to use the snapshot URL (which is one of the main use cases for wanting higher-resolution snapshots)
Ah! I see. Could you share your Double Take config, so I could see how to integrate the snapshot?
here is an example:
frigate:
...
events:
front_cam:
image:
height: 1440
latest: http://192.168.50.106:1984/api/frame.jpeg?src=front_cam_snapshot
Thanks! Like this?
# Double Take
# Learn more at https://github.com/jakowenko/double-take/#configuration
# ui: ->:3000
go2rtc:
streams:
Cam_Outdoor_Side_Snapshot_FG: onvif://admin:PW@192.168.68.50:8000?subtype=000&snapshot
mqtt:
host: 192.168.68.59:1883 # mqtt host
user: mqtt-user
password: !secret mqtt_password
topics:
# mqtt topic for frigate message subscription
frigate: frigate/events
# mqtt topic for home assistant discovery subscription
homeassistant: homeassistant
# mqtt topic where matches are published by name
matches: double-take/matches
# mqtt topic where matches are published by camera name
cameras: double-take/cameras
frigate:
url: http://ccab4aaf-frigate:5000
#url: http://192.168.68.59:5000
image:
height: 500
attempts:
latest: 15
snapshot: 0
events:
Cam_Outdoor_Side_DT:
image:
height: 1440
latest: http://192.168.50.106:1984/api/frame.jpeg?src=Cam_Outdoor_Side_Snapshot_FG
detectors:
compreface:
url: http://homeassistant.local:8000
key: !secret compreface_key
timeout: 15
# minimum required confidence that a recognized face is actually a face
# value is between 0.0 and 1.0
det_prob_threshold: 0.8
# require opencv to find a face before processing with detector
opencv_face_required: false
yes that should be fine
Thanks! I think there is a little error. After latest I used the IP address and port from your example. Should this be my MQTT IP address and port, or the camera's?
It's neither; it is the IP of Frigate, and 1984 is the port for the go2rtc API (which needs to be enabled on the Docker side as well)
Ah, I see. I did not install Docker but did everything via Home Assistant add-ons. My Frigate URL is, for example, url: http://ccab4aaf-frigate:5000. Do you have a clue what to do in this case?
enable the 1984 port in the addon config
Sorry, I did not get it to work.
In the home assistant go2rtc addon config there is no way to enable the 1984 port.
There is; you just edit the ports, and where it has 1984 you enter that same number in the text box
Where do I find the IP of Frigate if I don't run it in a container but as a Home Assistant add-on?
It is just the IP of the host running home assistant
You might as well just use a higher-resolution stream for detect. Whatever resolution you set for detect: width: / height: will resize the detect stream to that. For example, for my doorbell camera I just use my main stream (2560x1920) and set detect to quarter size, which is plenty for Double Take while also not being too much work.
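As a sketch, that quarter-size approach might look like this in the Frigate config (the camera name and RTSP path are made-up placeholders):

```yaml
cameras:
  doorbell:
    ffmpeg:
      inputs:
        # single 2560x1920 main stream used for everything
        - path: rtsp://user:pass@192.168.1.20:554/main
          roles:
            - detect
            - record
    detect:
      # Frigate scales the decoded frames down to this size;
      # half of each dimension = a quarter of the pixel count
      width: 1280
      height: 960
```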
In this case, what will be the source image for the snapshot and extracted thumbnail - the original stream's frame or the downsampled one?
What scaling mode/method/algorithm (sorry, not sure of the correct word to use here) does Frigate use? Maybe you could allow the user to choose, so we can figure out which is best suited for our use cases, like for those who prefer quality over resource/power consumption. For some users, tweaking the scaler might be a good enough compromise.
It would be a snapshot of size detect.width x detect.height
This is all built into ffmpeg and depends on the hardware decoder that is used
I ended up just increasing the bitrate of the detection stream, then making sure snapshots are also at 100 quality. Works great with the existing setup.
Would this be hard to implement (keep & save the original frame as a snapshot)? I'm not familiar with the codebase yet, but this sounds like one of the viable solutions.
It couldn't be done from the same ffmpeg process; it would need to be a separate process, which has the issues raised above
I'm also interested in a variation of this solution.
I've built a flow in Node-RED that's triggered by Frigate's detection of a car; it pulls a snapshot directly from the camera and posts it to codeproject.ai to read the plate. It would be amazing for Frigate to have an integration so the snapshots and clips would be searchable by the returned plate.
This can already be done; I have a similar setup in HA. Based on the plate, the sub label is set on the car event
Well now I don't feel so special.... mind pointing me toward some docs for that implementation?
https://docs.frigate.video/integrations/api#post-apieventsidsub_label is the relevant API
I am planning on writing up a guide at some point once I get some more testing and refining in for it
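As a rough sketch of calling that endpoint from a script, the request shape looks something like this (the host, event id, and plate value are made-up examples; the JSON key "subLabel" should be checked against your Frigate version's API docs):

```python
import json
from urllib import request

def build_sub_label_request(frigate_url: str, event_id: str, sub_label: str) -> request.Request:
    """Build a POST /api/events/<id>/sub_label request that sets a sub label."""
    body = json.dumps({"subLabel": sub_label}).encode()
    return request.Request(
        f"{frigate_url}/api/events/{event_id}/sub_label",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Hypothetical usage: tag a car event with the plate read by codeproject.ai
req = build_sub_label_request("http://192.168.1.90:5000", "1700000000.123456-abcd12", "ABC123")
# request.urlopen(req)  # uncomment to actually send it
```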
Nice!
I found this feature request, and I would love a Frigate-native way of handling this. During setup I tuned down most of my sub-streams to lower quality and FPS, so my snapshots are pretty bad. I might just put detect on the primary stream for now and see if my hardware cares. However, I would love to add a snapshot role to the record stream and get better quality even if it isn't the SAME frame. Even better if it were run through the Coral device with labels added.
Hi @NickM-27, I am trying to follow the discussion around ONVIF, but I can't seem to get it to work.
I would appreciate your expert eye on this to let me know what I'm doing wrong. I'm running everything in Docker on Unraid. 192.168.1.52 is the IP of my cameras on an NVR; 192.168.1.90 is the IP of my Unraid server.
# Double Take
# Learn more at https://github.com/jakowenko/double-take/#configuration
go2rtc:
streams:
snapshot_test: onvif://<user>:<pass>@192.168.1.52:80?subtype=000&snapshot
mqtt:
host: 192.168.1.90:1883
username: <mqtt_user>
password: <mqtt_pass>
frigate:
url: http://192.168.1.90:30058
update_sub_labels: true
labels:
- person
events:
snapshot_test:
image:
height: 1440
latest: http://192.168.1.90:1984/api/frame.jpeg?src=snapshot_test
topics:
# mqtt topic for frigate message subscription
frigate: frigate/events
# mqtt topic for home assistant discovery subscription
homeassistant: homeassistant
# mqtt topic where matches are published by name
matches: double-take/matches
# mqtt topic where matches are published by camera name
cameras: double-take/cameras
detectors:
# compreface:
# url: http://192.168.1.90:15000
deepstack:
url: http://192.168.1.90:5000
Did you put the go2rtc config in the Double Take config file? That's supposed to be part of the Frigate config.
Describe what you are trying to accomplish and why in non technical terms
I want to be able to run detection on a lower, more efficient stream, but then use a higher-resolution stream to pass the MQTT image and for saving the snapshot. This will let me stay efficient on detection, but get high-res snapshots via MQTT to other apps like HA, Double Take, and custom scripts.
Describe the solution you'd like
A "snapshots" role in the camera section would allow separation between the detection stream (which is currently used for snapshots) and the stream used to capture snapshots. Further, an "mqtt_snapshot" role could be specified for the images passed over MQTT on detection events.
Describe alternatives you've considered
Running the detection on the high-res stream, but that's inefficient with many cameras.