One of my goals with this project is to extend the webhook to allow you to post the actual image file. I hadn't thought about posting an image URL but that could work too.
The nice thing about integrating with Blue Iris is that it can manage the file storage for you by deleting old files automatically. If I open up the webhook to posting image files, then I'll need to build out the file management too so that the server doesn't fill up with gigs of image files. None of this is novel or hard, but it would take some time to stand up.
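Just to make that concrete, here is a rough sketch of what an extended endpoint plus a naive cleanup job could look like. This is purely illustrative (TypeScript/Express rather than the project's actual stack), and the endpoint name, request fields, storage path, and retention window are all made up:

```typescript
// Hypothetical sketch of an extended webhook that accepts either a posted
// image (base64 in JSON) or an image URL. Endpoint name, fields, storage
// path, and retention window are all invented for illustration.
import express from "express";
import fs from "fs/promises";
import path from "path";

const STORAGE_DIR = "/tmp/lastwatch-events"; // assumed storage location

const app = express();
app.use(express.json({ limit: "10mb" }));

app.post("/api/events", async (req, res) => {
  const { imageBase64, imageUrl } = req.body ?? {};

  let bytes: Buffer;
  if (imageBase64) {
    // Option A: the caller posts the image bytes directly.
    bytes = Buffer.from(imageBase64, "base64");
  } else if (imageUrl) {
    // Option B: the caller posts only a URL and the server fetches it.
    const resp = await fetch(imageUrl); // Node 18+ global fetch
    bytes = Buffer.from(await resp.arrayBuffer());
  } else {
    return res.status(400).send("imageBase64 or imageUrl required");
  }

  await fs.mkdir(STORAGE_DIR, { recursive: true });
  const file = path.join(STORAGE_DIR, `${Date.now()}.jpg`);
  await fs.writeFile(file, bytes);
  // ...hand the file path to the existing detection pipeline here...
  res.status(201).json({ file });
});

// Naive file management so the server doesn't fill up: hourly, delete
// anything older than 7 days.
setInterval(async () => {
  const cutoff = Date.now() - 7 * 24 * 60 * 60 * 1000;
  const names = await fs.readdir(STORAGE_DIR).catch(() => [] as string[]);
  for (const name of names) {
    const full = path.join(STORAGE_DIR, name);
    if ((await fs.stat(full)).mtimeMs < cutoff) await fs.unlink(full);
  }
}, 60 * 60 * 1000);

app.listen(8080);
```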
What does a web request from Shinobi look like?
Shinobi's API is quite extensive. So far I haven't even begun to scratch the surface ;)
https://shinobi.video/docs/api
However, Shinobi can produce a snapshot jpeg URL where a camera stream can be viewed as a jpeg that updates every X seconds:
http://xxx.xxx.xxx.xxx/[API KEY]/jpeg/[GROUP KEY]/[MONITOR ID]/s.jpg
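For illustration, grabbing a single frame from that snapshot URL is just an HTTP GET. A minimal sketch, with the host, keys, and monitor ID as placeholders:

```typescript
// Fetch one frame from Shinobi's snapshot jpeg endpoint.
// The host, API key, group key, and monitor ID are placeholders.
const SNAPSHOT_URL =
  "http://192.168.1.10:8080/API_KEY/jpeg/GROUP_KEY/MONITOR_ID/s.jpg";

async function grabSnapshot(): Promise<Buffer> {
  const resp = await fetch(SNAPSHOT_URL);       // Node 18+ global fetch
  if (!resp.ok) throw new Error(`snapshot request failed: ${resp.status}`);
  return Buffer.from(await resp.arrayBuffer()); // jpeg bytes, kept in memory
}

grabSnapshot().then((jpeg) => console.log(`got ${jpeg.length} bytes`));
```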
Shinobi can also call a webhook once motion is detected.
So my proposed workflow was:

1. Shinobi detects motion on a monitor.
2. Shinobi calls a last-watch-ai webhook and passes along the monitor ID.
3. last-watch-ai reads the current frame from that monitor's snapshot jpeg URL.
4. last-watch-ai runs detection on that frame and then discards it.
This flow would not create gigs of image files, as they are never stored on the drive but are generated dynamically at the URL. So last-watch-ai should (maybe) only need the image for detection and can discard it afterwards. This is me, a completely non-dev/coding person, talking, so this might not be possible :)
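A rough sketch of what that flow could look like, assuming the webhook hands over the monitor ID; the host, keys, and the detectObjects() stub below are all placeholders, not real last-watch-ai code:

```typescript
// Sketch of the proposed flow: Shinobi calls a webhook with the monitor ID,
// last-watch-ai pulls the current frame from the snapshot URL, runs detection
// on it in memory, and throws it away.

// Dummy stand-in for whatever detector actually runs.
async function detectObjects(jpeg: Buffer): Promise<string[]> {
  return jpeg.length > 0 ? ["person"] : [];
}

async function handleMotionEvent(monitorId: string): Promise<string[]> {
  // Pull the current frame for this monitor (placeholder host and keys).
  const url = `http://192.168.1.10:8080/API_KEY/jpeg/GROUP_KEY/${monitorId}/s.jpg`;
  const resp = await fetch(url);                      // Node 18+ global fetch
  const jpeg = Buffer.from(await resp.arrayBuffer()); // frame stays in memory

  // Run detection and return the labels; the jpeg buffer is simply
  // garbage-collected afterwards, nothing is written to disk.
  return detectObjects(jpeg);
}

handleMotionEvent("MONITOR_ID").then((labels) => console.log(labels));
```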
The file would have to be accessible by the web server in order to run it through the AI. The entire app is built around the assumption that the image file is accessible by the web server. Note that this could include shared folders; the images don't have to physically reside on the web server, they just have to be accessible through the file system.
Either the image data would need to be posted to the webhook, or the image URL would be posted. I see Shinobi provides an API to get images, but where is the API that triggers on motion events? I could expose a new webhook endpoint for Shinobi, but I have no idea what format it sends. I don't see anything in the API docs about pushing motion events.
The documentation is not the best ;) I currently use node-red to receive webhooks from Shinobi. On a motion detection, Shinobi can call a webhook (GET or PUT, I think) and you can add variables. So, for example, when a monitor detects motion I call:

http://10.10.10.3:1880/shinobi/{{MONITOR_ID}}

This automatically inserts the monitor ID, and from there I have node-red grab the jpeg image from the snapshot stream and pass it on to DOODS, TensorFlow, or DeepStack. Here you can find some of the variables:

https://gitlab.com/Shinobi-Systems/Shinobi/-/blob/master/libs/events/utils.js

AFAIK Shinobi does not allow sending a jpeg to a webhook. But maybe there is a simpler solution and I'm just overengineering the whole thing ;)
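Roughly, the node-red flow does something like the sketch below: receive Shinobi's webhook, grab the snapshot jpeg, and pass it on. The hosts, keys, and detector endpoint here are placeholders; this is not DOODS's or DeepStack's actual API, just the shape of the flow:

```typescript
// Rough equivalent of the node-red flow: Shinobi calls GET /shinobi/:monitorId
// on motion, this bridge grabs that monitor's snapshot jpeg and forwards it
// to a detector. All hosts, keys, and the detector endpoint are placeholders.
import express from "express";

const SHINOBI = "http://10.10.10.2:8080";
const API_KEY = "API_KEY";
const GROUP_KEY = "GROUP_KEY";
const DETECTOR_URL = "http://10.10.10.4:5000/detect"; // placeholder endpoint

const app = express();

app.get("/shinobi/:monitorId", async (req, res) => {
  const { monitorId } = req.params;

  // 1. Grab the current frame from the monitor's snapshot stream.
  const snapUrl = `${SHINOBI}/${API_KEY}/jpeg/${GROUP_KEY}/${monitorId}/s.jpg`;
  const snap = await fetch(snapUrl);                   // Node 18+ global fetch
  if (!snap.ok) return res.status(502).send("snapshot fetch failed");
  const jpeg = await snap.arrayBuffer();

  // 2. Pass the jpeg on to the detector and relay its answer.
  const detection = await fetch(DETECTOR_URL, {
    method: "POST",
    headers: { "Content-Type": "image/jpeg" },
    body: Buffer.from(jpeg),
  });
  res.status(detection.status).send(await detection.text());
});

app.listen(1880); // same port the node-red webhook listens on in the example
```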
I'm currently experimenting with using Shinobi as my NVR software. However, unlike BI or Zoneminder, Shinobi doesn't have an option to save jpegs of events to a folder (unless you do this with a user script). It can, however, call webhooks, and it has a snapshot URL for each camera. So my thinking was that an alternate workflow could be to alert last-watch-ai via a webhook and have last-watch-ai then read the image from a snapshot URL. I know motionEye also supports webhooks, so that would be another piece of software this solution could work with.