Ewarren7 opened this issue 3 years ago
I'm open to this idea, but will have to give it some thought. At the risk of jinxing things, this might not even be that messy, given that I already have some snapshot caching in the plugin. I could conceivably load the image that is sent to the plugin into that cache instead of pulling a fresh one.
I assume your other process would POST the image to the HTTP endpoint? Or would it need to handle passing that image some other way?
Keep in mind though that I cannot fully control when HomeKit requests a new snapshot, so it could still use an image it has cached.
Also, if implemented, I could probably include it in FTP Motion, but adding it to SMTP Motion is unlikely. Based on my limited experimentation, there are just too many ways that cameras handle including images in emails (and some don't include images at all), so I'm not sure there is a clean way to handle it.
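To illustrate the idea (this is not the plugin's actual code; the route, field names, and cache shape here are made up), the endpoint could accept the image in the request body and drop it straight into the snapshot cache so the next HomeKit snapshot request uses it instead of pulling a fresh still:

```typescript
import express from 'express';

// Hypothetical snapshot cache keyed by camera name; the real plugin's
// cache works differently, this only sketches the concept.
const snapshotCache = new Map<string, Buffer>();

const app = express();

// Accept a raw JPEG body, e.g. POST /motion/FrontDoor with Content-Type: image/jpeg.
app.post('/motion/:camera', express.raw({ type: 'image/jpeg', limit: '10mb' }), (req, res) => {
  const camera = req.params.camera;
  if (req.body && req.body.length > 0) {
    // Store the posted image so the next snapshot request can be served from it.
    snapshotCache.set(camera, req.body as Buffer);
  }
  // ...trigger the motion sensor for this camera here...
  res.sendStatus(200);
});

app.listen(8080);
```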
Awesome, I was thinking that with support for a local file as the still source, plus the caching you already have, this was hopefully not too far off.
Correct, SynoAI would POST it in an HTTP request.
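Roughly something like this (just a sketch; the URL and whether the plugin wants a raw body or multipart is up to whatever you end up implementing):

```typescript
// Sketch of the request SynoAI (or any other process) could send when it
// confirms motion: the processed snapshot goes in the request body.
// The URL and content type are placeholders, not an existing API.
import { readFile } from 'fs/promises';

async function postDetection(camera: string, imagePath: string): Promise<void> {
  const image = await readFile(imagePath);
  await fetch(`http://homebridge.local:8080/motion/${encodeURIComponent(camera)}`, {
    method: 'POST',
    headers: { 'Content-Type': 'image/jpeg' },
    body: image,
  });
}

postDetection('FrontDoor', '/snapshots/latest.jpg').catch(console.error);
```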
That is true re SMTP. For me, FTP is less useful since I'd have to add FTP support to the SynoAI code, but I could see other use cases similar to mine where it would be useful.
I’m assuming you’d be hitting the HTTP endpoint directly from SynoAI though, not going through SMTP, right?
That is correct.
Great, once I get the next version out, I'll dig into this. I can't think of a good reason not to support this feature.
Awesome thanks!
@Ewarren7 I'm doing something similar with my cameras and set up a second fake camera that is used for special notifications. With the fake camera, I'm just reading a file on the local file system for the snapshot request, and updating the file and triggering the fake camera when my AI system detects something of interest.
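The fake camera itself is just a normal homebridge-camera-ffmpeg entry whose sources point at that file; roughly something like this (names and paths are only illustrative, not my exact config):

```json
{
  "platform": "Camera-ffmpeg",
  "cameras": [
    {
      "name": "AI Alerts",
      "videoConfig": {
        "source": "-re -loop 1 -i /var/cameras/latest-detection.jpg",
        "stillImageSource": "-i /var/cameras/latest-detection.jpg"
      }
    }
  ]
}
```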
@NorthernMan54 Would the proposed solution of POSTing an image to the HTTP endpoint also work for your needs?
@Sunoo personally I'm fine with the current code base. I find that having a fake camera that only shows the latest AI image is a feature, as I can easily go back and see what was detected, and the real camera always shows the current image. I'm also sharing the single fake camera between multiple real cameras.
I'm using motionEye and Amazon Rekognition to detect people and animals, which for me works really well.
That's funny, it occurred to me last night that this enhancement would be useful for this exact type of thing: having a fake camera that covers multiple real cameras that I don't really need to fully add to homebridge (I can VPN in and see them if needed), but would like to get AI motion alerts from, with an image.
For me though, DeepStack and SynoAI are running in Docker on one computer and homebridge is on a Raspberry Pi, so I would need to work out a way behind the scenes to get the image over to the Pi, or present it on a server the Pi can hit for the still image. Given that I already have to webhook the Pi to trigger the motion alert, having the image go along with it is a very clean solution for my use case.
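As a stopgap, I could probably just serve the latest SynoAI snapshot over HTTP from the Docker host and point the Pi's camera at that. A rough sketch of what I mean (host, port, and path here are just placeholders):

```typescript
// Minimal sketch: serve the most recent detection image over HTTP so the
// Pi running homebridge can pull it as the camera's still image source.
import { createServer } from 'http';
import { readFile } from 'fs/promises';

const LATEST_IMAGE = '/data/synoai/latest.jpg';

createServer(async (_req, res) => {
  try {
    const image = await readFile(LATEST_IMAGE);
    res.writeHead(200, { 'Content-Type': 'image/jpeg' });
    res.end(image);
  } catch {
    res.writeHead(404);
    res.end();
  }
}).listen(8081);
```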
In my setup for orchestrating the various technology components I’m using node-red on the same host as homebridge so I can use the local file system to share files etc.
and for motion detection I'm using motionEye, with Amazon Rekognition for the AI.
Thanks, node-red and motionEye look interesting. I will have to check those out more.
Hey, thanks for all the work on one of the best HB plugins.
I recently have been experimenting with running my Synology cameras through DeepStack to get "AI"-based alerts. SynoAI is a great repo for this if you use Synology Surveillance Station.
I can have it webhook back when it confirms AI-based motion to trigger a motion alert on the ffmpeg camera, which is awesome. My problem, though, is that this arrives usually about 4-5 seconds after the original motion that triggered my whole AI pipeline, and by the time homebridge-camera-ffmpeg re-requests a still image from the camera, the person or car has often already gone out of frame.
SynoAI supports posting the processed image along with its notification, which would be perfect if I could use it as the still image homebridge-camera-ffmpeg shows in the motion alert.
Similarly, if https://github.com/Sunoo/homebridge-smtp-motion could use the image attached to the email as the source of the still image, that would also solve this, since SynoAI supports email notifications.
I have been considering other ways I might be able to shuttle the image over to homebridge and use a local path as the still image source, as described here: https://github.com/Sunoo/homebridge-camera-ffmpeg/issues/84#issuecomment-301576750
Cheers