HclX / WyzeHacks

Hacks I discovered allowing Wyze camera owners to do customizations
MIT License

Option to create a snapshot on interval #49

Open jzhvymetal opened 3 years ago

jzhvymetal commented 3 years ago

I want to use ZoneMinder to monitor for motion. Is there any way to add an option that creates a JPEG snapshot with the same filename at a given interval, overwriting it each time? This would allow ZoneMinder to monitor the file as a motion source.

https://wiki.zoneminder.com/How_to_use_ZoneMinder_with_cameras_it_may_not_directly_support

beaverdude commented 3 years ago

isn't it easier to install the official RTSP fw and use ZM's ffmpeg source?

jzhvymetal commented 3 years ago

> isn't it easier to install the official RTSP fw and use ZM's ffmpeg source?

Maybe, but that would require an RTSP server, which would demand more resources from the camera and more network bandwidth to stream RTSP. Since the NFS share is already being utilized, creating a snapshot image should not be that resource-intensive. Also, the RTSP firmware never gets any updates.

beaverdude commented 3 years ago

Actually, it is possible to grab a still image directly from the sensor: https://honeylab.hatenablog.jp/entry/2020/06/01/024353 But it is in NV12 format and requires conversion. Maybe this would be helpful: https://github.com/andyongg/yuv2image

jzhvymetal commented 3 years ago

> actually it is possible to grab a still image directly from the sensor: https://honeylab.hatenablog.jp/entry/2020/06/01/024353 But it is in NV12 format and requires conversion. Maybe this would be helpful: https://github.com/andyongg/yuv2image

I read the page with Google Translate, but there should already be a native way inside the camera, because it creates a JPG in the alarm directory on each motion event.

jzhvymetal commented 3 years ago

> actually it is possible to grab a still image directly from the sensor: https://honeylab.hatenablog.jp/entry/2020/06/01/024353 But it is in NV12 format and requires conversion. Maybe this would be helpful: https://github.com/andyongg/yuv2image

For the life of me I could not get yuv2image to compile because of its OpenCV requirements. I did find an alternative and can get 5 fps. Not sure if it is truly 5 fps, but it is updating in ZoneMinder.

  1. Copy the following file to your WyzeCam's NFS share: https://github.com/EliasKotlyar/Xiaomi-Dafang-Hacks/blob/master/firmware_mod/bin/avconv

  2. Run the following shell script:

    #!/bin/sh

    cp /mnt/WyzeCams/avconv /tmp/avconv
    while :
    do
        impdbg --save_pic /tmp/output.nv12 --pic_type NV12 &
        /tmp/avconv -loglevel quiet -y -f rawvideo -pixel_format nv12 -s 1920x1080 -i /tmp/output.nv12 -vf fps=1 /media/mmc/output.jpg
        sleep 0.1
    done

Semag commented 3 years ago

@jzhvymetal this is really interesting... I've been monitoring the issues here due to the breakage on the latest firmware and saw your update.

I've been contemplating whether it would be possible to use the on-camera smarts of the wyze to reduce the amount of network traffic and prevent a constant recording stream

I'm basically thinking about whether we can send a hook to software (like zoneminder) when new video is recorded because the camera has determined there is motion. That would allow for off camera local person detection etc. Is that what you're looking at?

gtxaspec commented 3 years ago

nice! off topic...have you tried to get the RTSP server from dafang hacks working on this? I couldn't!

jzhvymetal commented 3 years ago

> nice! off topic...have you tried to get the RTSP server from dafang hacks working on this? I couldn't!

RTSP is not so easy because the Wyze program on the camera locks the camera's V4L device, so it cannot be shared. If you kill the Wyze program, it will respawn or reboot the camera. Not sure why no one has ever tried compiling v4l2loopback so the camera device could be shared with another program. That way avconv, ffmpeg, or the RTSP server could all use the same device. With access to the V4L device, avconv and ffmpeg have the ability to stream directly to RTSP, so nothing else would be required.
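On a system where the v4l2loopback module is available (it would still have to be cross-compiled for the camera's kernel, which nobody in this thread has done), the idea sketched above would look roughly like this. The device names and the mirroring command are illustrative, not something tested on a Wyze cam:

```shell
#!/bin/sh
# Hypothetical sketch of the v4l2loopback idea: create a virtual video
# device, then mirror a readable source into it so a second program
# (an RTSP server, another ffmpeg, etc.) can open the loopback device.
setup_loopback() {
  # load the module; by default this creates a device such as /dev/video1
  modprobe v4l2loopback
  # feed the loopback from the real sensor device (only possible if the
  # writing program, not the Wyze app, holds the capture device)
  ffmpeg -f v4l2 -i /dev/video0 -f v4l2 /dev/video1
}

# Example usage (as root, on a stock Linux box):
# setup_loopback
```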

Semag commented 3 years ago

@jzhvymetal / @gtxaspec - just to update this thread, I'm really close to having what I'd like.

The Wyze cam stores self-created snapshots (jpg) in the alarm directory when motion is detected.

I've installed a DeepStack Docker container and integrated DeepStack with Home Assistant for person detection (https://siytek.com/home-assistant-person-detection/).

Now the part I'm working on / struggling with is getting a "folder_watcher" integration on Home Assistant to watch the remote "alarm" NFS folder. As an alternative, I've been thinking this afternoon of trying to mount one of my Home Assistant Samba folders on the NFS server so that the alarm JPGs go to the Home Assistant server instead.

If I can get the folder_watcher to fire an automation when the JPG is created, then I can pass it to DeepStack and run person detection on the image. I believe this hits a few key points:

In my mind, this keeps extraneous traffic off my wireless band (with multiple cameras), cuts down on false events, and keeps a lot of the processing local.

endertable commented 3 years ago

> nice! off topic...have you tried to get the RTSP server from dafang hacks working on this? I couldn't!
>
> RTSP is not so easy because the Wyze program on the camera locks the camera's V4L device, so it cannot be shared. If you kill the Wyze program, it will respawn or reboot the camera. Not sure why no one has ever tried compiling v4l2loopback so the camera device could be shared with another program. That way avconv, ffmpeg, or the RTSP server could all use the same device. With access to the V4L device, avconv and ffmpeg have the ability to stream directly to RTSP, so nothing else would be required.

Do you know of anyone who has been able to create a new /dev/video? device for another program to use? I am not really interested in RTSP, as it is very unreliable, quirky, and delayed. I would rather stream MJPEG; even though it is more intense and heavier on bandwidth, it is usually closer to real time.

Semag commented 3 years ago

Just to give an update on where I'm at here, I currently have the following setup:

This all seems to work pretty smoothly thus far, and allows me to turn off all notifications on the wyze cams (and thus remove almost all false alarms).

endertable commented 3 years ago

@Semag Hi, this sounds great! Good implementation with very good reasons. I understand all of it except for the "calls a deepstack person detection docker container" part. Is this some kind of program suite? Can you elaborate? Sounds like something I'd love to try. :) Thanks

Semag commented 3 years ago

@endertable -

So I was looking at a way to do person detection locally, kind of as a fun project. I found this (1100-comment thread!!) over at Home Assistant:

https://community.home-assistant.io/t/face-and-person-detection-with-deepstack-local-and-free/92041

DeepStack is a Docker container with object detection built in, and it is a separate project from Home Assistant. It was actually pretty simple to get running. While I initially tried to get it working within Home Assistant, I had problems copying files and moving them around, so what I ended up doing was building a Python script that monitors the folders on my NFS machine.

So, the NFS server is running, it monitors the "alarm" folders for new files, and each time it gets a new file, it pops it over to the DeepStack container to run object detection.

https://deepstack.cc/ <--- you can go there for some docs and some initial information on the docker and how to call it via a quick python script.
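A minimal sketch of that hand-off from the shell, assuming a DeepStack container listening on localhost; the port, the snapshot path, and the helper name are illustrative (DeepStack's detection endpoint is `/v1/vision/detection`, taking the picture as an `image` form field):

```shell
#!/bin/sh
# Hypothetical helper: POST one alarm snapshot to a local DeepStack
# container and print the JSON detection result. Host/port and file
# path are assumptions; adjust to match your own deployment.
detect_objects() {
  curl -s -X POST \
    -F "image=@$1" \
    http://localhost:5000/v1/vision/detection
}

# Example usage (path is illustrative):
# detect_objects /mnt/WyzeCams/alarm/latest.jpg
```

The JSON response lists each detected object with a label (e.g. "person") and a confidence score, which is what an automation would filter on.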

I don't have any of my scripts on GitHub, but I could probably put them up if someone is interested. I did run into a series of false positives for the first time the day before yesterday. It was very windy outside, so my camera was detecting motion every 5 minutes. It just so happened that my trash can and a shadow looked enough like a "person" that it would fire my person detection every 5 minutes, haha!


This is pretty rudimentary and I know there are issues:

Anyway, those are just some thoughts off the top of my head.

delovelady commented 3 years ago

> Now the part i'm working on / struggling with is getting a "folder_watcher" integration on home assistant to watch the remote "alarm" nfs folder. As an alternative, i've been thinking this afternoon of trying to mount one of my Home Assistant Samba folders in the NFS server so that the alarm jpgs go to the home assistant server instead.

What platform runs your NFS? If it's Unix-like, then the inotify-tools package will get you what you need. I use inotifywait and the inotify development set for C programs extensively on Ubuntu. Works great! If Windows or FreeNAS, you're out of luck, though. Unlike *ix, those OSes don't have the option of creating event conditions when files change. (Such signal stuff has to be built deep into the filesystem logic and accepted by the kernel.)
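A minimal sketch of that approach with inotify-tools; the alarm directory path and the hand-off step are illustrative assumptions (`close_write` fires once the camera has finished writing each JPG, so the file is complete when you pick it up):

```shell
#!/bin/sh
# Hypothetical watcher: block until a new file is fully written into the
# exported alarm directory, then hand its path to the next stage.
# Requires inotify-tools; the directory below is an assumption.
ALARM_DIR=${ALARM_DIR:-/mnt/WyzeCams/alarm}

watch_alarms() {
  inotifywait -m -e close_write --format '%w%f' "$ALARM_DIR" |
  while read -r snapshot; do
    # replace this echo with e.g. a DeepStack call or an HA webhook
    echo "new snapshot: $snapshot"
  done
}

# Example usage (runs until interrupted):
# watch_alarms
```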

delovelady commented 3 years ago

I realize this is six months old, but I didn't see a response that addressed the question dead-on about creating snapshots on an interval. Actually, I've been doing some toying with this, and it turns out that it's fairly simple (as long as your interval is in whole seconds, and no less than 3). If you use the timelapse feature, a record.h264 video will be created immediately and populated with each picture on the fly. ffmpeg can successfully extract each frame into your choice of directories (even though the video's not yet complete) a la:

(In these examples I assume the command is being run in the same directory as the timelapse. Adjust accordingly.)

ffmpeg -i record.h264 image-%04d.jpg (Will create image-0001.jpg, image-0002.jpg, image-0003.jpg, et cetera)

This can also be made a bit smarter, telling ffmpeg to extract only certain frames. Here's one way (extracting frames 4 through 8): ffmpeg -i record.h264 -vf select='between(n\,4\,8)' -vsync 0 img-%02d.jpg (will create img-01.jpg through img-05.jpg)
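If the goal from the top of the thread is a single, constantly overwritten file rather than a numbered sequence, ffmpeg's image2 muxer can do that with `-update 1`. A sketch, run in the timelapse directory; the output path is an illustrative assumption:

```shell
#!/bin/sh
# Hypothetical variant: keep rewriting one JPG so a tool like ZoneMinder
# can poll a fixed filename. -update 1 tells the image2 muxer to
# overwrite the single output file instead of numbering a sequence.
extract_latest() {
  ffmpeg -i record.h264 -vf fps=1 -update 1 -y /media/mmc/output.jpg
}

# Example usage (in the directory containing record.h264):
# extract_latest
```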

Hope that helps.