motioneye-project / motioneye

A web frontend for the motion daemon.

Secondary feed from Raspberry Pi camera #1407

Closed · dickh768 closed this issue 5 years ago

dickh768 commented 5 years ago

I have been experimenting recently with an RPi camera and have established that I can stream H.264 HD video from the camera and simultaneously produce a lower-resolution stream which I would like to use for motion detection. I believe I should be able to encode this secondary stream as MJPEG and loop it back through V4L2, then feed that into Motion, but it seems very wasteful of CPU time to force Motion to decode the stream back to raw video just to perform the motion detection. Is there a method to couple the raw video straight into Motion without encoding (or with minimal re-encoding)?

I am using a named pipe to stream the H.264 to ffmpeg for the primary stream; it seems plausible that one could also use a named pipe to send the raw video to Motion.
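For reference, the current plumbing looks roughly like the sketch below. The pipe path, resolution and ffmpeg arguments are all assumptions for illustration, not taken from this thread:

```python
# Sketch: hi-res H.264 pushed through a named pipe into ffmpeg.
import os
import subprocess
import picamera

FIFO = '/tmp/cam_hires.fifo'            # hypothetical pipe location
if not os.path.exists(FIFO):
    os.mkfifo(FIFO)

# Start the reader first: opening a FIFO for writing blocks until
# something opens it for reading.
ffmpeg = subprocess.Popen([
    'ffmpeg', '-f', 'h264', '-framerate', '30', '-i', FIFO,
    '-c:v', 'copy', '-f', 'mpegts', 'udp://127.0.0.1:5000',  # example sink
])

with picamera.PiCamera(resolution=(1920, 1080), framerate=30) as camera:
    with open(FIFO, 'wb') as pipe:
        camera.start_recording(pipe, format='h264', splitter_port=1)
        camera.wait_recording(60)       # stream for a minute as a demo
        camera.stop_recording(splitter_port=1)

ffmpeg.terminate()
```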

Are there any other options?

jasaw commented 5 years ago

@dickh768 Just out of curiosity, how are you getting simultaneous streams from the RPi camera? I'm looking at doing the same thing.

For your secondary motion-detection stream, you could output the "raw" video (actually YUV format) to a v4l2 interface and configure motion to use that v4l2 device for motion detection. As for the primary high-res stream, maybe output it to another v4l2 interface? I'm not sure whether motion can do pass-through recording from a v4l2 interface. Alternatively, you could use motion's extpipe option, which runs ffmpeg to record from the v4l2 interface.
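A minimal sketch of that loopback idea, assuming the v4l2loopback kernel module is loaded and exposes /dev/video1 (the device number, resolutions and ffmpeg arguments here are assumptions):

```python
# Sketch: low-res YUV420 frames pushed onto a v4l2loopback device so that
# motion can read it like an ordinary camera.
import subprocess
import picamera

LOOPBACK = '/dev/video1'    # assumed v4l2loopback device
W, H = 640, 480
FPS = 30                    # must match the camera framerate below

# ffmpeg relays raw frames from stdin onto the loopback device.
ffmpeg = subprocess.Popen([
    'ffmpeg',
    '-f', 'rawvideo', '-pix_fmt', 'yuv420p',
    '-s', '%dx%d' % (W, H), '-r', str(FPS), '-i', '-',
    '-f', 'v4l2', LOOPBACK,
], stdin=subprocess.PIPE)

with picamera.PiCamera(resolution=(1920, 1080), framerate=FPS) as camera:
    # splitter_port=2 leaves port 1 free for the hi-res H.264 recording.
    camera.start_recording(ffmpeg.stdin, format='yuv',
                           splitter_port=2, resize=(W, H))
    camera.wait_recording(60, splitter_port=2)
    camera.stop_recording(splitter_port=2)

ffmpeg.stdin.close()
ffmpeg.wait()
```

motion would then be pointed at the loopback device (e.g. `videodevice /dev/video1` in the camera's config), so it receives raw frames without any decode step.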

dickh768 commented 5 years ago

@jasaw I was also wondering whether YUV would indeed work (although I could find no reference in the docs). Before trying that out, I want to get a clean build on my RPi and install motionEye with a separate web server - it may be a few days before I get there...

As for the multiple outputs, I have been following the recipes for the mmalobj API from the official picamera documentation: https://picamera.readthedocs.io/en/release-1.13/. This gives you lower-level access to the various components so you can construct your own pipelines between the camera sensor and the outputs; the main useful feature for me is being able to attach several outputs, at different resolutions, to the same sensor.
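(Not from the thread, just for illustration: the two-simultaneous-streams part can also be done from picamera's high-level API via splitter ports, without dropping down to mmalobj. A minimal sketch, with file names assumed:)

```python
# Sketch: two simultaneous recordings at different resolutions using the
# high-level picamera API and its splitter ports.
import picamera

with picamera.PiCamera(resolution=(1920, 1080), framerate=30) as camera:
    camera.start_recording('hires.h264', splitter_port=1)
    camera.start_recording('lores.h264', splitter_port=2,
                           resize=(640, 480))
    camera.wait_recording(30)           # record 30 seconds as a demo
    camera.stop_recording(splitter_port=2)
    camera.stop_recording(splitter_port=1)
```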

As proof of concept, I managed to record two simultaneous hi-res/lo-res copies to two H.264 files without too much difficulty. I also managed to feed the hi-res stream to ffmpeg for streaming using a named pipe.

My next step was to find a good motion detection solution, and motionEye seemed to fit the bill. Clearly motionEye will only allow viewing and recording from the low-res stream, but I am hoping to be able to use the script triggers to generate hi-res clips.

To do this I am using ffmpeg to generate HLS streaming rather than RTSP. This breaks the video down into short segments, stores them for a period, and creates a rolling playlist for use by the viewer. Logically it should be possible to use a script to re-assemble the segments from before/after a motion trigger.
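A hedged sketch of that reassembly step, using ffmpeg's concat demuxer and assuming the HLS muxer writes fixed-length .ts segments to a known directory (the directory, window lengths and selection-by-mtime heuristic are all assumptions):

```python
# Sketch: stitch the HLS segments around a motion event into one clip.
import glob
import os
import subprocess
import tempfile
import time

SEG_DIR = '/var/lib/hls'      # hypothetical segment directory
PRE, POST = 10, 20            # seconds to keep before/after the trigger

def save_event_clip(trigger_time, out_path):
    # Select segments whose modification time falls inside the window.
    segs = sorted(
        f for f in glob.glob(os.path.join(SEG_DIR, '*.ts'))
        if trigger_time - PRE <= os.path.getmtime(f) <= trigger_time + POST
    )
    if not segs:
        return
    # Build a concat list file and let ffmpeg copy the streams unchanged.
    with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as lst:
        for seg in segs:
            lst.write("file '%s'\n" % seg)
        list_path = lst.name
    subprocess.check_call([
        'ffmpeg', '-f', 'concat', '-safe', '0',
        '-i', list_path, '-c', 'copy', out_path,
    ])
    os.unlink(list_path)

# e.g. from a motion-detected hook, once the post-event segments exist:
# save_event_clip(time.time(), '/tmp/event.mp4')
```

In practice the script would need to run a little after the trigger (e.g. launched with a delay from motionEye's motion-detected command hook), so that the post-event segments have already been written.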