motioneye-project / motioneyeos

A Video Surveillance OS For Single-board Computers

FNC RTSP: replace gstreamer with v4l2multi_stream_mmal (potentially) #2111

Open jasaw opened 5 years ago

jasaw commented 5 years ago

This issue is for discussing how we go about replacing the gstreamer-based Fast Network Camera (FNC) RTSP implementation with the much more efficient, lower-latency v4l2multi_stream_mmal that I hacked together.

Original discussion started here: https://github.com/ccrisan/motioneyeos/pull/2015

jasaw commented 5 years ago

@popoviciri Thank you for helping out. :-) My plan is to replace gstreamer with this multi_streamer, but we still have a lot of things on the road map (in no particular order):

I have most definitely missed a few things, but that's the gist of it.

I'm not across all the AWB (auto white balance) issues that various raspi components introduced recently, so I'll let you handle that. :-)

jasaw commented 5 years ago

@popoviciri I have just added support for fragmented frames. When an H264-encoded frame gets too big, the GPU fragments the frame, so the code needs to handle it. Unfortunately, I had to modify libv4l2cpp (also called v4l2wrapper) to do two things:

What this means is, we now have to pull in my forked libv4l2cpp: https://github.com/jasaw/libv4l2cpp/tree/partial_write
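
As a rough illustration of what fragmented-frame handling involves, here is a minimal, hypothetical C++ sketch. The partial-write method names echo the symbols that show up in the build log later in this thread (startPartialWrite / endPartialWrite), but the PartialWriteDevice stand-in, the write() signature and the callback wiring are invented for illustration; this is not the actual v4l2multi_stream_mmal or libv4l2cpp code. The one solid anchor is MMAL_BUFFER_HEADER_FLAG_FRAME_END, which is how MMAL marks the buffer that completes an encoded frame: an oversized frame arrives as several buffers with the flag set only on the last one.

// Hypothetical sketch of fragmented H264 frame handling; not the real code.
#include <cstddef>
#include <interface/mmal/mmal.h> // Raspberry Pi userland: MMAL buffer headers

// Stand-in for the forked wrapper's output device (stubbed so this compiles).
struct PartialWriteDevice {
    bool startPartialWrite() { return true; }  // begin a multi-buffer frame
    std::size_t write(char* data, std::size_t len) { (void)data; return len; } // queue one fragment
    bool endPartialWrite() { return true; }    // commit the reassembled frame
};

// Called once per buffer coming out of the H264 encoder port.
void on_encoder_buffer(PartialWriteDevice& dev, MMAL_BUFFER_HEADER_T* buf, bool& in_frame)
{
    mmal_buffer_header_mem_lock(buf); // make buf->data safe to touch from the ARM side

    if (!in_frame) {
        dev.startPartialWrite(); // first fragment of a (possibly multi-buffer) frame
        in_frame = true;
    }

    dev.write(reinterpret_cast<char*>(buf->data + buf->offset), buf->length);

    // The GPU sets FRAME_END only on the buffer that completes the frame; a
    // fragmented frame arrives as several buffers without this flag.
    if (buf->flags & MMAL_BUFFER_HEADER_FLAG_FRAME_END) {
        dev.endPartialWrite();
        in_frame = false;
    }

    mmal_buffer_header_mem_unlock(buf);
}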

popoviciri commented 5 years ago

@jasaw, sounds good to me. That's the beauty of the Unlicense: fork and use! I'll give it a go and check the logs.

In the meantime, I have modified my local streameyectl.py and streameye.sh to do what made sense to me. In my setup, I keep the plain old-school raspimjpeg as an option. This is how the frontend looks:

[screenshot of the frontend settings]

I know you mentioned above that you want to replace the existing raspimjpeg and gstreamer with the state-of-the-art v4l2multi_stream_mmal, but perhaps users still want to use it "as it was". So the "streaming protocol" setting still allows the user to switch between MJPEG (raspimjpeg) and RTSP (v4l2rtspserver), and the small "enable the MJPEG stream" toggle enables your full-blown v4l2multi_stream_mmal app. If the toggle is off, then v4l2rtspserver streams directly from /dev/video0 (or whatever is specified in raspimjpeg.conf).

Obviously you're going to need buy-in from @ccrisan for whatever you choose, but until then, which one do you prefer? If you're happy with the above, I can quickly push these to the existing PR, so we can still use it. Cheers!

popoviciri commented 5 years ago

Having this implemented in thingos will require some creativity. I just had to:

Perhaps you can update your forks and re-add the submodule there, then I can use your forks directly. Also, would you care to maintain those for the long run? The good news is that motioneyeos builds with all of this and streams RTSP + MJPEG!

jasaw commented 5 years ago

@popoviciri On the web front-end, I don't think we need the "enable the MJPEG stream" toggle, because the MJPEG stream is always enabled: the web front-end needs it to display the camera feed.

After giving it some thought, it's probably better to keep raspimjpeg. The reason is that the H264 hardware encoder on the RPi has a hard limit of 1080p resolution. To go above that, users have to switch to MJPEG, which means they can't use v4l2multi_stream_mmal because of various hardware restrictions on the GPU.

Basically, I think the web front-end only needs your MJPEG resolution, framerate and bitrate settings added to the RTSP mode.

As for the v4l2wrapper, I have updated the submodule in the mmal-multi-stream branch of my v4l2tools fork. v4l2rtspserver does not need my modified v4l2wrapper, so we can leave it as is.

About maintenance in the long run, I'll have to talk to mpromonet about merging my changes upstream and see how we go. If I can't merge upstream, I'll consider creating a new project for v4l2multi_stream_mmal. At this stage, v4l2wrapper is the only component (from mpromonet) that v4l2multi_stream_mmal requires, and even that requires modifications. The other problem is that the v4l2tools makefile is not cross-compile friendly; I remember having to heavily patch it last time in order to cross-compile.

popoviciri commented 5 years ago

It's easier to remove stuff than to add it. So when you're ready to implement this, I'll push the updates to the existing PR for review.

Regarding the build, I still need to link your v4l2wrapper into v4l2rtspserver, otherwise motioneyeos will not build:

libv4l2wrapper.a(V4l2Capture.cpp.o): In function `V4l2Capture::V4l2Capture(V4l2Device*)':
V4l2Capture.cpp:(.text+0x194): undefined reference to `V4l2Access::V4l2Access(V4l2Device*)'
V4l2Capture.cpp:(.text+0x1b4): undefined reference to `vtable for V4l2Capture'
libv4l2wrapper.a(V4l2MmapDevice.cpp.o):(.rodata+0x160): undefined reference to `V4l2MmapDevice::startPartialWrite()'
libv4l2wrapper.a(V4l2MmapDevice.cpp.o):(.rodata+0x164): undefined reference to `V4l2MmapDevice::writePartialInternal(char*, unsigned int)'
libv4l2wrapper.a(V4l2MmapDevice.cpp.o):(.rodata+0x168): undefined reference to `V4l2MmapDevice::endPartialWrite()'
libv4l2wrapper.a(V4l2Output.cpp.o): In function `V4l2Output::V4l2Output(V4l2Device*)':
V4l2Output.cpp:(.text+0x194): undefined reference to `V4l2Access::V4l2Access(V4l2Device*)'
V4l2Output.cpp:(.text+0x1b4): undefined reference to `vtable for V4l2Output'
collect2: error: ld returned 1 exit status
make[3]: *** [CMakeFiles/v4l2rtspserver.dir/build.make:237: v4l2rtspserver] Error 1
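
Those errors are the classic signature of compiling against headers that declare virtual methods whose definitions are missing from the linked library; here it suggests the bundled libv4l2wrapper.a was built without the forked partial-write sources. A tiny, self-contained repro of the symptom (it deliberately fails to link; all names here are made up and unrelated to the real sources):

// device.h (hypothetical): the header the application compiles against.
struct Device {
    virtual ~Device() {}
    virtual bool startPartialWrite(); // declared here, defined in device.cpp
};

// main.cpp: if device.cpp was never compiled into the library we link
// against, `g++ main.cpp` fails with:
//   undefined reference to `Device::startPartialWrite()'
//   undefined reference to `vtable for Device'
// because the vtable is emitted alongside the class's key function (its
// first non-inline virtual), which is exactly the missing definition.
int main()
{
    Device d; // constructing Device references its vtable
    (void)d;
    return 0;
}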

Oh, I am aware of your huge patch for cross-compiling v4l2tools!! That would not be easy to maintain long-term as a patch.

jasaw commented 5 years ago

@popoviciri I've updated the v4l2wrapper submodule in the forked_wrapper branch of my v4l2rtspserver fork.

As for the cross-compile unfriendliness, I'll need to spend some time on it to come up with a fix that I can push upstream, but it's not my priority at the moment.

jasaw commented 5 years ago

I've just tested audio with v4l2rtspserver and I'm happy to report that it's working. I will leave it running for a few hours to see if there's any latency build-up issue.

jasaw commented 5 years ago

I've been running v4l2multi_stream_mmal + v4l2rtspserver + audio for 8 hours straight and there's no latency build-up. 👍

jasaw commented 5 years ago

I've also checked the GPU memory usage. Running 1280x720 @ 25 fps uses about 206 MB of GPU memory, so even the original Raspberry Pi with 512 MB of RAM can run this (as long as the GPU/ARM memory split, set via gpu_mem in /boot/config.txt, reserves enough for the GPU).

popoviciri commented 5 years ago

Hi @jasaw, I actually resurrected the first two RPis I had and put them to good use once again. For almost a week now, I have been streaming the full 1296x972 sensor mode of the v1 cameras at 25 fps and 5 Mb/s. Total GPU memory use is 60 MB and the CPU idles at around 41% of the 700 MHz core. MJPEG is 320x240 at 2 fps, which is perfect for motion detection. I also did not have ONE crash or any detectable memory leak. This is awesome! Thanks again!! I need to find some USB mic to use the sound option though. Which mic are you using, by the way?

jasaw commented 5 years ago

@popoviciri Great to hear that our new RTSP FNC is stable.

As for the mic, I've tested with the mic built into my Logitech webcam. Great audio quality, but very expensive. I've also tested a cheapo $2 USB mic, but I don't like it: the volume is too soft and there's no option to amplify it. I bought it out of curiosity. If you know of any cheap and good USB mic, please share. :-)

I'm curious whether a USB conference mic would be better. I found this on eBay with these specs:

Model: iTalk-02
Polar pattern: omnidirectional
Shell material: aluminum alloy + metal mesh cover
Mic dimensions: D 70 mm x H 15 mm
Sensitivity: -47 +/- 3 dB
Frequency response range: 100-16000 Hz
SPL: 110 dB
S/N ratio: > 58 dB
Pick-up distance: 2 m up to 3 m (6.5 ft ~ 10 ft)
Interface type: USB 2.0
Picks up voice from a 360° direction

[photo of the USB conference mic]

The advertised sensitivity is great, but I'm skeptical. My cheapo $2 USB mic has an advertised sensitivity of -67 dB, but it's more like -10 dB.
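
For a rough sense of scale when comparing those figures (assuming both are quoted on the same reference scale, e.g. dBV/Pa), mic sensitivity in dB is a logarithmic voltage ratio:

difference     = -47 dB - (-67 dB) = 20 dB
voltage ratio  = 10^(20/20)        = 10x

So, taken at face value, the conference mic should output roughly ten times the signal level of the $2 mic for the same sound pressure, if the advertised numbers can be trusted.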