Hi Beranger,
This project is just an experiment around WebRTC; you are welcome to submit pull requests to improve it.
I never ran performance tests, and obviously the capacity of webrtc-streamer and the browser depends heavily on the streams' bandwidth/encoding and on the hardware used.
If you would like commercial support, it is better to look at janus-gateway, kurento, sourcey, flashphoner or wowza.
Best Regards,
Michel.
Hi. I tried many different methods for the same reason. I found the following solution.
webrtc-streamer uses the VP8 encoder, but VP8 encoding performs poorly without hardware acceleration, so the solution is to bypass the VP8 encoder and deliver H.264 to the web browser instead.
[Environment]
- CPU: Intel Core i7-6700HQ (8 cores)
- OS: Windows 10
- Video: Full HD (1920x1080)
- Camera: Samsung Techwin IP CCTV
[Solution]
This program needs a resolution-change interface. I am using the following method: I applied the code below, driven by mediaConstraints.
int32_t RTSPVideoCapturer::Decoded(webrtc::VideoFrame& decodedImage)
{
    .....
    ....
    webrtc::VideoFrame frame = decodedImage;
    // Rescale the decoded frame when a target resolution is configured and the
    // frame does not already match it.
    if ((m_wanted_width > 0 && m_wanted_height > 0) && (frame.width() != m_wanted_width && frame.height() != m_wanted_height)) {
        int stride_y = m_wanted_width;
        int stride_uv = (m_wanted_width + 1) / 2;
        rtc::scoped_refptr<webrtc::I420Buffer> scaled_buffer = webrtc::I420Buffer::Create(m_wanted_width, m_wanted_height, stride_y, stride_uv, stride_uv);
        // Scale the original I420 buffer into the smaller buffer, then rebuild the
        // frame with the original timestamps and no rotation.
        scaled_buffer->ScaleFrom(*decodedImage.video_frame_buffer()->ToI420());
        frame = webrtc::VideoFrame(scaled_buffer,
                                   decodedImage.timestamp(),
                                   decodedImage.render_time_ms(),
                                   webrtc::kVideoRotation_0);
    }
    //this->OnFrame(decodedImage, decodedImage.width(), decodedImage.height());
    this->OnFrame(frame, frame.width(), frame.height());
    .....
}
Thank you guys for your answers!
I managed to run 16 streams across 4 browser instances on the same computer (because I had to replicate 4 camera streams through 4 instances), which should be equivalent to 16 streams in one browser instance, right?
My browser kind of went crazy, topping out at 90% CPU, but I suppose I can optimize that with the solution you provided. I also suspect it will help server-side, since the webrtc-streamer process won't have to re-encode the H.264 streams into VP8, right? It would just act as a proxy, if I understand correctly.
I will try to find where I have to put the code you provided, sacoku. Thank you very much, I will keep you updated!
I also had a quick question: reading the code, it sounds like the RTSP URLs passed as command-line arguments are only used to generate the HTML page, right? If I use a custom HTML page and feed it a different RTSP link, webrtc-streamer should work the same? Something like the sketch below is what I have in mind.
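This is only a sketch, assuming the WebRtcStreamer helper shipped in the project's html/webrtcstreamer.js (the constructor and connect() signatures may differ between versions), with placeholder host and camera URLs:

// In a custom page containing a <video id="video"> element.
// "http://webrtc-streamer-host:8000" and the RTSP URL are placeholders.
var webRtcServer = new WebRtcStreamer("video", "http://webrtc-streamer-host:8000");
// Negotiate an RTSP URL that was never passed on the webrtc-streamer command line.
webRtcServer.connect("rtsp://my-camera/stream");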
Hi Beranger, there are a lot of different subjects here; I will try to answer some of them:
Ah, I get it. So if I understand correctly, using H.264 all the way would only help the browser because H.264 is easier to decode, but not the server side, which would decode/re-encode no matter what, right? Thank you.
PS: here is a photo of 36 streams working nicely in the browser, with a 960x480 resolution per camera.
Beranger, I am not sure H264 is easier to decode, but it can use a hardware implementation. 36 streams seems good; this is probably not far from the network capacity.
Best Regards,
Michel.
Hi all. Here is how to output H.264 to the web browser. I am currently testing and tuning the code below. As mentioned above, the result of this change is that CPU usage has been reduced from 18% to 4%. If the stream could be passed to the web browser without re-encoding at all, performance would be expected to improve further.
[webrtc-streamer.js]
WebRtcStreamer.prototype.onReceiveGetIceServers = function(iceServers, videourl, audiourl, options, stream) {
    ...
    ..
    // ADD by sunghyun kim for VP8 -> H264: rewrite the SDP so H264 is preferred
    // before the local description is applied.
    sessionDescription.sdp = setupVideoCodec(sessionDescription.sdp, "H264", 90000);
    bind.pc.setLocalDescription(sessionDescription
    ....
    ..
The setupVideoCodec function modifies the video codec part of the SDP (it still needs to be implemented). This function affects the local description.
Best regards.
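To make the idea concrete, here is a minimal sketch of what such a setupVideoCodec helper could look like; this is an illustrative guess rather than sacoku's actual implementation. It collects the payload types whose a=rtpmap entry matches the requested codec and clock rate, then moves them to the front of the m=video line so the browser prefers that codec:

function setupVideoCodec(sdp, codecName, clockRate) {
    var lines = sdp.split("\r\n");
    var preferredPayloads = [];

    // Collect the payload types whose a=rtpmap matches e.g. "H264/90000".
    lines.forEach(function(line) {
        var match = line.match(/^a=rtpmap:(\d+) ([^\/]+)\/(\d+)/);
        if (match && match[2] === codecName && parseInt(match[3], 10) === clockRate) {
            preferredPayloads.push(match[1]);
        }
    });

    // Rewrite the m=video line so the preferred payload types come first.
    return lines.map(function(line) {
        if (line.indexOf("m=video") === 0 && preferredPayloads.length > 0) {
            var parts = line.split(" ");
            var header = parts.slice(0, 3);   // "m=video <port> <proto>"
            var others = parts.slice(3).filter(function(pt) {
                return preferredPayloads.indexOf(pt) === -1;
            });
            return header.concat(preferredPayloads, others).join(" ");
        }
        return line;
    }).join("\r\n");
}

Reordering (rather than deleting) the other payload types keeps the SDP valid while still signalling a preference for H264.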
Hi. I have a question about performance. Currently, I am outputting video from Chrome in H.264 with the code mentioned above. However, since the IP CCTV camera is already sending data in H.264, I want to forward it without decoding and re-encoding. If this is possible, could I ask for the solution? I think this would improve performance. Otherwise, on Windows, I will try using DirectX.
Hi @sacoku, I also thought that it should be possible, but if I understood Michel properly, he stated in a message above that the WebRTC SDK itself uses the I420 format as a pivot, which means that unless the WebRTC SDK implements some sort of flag to specify that we want no processing, it shouldn't be possible? Is this correct @mpromonet?
Quoted from above:
"Until now the WebRTC SDK uses I420 as a pivot format, and thus even if the incoming RTSP stream uses the H264 codec and the browser negotiates H264, it is decoded and re-encoded, maybe with a different quality/resolution. The change of resolution proposed by sacoku is something I had thought about, but I hoped this could be done during the WebRTC negotiation (maybe not?)"
Hi Beranger & Kim,
As of now, the RTSP capturer inherits from cricket::VideoCapturer.
The interface to notify a frame is:
void VideoCapturer::OnFrame(const webrtc::VideoFrame& frame,
                            int orig_width,
                            int orig_height)
It uses a VideoFrame, which does not allow storing SPS/PPS, and this would be needed to forward the stream in H264 format. From video_frame_buffer.h:
// Base class for frame buffers of different types of pixel format and storage.
// The tag in type() indicates how the data is represented, and each type is
// implemented as a subclass. To access the pixel data, call the appropriate
// GetXXX() function, where XXX represents the type. There is also a function
// ToI420() that returns a frame buffer in I420 format, converting from the
// underlying representation if necessary. I420 is the most widely accepted
// format and serves as a fallback for video sinks that can only handle I420,
// e.g. the internal WebRTC software encoders. A special enum value 'kNative' is
// provided for external clients to implement their own frame buffer
// representations, e.g. as textures. The external client can produce such
// native frame buffers from custom video sources, and then cast it back to the
// correct subclass in custom video sinks. The purpose of this is to improve
// performance by providing an optimized path without intermediate conversions.
// Frame metadata such as rotation and timestamp are stored in
// webrtc::VideoFrame, and not here.
This might be possible using a lower-level API and the kNative type; I think you should ask the WebRTC team via https://bugs.chromium.org/p/webrtc/issues/list
Anyway, forwarding H264 frames without re-encoding (like janus-gateway does) has some drawbacks.
IMHO, if you want low latency / low CPU consumption, try janus-gateway; if you want more browser compatibility / bandwidth adaptability, try the WebRTC SDK.
Best Regards, Michel
Hi, I was wondering if you have conducted any performance tests with your system.
For a project of mine, I would have to support up to 16 cameras in a single browser and would like to know whether that is possible with this system. I tried running 16 containers, each with a single RTSP link, and my CPU reached 100% pretty fast on a high-end i7 machine.
Also, I wanted to know if you offer any kind of commercial support should we choose to use this solution (I'm also French, so it would not be too hard to arrange, even as freelancing).