ariselseng opened this issue 8 years ago
I don't quite understand the whole idea behind 'representation'. Can you provide some explanation?
It's the DASH equivalent of hls_variant, giving us adaptive streaming.
If it's like http://nginx-rtmp.blogspot.ru/2013/07/hls-variant-playlist.html and it works, then it's very interesting.
Agreed, this could be merged here.
I suggest renaming the dash_representation option to dash_variant so it corresponds to the HLS options.
Hello, has this feature been integrated yet?
Not yet.
Hi folks! ( @sergey-dryabzhinsky and @cowai )
GitHub is not sending me email notifications unfortunately so I missed these notes. There are some optimizations needed to make this more efficient at building out the DASH representations, and I'd need to merge in recent changes (this PR is a bit old). I'd be happy to change "dash_representation" to "dash_variant" if that's more intuitive to folks as well.
I know @stephenbasile had some thoughts on the optimization front, I'll need to re-connect with him to make sure those are captured.
@joshmarshall Thank you for the response!
+1 for this! Any movement so far? Thanks
This would be awesome, +1.
:+1: @joshmarshall: It would be great if you could fix the nested DASH playlist first.
+1
I worked on adapting https://github.com/joshmarshall/nginx-rtmp-module/pull/1 to the (almost) current dev branch. I need that capability for my project.
My current work is here (it's messy and doesn't compile, but it's probably not complicated to fix; I need some free time to do it myself): https://github.com/MoffTigriss/nginx-rtmp-module/tree/work-dash-5
If someone wants to redo the full work, it's not complicated: the most complex task is converting the DASH template to a more granular one, and using dash_template.h (which is a really good idea).
I think there is a big limiting factor with all of the "DASH representation", "HLS variant", and "RTMP dynamic streaming" features. (I think you could add this as a limitation to "MP4-SVC" as well.) The problem (assuming I understand the documentation) is that to do it correctly, the different spatial versions of the same encoding all need to have the same I-P-B frame sequence, and for the segmented formats they all need to be segmented the same way. (Maybe the P and B frame types do not need to match, but the I-frames do.) The only way I know of doing this is to lock down the frame-type decisions to a static sequence, with a static segment size. This really sucks, because it disables a lot of optimizations for both size and quality that come from allowing the encoder to decide frame types according to content (for instance, inserting keyframes on scene changes). But without a locked-down frame-type sequence, there is no way to be sure that the different spatial variants produced by the encoder will all have keyframes and segmentation in the same places.
I assume most people using this are encoding with x264, and what is needed is for x264 to provide master-slave IPC between different encoding processes, so that the frame-type decisions of the master encoder process can be copied by the slave encoder processes.
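To make that concrete, here is a minimal sketch of what locking down the frame types looks like with nginx-rtmp's exec and ffmpeg/libx264 (the application name, resolutions, bitrates, and keyint value are illustrative placeholders, not anything from this module):

application ingest {
    live on;
    # Both renditions use the same fixed GOP (keyint == min-keyint) with
    # scene-cut detection disabled, so IDR frames land on identical
    # timestamps and the segmenter can cut every variant at the same points.
    exec ffmpeg -i rtmp://127.0.0.1/$app/$name
        -map 0:0 -map 0:1 -s 1280x720 -c:v libx264 -b:v 2500k -x264opts keyint=60:min-keyint=60:scenecut=0 -c:a aac -b:a 128k -f flv rtmp://127.0.0.1/dash/$name_hi
        -map 0:0 -map 0:1 -s 640x360 -c:v libx264 -b:v 800k -x264opts keyint=60:min-keyint=60:scenecut=0 -c:a aac -b:a 128k -f flv rtmp://127.0.0.1/dash/$name_low;
}

At 30 fps, keyint=60 puts a keyframe exactly every 2 seconds in both outputs, so the fragment length just has to be a multiple of that.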
I was worried about the same issues ten years ago when I was using multi-bitrate Windows Media streaming. But since then, bandwidth has gotten dirt cheap.
Sure, using a fixed GOP size is suboptimal in terms of bitrate allocation (and for Apple-compliant HLS, a fixed frame count per segment is expected anyway). Suboptimal doesn't mean it sucks badly, and it's easy to force I-frame placement without even disabling the scene-detection algorithms. So, for example, you can have multiple I-frames within 20-30 second long segments. This way, however, you force the client to cache a lot of supposedly unneeded data, and streaming usually starts only after the first 3 segments are fully downloaded, which might hurt mobile users' data plans more than shorter, less optimally encoded segments would.
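For instance, ffmpeg's -force_key_frames can pin keyframes at fixed timestamps while leaving scene detection enabled (an illustrative invocation, not taken from any config in this thread):

# Force an IDR frame every 2 seconds of media time; x264 may still
# insert extra I-frames on scene changes between the forced ones.
ffmpeg -i input.flv \
    -c:v libx264 -force_key_frames "expr:gte(t,n_forced*2)" \
    -c:a aac -f flv rtmp://127.0.0.1/dash/mystream

Segment boundaries then stay aligned across renditions as long as every rendition forces the same timestamps.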
Everyone does multi-bitrate streaming these days and it just works. That's why we too should have DASH variants working :)
Matt
Funny, I have the same need for variants in my DASH live stream. I quickly hacked something working this afternoon (a very static version). I will review your work tomorrow, MoffTigriss, to make something more robust and configurable.
ut0mt8: nice! If you want a clean base, look at this branch instead: https://github.com/MoffTigriss/nginx-rtmp-module/tree/work-dash-4. I f*cked up the manual merging with the last series of patches from Sergey in work-dash-5 (the logic for the time variables changed, and the template too, and I didn't have enough time to study the implications).
Hi folks --
Sorry for the long silence on this. @MoffTigriss I'm happy to help (extra pair of eyes, etc) since it looks like you are taking ownership of the feature! I can close my PR and you can open one so we can start commenting on that, if you'd like?
Well, I will try to post my solution tomorrow, but my implementation is far simpler. The idea is just to write a custom mpd in addition to the regular ones. Example:
application livestream {
live on;
dash on;
dash_path /dev/shm/dash;
dash_nested off;
dash_variant _low BANDWIDTH=160000;
dash_variant _mid BANDWIDTH=320000;
dash_variant _hi BANDWIDTH=640000;
}
So we receive 3 streams (livestream_low, livestream_mid, livestream_hi), write the 3 manifests normally, and write an extra manifest, livestream.mpd, which contains all the variants.
This is a bit tricky but should work. The only point is to correctly match the context name to avoid writing the extra mpd three times.
Btw, nested DASH will need extra effort, but who cares?
@joshmarshall That would be a good thing! I need to finish the adaptation to the current dev branch first (unless someone else wants to do it), but yeah, an updated PR is a good idea.
@ut0mt8 Is concatenating the manifest descriptions from the different simultaneous encodings enough to get a multi-bitrate manifest?
Not exactly, but something like this is sufficient:
...
<AdaptationSet
mimeType="video/mp4"
startWithSAP="1"
segmentAlignment="true">
<Representation
id="live_2_video_1"
mimeType="video/mp4"
codecs="avc1.42c01e"
width="640"
height="360"
bandwidth="500000">
<SegmentTemplate
presentationTimeOffset="0"
timescale="1000"
media="live_1-$Time$.m4v"
initialization="live_1-init.m4v">
<SegmentTimeline>
<S t="7713322" d="6120"/>
</SegmentTimeline>
</SegmentTemplate>
</Representation>
<Representation
id="live_2_video_2"
mimeType="video/mp4"
codecs="avc1.42c01e"
width="640"
height="360"
bandwidth="250000">
<SegmentTemplate
presentationTimeOffset="0"
timescale="1000"
media="live_2-$Time$.m4v"
initialization="live_2-init.m4v">
<SegmentTimeline>
<S t="7713322" d="6120"/>
</SegmentTimeline>
</SegmentTemplate>
</Representation>
</AdaptationSet>
This can be easily generated from one or another of the real streams (assuming we insert static bandwidth and width/height values).
Work in progress at home. I think I will have something clean to show next week.
So you can check my version at https://github.com/ut0mt8/nginx-rtmp-module/ which is basically working.
A live demo stream is available at http://54.93.218.190/dash/live.mpd. It works in my tests with Shaka Player and dash.js, which was quite sufficient for me.
Implementation note: this is principally copy/paste from the HLS variant code. It works the same way, using the nginx configuration file to construct the variant manifest.
Some things can easily be fixed: the dash nested support is broken for now, but it should be trivial to add. Side question: do we need multiple audio representations? (That would need more variables, parsing, ...)
Note: a better way to construct the variant manifest would be to store/share configuration between the different streams (using a file, like in the initial attempt, is one idea, but I don't really like it). Is there another way?
Ah, and any review / extra pair of eyes is welcome.
Btw, I fixed the dash_nested option in a way on my variant fork. It was clean, but it did not separate the variant mpd, which was at the root of the path. Now it seems clean to me.
@ut0mt8 Your fork is working perfectly for me so far! I haven't fully tested everything yet, but it's working well with the dash.js v2.4.1 reference player. I'm using 500 kbps 640x360 _low, 1500 kbps 1280x720 _med, and 5000 kbps 1920x1080 _high streams.
Adaptive streaming is a required feature for me since many of my viewers have awful internet connections (it's the way things are in rural USA), so being able to auto-switch bitrates is awesome.
Thanks. When I find some time, I still have to find a better way to create the multi-bitrate manifest. Currently I'm digging into the code to handle ad insertion from RTMP metadata into DASH, and I'm still working on implementing Common Encryption (but this is not as easy as I figured at the beginning).
Hello,
for me, audio embedding from the sub-mpd into master.mpd does not seem to work. It is inside my master_hi.mpd but not in master.mpd.
In the first moments after start it works in both the sub-mpd and the merged mpd, but after some seconds the audio representation gets lost in the merged mpd. Any ideas?
application ingest {
live on;
exec ffmpeg -i rtmp://127.0.0.1/$app/$name
    -map 0:0 -map 0:1 -s 1280x720 -c:v libx264 -b:v 3284k -preset medium -x264opts keyint=75:scenecut=-1 -c:a aac -b:a 192k -bufsize 3800k -f flv rtmp://127.0.0.1/dash/$name_hi
    -map 0:0 -s 960x540 -c:v libx264 -b:v 1800k -preset medium -x264opts keyint=75:scenecut=-1 -bufsize 3000k -f flv rtmp://127.0.0.1/dash/$name_med
    -map 0:0 -s 640x360 -c:v libx264 -b:v 1024k -preset medium -x264opts keyint=75:scenecut=-1 -bufsize 1500k -f flv rtmp://127.0.0.1/dash/$name_low;
}
application dash {
live on;
dash on;
dash_nested off;
dash_path /home/fhe/nginx-webroot/mpeg_dash;
dash_fragment 6;
dash_playlist_length 120;
dash_cleanup on;
dash_variant _hi bandwidth="3584000" width="1280" height="720";
dash_variant _med bandwidth="1800000" width="960" height="540";
dash_variant _low bandwidth="1240000" width="640" height="360";
}
Hmm, interesting. Could you provide all the mpd files from when it failed? This is strange, because there is no variant for audio; it is just copied as-is from the last written sub-manifest.
As a note, the method used to construct the "merged" manifest is rather hacky. It basically gets all the variants from the config file and writes the same thing X times.
It's not so easy, as each stream lives in a completely independent process. Perhaps some shared memory could help, but how can we be sure that all the mpds are present?
OK, I've captured every write with inotify and copied the files with timestamps to a different folder. And you are correct: the new .mpd is written at the same time as low.mpd and mid.mpd, but hi.mpd, with the audio, comes 2 seconds earlier (or 3 seconds later). (pass1_.mpd in the zip)
When I encode audio not in hi.mpd but in low.mpd (i.e. in the last ffmpeg argument), all 4 .mpd files are written at the same time, but always without audio. (pass3_.mpd in the zip)
And I also tried to encode and transfer the audio unmuxed, separate from any video stream. (sep_.mpd in the zip)
application dash {
live on;
dash on;
dash_nested off; # this works but does not separate the variant mpd
dash_path /home/fhe/nginx-webroot/mpeg_dash;
dash_fragment 6; # 2 seconds is generally a good choice for live
dash_playlist_length 120; # keep 240s of tail
dash_cleanup on;
dash_variant _hi bandwidth="3584000" width="1280" height="720";
dash_variant _med bandwidth="1800000" width="960" height="540";
dash_variant _low bandwidth="1024000" width="640" height="360";
dash_variant _aud bandwidth="128";
}
OK, got it. The problem is that you have audio in only one representation. This is not supported as-is. The simplest solution is to generate audio for all representations at the same quality; this is not a big overhead.
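For example, the exec line from your ingest application could carry the same AAC track into every output, something like this (a sketch only; the video options are copied from your config above):

exec ffmpeg -i rtmp://127.0.0.1/$app/$name
    -map 0:0 -map 0:1 -s 1280x720 -c:v libx264 -b:v 3284k -x264opts keyint=75:scenecut=-1 -c:a aac -b:a 128k -f flv rtmp://127.0.0.1/dash/$name_hi
    -map 0:0 -map 0:1 -s 960x540 -c:v libx264 -b:v 1800k -x264opts keyint=75:scenecut=-1 -c:a aac -b:a 128k -f flv rtmp://127.0.0.1/dash/$name_med
    -map 0:0 -map 0:1 -s 640x360 -c:v libx264 -b:v 1024k -x264opts keyint=75:scenecut=-1 -c:a aac -b:a 128k -f flv rtmp://127.0.0.1/dash/$name_low;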
I definitely need to think about something more robust.
Oh OK, thanks. Yes, it is working now with the different audio files every time the mpd gets updated.
But big thanks for the support, and especially for implementing this. With that and inotify I can use nginx to push DASH to Akamai with open-source software 👍
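A rough sketch of such a push loop (not my exact script; inotifywait comes from the inotify-tools package, and the ingest URL is a placeholder):

# Watch the DASH output directory and upload every finished file.
inotifywait -m -r -e close_write --format '%w%f' /home/fhe/nginx-webroot/mpeg_dash \
| while read f; do
    curl -s -X PUT --data-binary "@$f" "http://ingest.example.com/dash/$(basename "$f")"
done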
We have the same challenge here: having a sufficiently production-ready DASH implementation to serve from our origin. The goal is to be independent of any CDN solution.
Related to this, I located another issue. When I validate the mpd with http://www-itec.uni-klu.ac.at/dash/?page_id=605, the validator says that the max width and height of the AdaptationSet are not in the range of the largest representation's resolution.
It seems that these values also change with whichever variant mpd was written last.
Maybe we should move the issues and discussion to your fork?
This may happen. For example, if you have 3 representations with 3 different sizes, the variant mpd is written 3 times in a short period, after the update of each representation's mpd. If you are unlucky, you may fetch the variant that was written by the smallest representation, and so the max width and height are obviously wrong. I can make a quick fix for this, using a keyword in the configuration specifying a "master" representation (the largest one) and then writing the variant mpd only once.
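Something like this is what I have in mind for the syntax (purely illustrative; the trailing master flag is hypothetical for now):

dash_variant _hi  bandwidth="3584000" width="1280" height="720" master;
dash_variant _med bandwidth="1800000" width="960" height="540";
dash_variant _low bandwidth="1024000" width="640" height="360";

Only the stream matching the master suffix would write the merged mpd, so the AdaptationSet's maxWidth/maxHeight would always come from the largest representation.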
We can certainly talk about this on my GitHub fork (I just added the issue to the project).
@Bond246 Could you check my last revision? It should fix this particular issue. (Note that the configuration syntax changed a bit.)
Now it is valid :-) really nice
Thanks, but I still have to rewrite this code more cleanly.
Any plans for a PR? This fixes quite a few problems.
@ReubenM Sure, but I think I first need to rewrite the logic before a PR. I have to find some time to do so, but as an "Infrastructure Manager" I have so much mess to deal with :/ Ah, and on nginx-rtmp specifically, I'm fighting a timestamp bug (from my Elemental live server).
OK, I have another, better version on my GitHub branch. This version has the advantage of working in all situations :-) (segment timelines can differ between representations, which is very often the case).
This is still a weird implementation; I basically write each segment timeline to a separate file, and read them back from the "maximum" quality representation to write the variant manifest. It is very similar to @MoffTigriss's implementation.
I don't think it is ready for merge; the real solution is to use shared memory. I have to dig into how to use it in nginx (I know the basics, but I have to find a good structure).
Hi, is there any interest in this? https://github.com/joshmarshall/nginx-rtmp-module/pull/1