cta-wave / Test-Content

Collects information CTA Test Content
BSD 3-Clause "New" or "Revised" License

Can't reference chunks in chunked content #41

Closed FritzHeiden closed 1 year ago

FritzHeiden commented 2 years ago

With the current chunked content it is not possible to reference the individual chunks, which makes it impossible to perform the stimulus of the chunked content tests (e.g. loading chunks in random order).

Older chunked content has individual URLs for each chunk by using $SubNumber$ in the MPD.

[screenshot] See http://dash.akamaized.net/WAVE/ContentModel/SinglePeriod/Chunked/ToS_MultiRate_chunked.mpd
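For reference, the $SubNumber$ addressing scheme can be sketched in a few lines; the template string, segment number and chunk count below are illustrative, not taken from the actual MPD.

```python
# Sketch: expanding a DASH SegmentTemplate that uses $Number$ and
# $SubNumber$ into per-chunk URLs (template and counts are illustrative).

def expand_chunk_urls(template, segment_number, chunks_per_segment):
    """Return the addressable URL of every chunk in one segment.

    Per DASH, $SubNumber$ is replaced with the chunk's index within
    the segment sequence, starting at 1.
    """
    urls = []
    for sub in range(1, chunks_per_segment + 1):
        url = template.replace("$Number$", str(segment_number))
        url = url.replace("$SubNumber$", str(sub))
        urls.append(url)
    return urls

urls = expand_chunk_urls("video/$Number$_$SubNumber$.m4s", 3, 5)
print(urls[0])   # video/3_1.m4s
print(urls[-1])  # video/3_5.m4s
```

This is what gives each chunk its own URL, which the issue identifies as the missing piece in the current chunked content.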

jpiesing commented 2 years ago

@rbouqueau I think this is one for you? It seems to be blocking 4 tests - the largest number of any GitHub issue.

rbouqueau commented 2 years ago

@FritzHeiden What do you mean by "older chunked content"? Do you still have a link?

FritzHeiden commented 2 years ago

@FritzHeiden What do you mean by "older chunked content"? Do you still have a link?

I am sorry this is not clear. The "older chunked content" is the one I provided a screenshot of and a link to in the original post:

[screenshot] See http://dash.akamaized.net/WAVE/ContentModel/SinglePeriod/Chunked/ToS_MultiRate_chunked.mpd

rbouqueau commented 2 years ago

Ok, so this seems to be referring to content for sections 8.6 and 8.7. I haven't generated content explicitly for these sections. The script for generating this content doesn't seem to be available, and GPAC has never been able to handle SubNumber, so I guess this is manually modified content.

If anyone knows anything about this content (e.g. who did this?), please let me know. Otherwise I'll have a look at how to generate this manually.

jpiesing commented 2 years ago

Ok, so this seems to be referring to content for sections 8.6 and 8.7. I haven't generated content explicitly for these sections. The script for generating this content doesn't seem to be available, and GPAC has never been able to handle SubNumber, so I guess this is manually modified content.

If anyone knows anything about this content (e.g. who did this?), please let me know. Otherwise I'll have a look at how to generate this manually.

I'm not aware of anyone other than you encoding content in WAVE. I wonder if this was something Fraunhofer had lying around somewhere? @FritzHeiden @louaybassbouss please can you try to think where this content could have come from, as it's not from @rbouqueau?

jpiesing commented 2 years ago

@gitwjr Bill, please add this content to the list of issues to be resolved for the release.

rbouqueau commented 2 years ago

@jpiesing It dates from 2019, whereas Rodolphe only started to work on the stream in 2020; see the modification dates.

By the way, I've cross-checked and a custom modification was done by the authors. If someone can have a look at their inbox to find the authors, I can contact them.

gitwjr commented 1 year ago

@jpiesing @rbouqueau I have issue 41 added to the Detailed Tasks for Launch. However, I see it is noted above as affecting 8.6 and 8.7, whereas Louay noted in his status that Issue 41 (Chunks not detectable) affects 8.8, 8.13, 8.18 and 9.4. Does it affect all of these, or are there different aspects to this issue affecting 8.6/8.7 versus the other 4?

jpiesing commented 1 year ago

@FritzHeiden @louaybassbouss Please can you look at the comment from @gitwjr . Which tests are affected by this issue?

FritzHeiden commented 1 year ago

From the specification, all tests that use chunks are 8.6, 8.7, 8.19, 8.20, 8.22, 8.23

gitwjr commented 1 year ago

@louaybassbouss @jpiesing What is the reason for listing Issue #41 for the 4 test cases you listed? Is there something missing in the spec or perhaps are we using chunked content for splicing in the test content for those cases? The sparse matrix doesn't show which content is used for those tests.

FritzHeiden commented 1 year ago

What is the reason for listing Issue #41 for the 4 test cases you listed?

It seems I wrongly marked the tests in this list.

This is a summary of the chunked tests:

louaybassbouss commented 1 year ago

DPCTF Testing Call 11/10/2022

yanj-github commented 1 year ago

@rbouqueau we would like to be in the loop for this, please. I believe we need to apply the same change to the audio streams as well.

jpiesing commented 1 year ago

@rbouqueau Now NAB is out of the way, is there any update on when you might be able to look at this? For us, this is the highest priority of the pending tests.

rbouqueau commented 1 year ago

@jpiesing I am still dealing with NAB's aftermath. I thought I would be able to do it at the end of last week, but unfortunately I was busy re-generating the content. Maybe next week.

rbouqueau commented 1 year ago

I've created some chunked content based on t16. Could anyone have a look? https://dash.akamaized.net/WAVE/vectors/cfhd_sets/12.5_25_50/chunked/2023-04-28/

FritzHeiden commented 1 year ago

I've created some chunked content based on t16. Could anyone have a look? https://dash.akamaized.net/WAVE/vectors/cfhd_sets/12.5_25_50/chunked/2023-04-28/

I was able to parse URLs to individual chunks; however, I was unable to play the video. I appended the init segment as well as all chunks (verified by looking at the chunks directory). There is no buffered data; sourceBuffer.buffered returns 0 ranges.
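One quick way to investigate why MSE leaves the buffer empty is to walk the top-level ISO BMFF boxes of a chunk and see whether the expected styp/moof/mdat sequence is present. This is a minimal sketch, not the validation actually used in the project; the synthetic bytes at the end are only for demonstration.

```python
import struct

def top_level_boxes(data):
    """Walk the top-level ISO BMFF boxes of a (sub)segment.

    A CMAF chunk that MSE accepts should show an optional 'styp'
    followed by moof/mdat pairs; anything unexpected here is a first
    hint at why SourceBuffer.buffered stays empty.
    """
    boxes, pos = [], 0
    while pos + 8 <= len(data):
        size, = struct.unpack(">I", data[pos:pos + 4])
        btype = data[pos + 4:pos + 8].decode("ascii", "replace")
        if size == 1:    # 64-bit largesize follows the type field
            size, = struct.unpack(">Q", data[pos + 8:pos + 16])
        elif size == 0:  # box extends to the end of the file
            size = len(data) - pos
        boxes.append((btype, size))
        pos += size
    return boxes

# Tiny synthetic example: a bare 8-byte 'styp' box header.
fake = struct.pack(">I", 8) + b"styp"
print(top_level_boxes(fake))  # [('styp', 8)]
```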

jpiesing commented 1 year ago

@FritzHeiden Perhaps your colleague Daniel could share experiences of debugging MSE playback problems? I suspect he knows more about it than anyone else on this ticket.

rbouqueau commented 1 year ago

As I wrote, I was clueless about how to test it. Any validation procedure is welcome.

@FritzHeiden Does the 2019 content work with your validation process?

jpiesing commented 1 year ago

Please can we just re-confirm that we are correct in mapping CMAF chunked content to $SubNumber$ in DASH? The former is said to be very important for low latency, but nobody seems to have any experience with, or support for, the latter.

rbouqueau commented 1 year ago

Please can we just re-confirm that we are correct in mapping CMAF chunked content to $SubNumber$ in DASH?

Yes, but in addition I am adding a styp box at the beginning of each chunk. I do that just to imitate the sample I was given. The specs do not specify anything, from what I read. Any guidance is welcome.

jpiesing commented 1 year ago

Since we seem to have chosen to make life hard for ourselves, I want to make sure that there's not an alternative which is more mainstream.

haudiobe commented 1 year ago

We discussed this during DPCTF call. Background:

Murmur commented 1 year ago

Yes, but in addition I am adding a styp box at the beginning of each chunk. I do that just to imitate the sample I was given. The specs do not specify anything, from what I read. Any guidance is welcome.

video: http://dash.akamaized.net/WAVE/ContentModel/SinglePeriod/Chunked/ToS_MultiRate_chunked.mpd audio: http://dash.akamaized.net/WAVE/ContentModel/SinglePeriod/Chunked/ToS_HEAACv2_chunked.mpd

This test content does not use 'dums' in the styp major or compatible brands. The specs say: it shall not be carried as the major brand (styp.major) in the first segment of a sequence; segments 2..n in a sequence shall use major brand 'dums'; each media segment may carry 'dums' as a styp compatible brand.

Why was styp.major=dums introduced instead of just using a normal styp.major=msdh, styp.compatible=msdh,msix? Maybe we could just use styp.compatible=msdh,msix,dums to tag all 1..n subsegments as part of a sequence encoding if needed, but that is not mandatory for DASH players.


I've created some chunked content based on t16. Could anyone have a look? https://dash.akamaized.net/WAVE/vectors/cfhd_sets/12.5_25_50/chunked/2023-04-28/ I was able to parse URLs to individual chunks, however, I was unable to play the video.

I tried several mp4info tools, but they crash on all segment files; init.mp4 is the only file they can open. A hex editor shows, for 1_0_1.m4s and 1_25600_1.m4s: styp.major=msdh (no dums major or compatible brand), 5 moof/mdat pairs in each segment file, no sidx table. Each file probably uses 5 * 0.4 s moof/mdat chunks.

Do I read specs and discussion correctly?

rbouqueau commented 1 year ago

Thanks for taking the time to help. I think you got it right.

If I understood correctly, what is currently missing is:

Why are there multiple moof/mdat pairs in a $SubNumber$ file, with each file having a duration of 2 seconds?

I am just re-processing the 't16' stream from the 'cfhd' WAVE Media Profile. It looked easier, but if we indeed need other features (such as sidx...) I could regenerate 't16' from scratch.

Does that look ok?

Murmur commented 1 year ago

I don't think sidx is mandatory or really gives any benefit; after all, segment sequence files are most likely small chunks, so there is no point in having the byte overhead of a lookup table for a very short duration single moof/mdat pair. Live encoders also prefer it this way (no sidx) when writing a progressive serialization of segment chunks.

<S t="0" d="25600" k="5" r="14"/>, timescale=12800 duration(@d) is a duration of segment sequence so each subnumber.m4s file should use an internal duration of 0.4sec moofmdat.

Then it's a matter of an add-on script to split the GPAC segment files (multiple moof/mdat per .m4s file) into separate files decorated with the styp atom value.
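The arithmetic behind this can be checked directly; the numbers below come from the quoted SegmentTimeline entry.

```python
# Checking the SegmentTimeline arithmetic:
# <S t="0" d="25600" k="5" r="14"/> with timescale=12800.
timescale = 12800
d = 25600        # duration of one segment sequence, in timescale units
k = 5            # chunks (sub-segments) per sequence
r = 14           # 14 repeats -> 15 sequences in total

sequence_seconds = d / timescale
chunk_seconds = sequence_seconds / k
total_seconds = (r + 1) * sequence_seconds

print(sequence_seconds)  # 2.0
print(chunk_seconds)     # 0.4
print(total_seconds)     # 30.0
```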

rbouqueau commented 1 year ago

This is exactly what I did (5 * 400 ms chunks). The add-on script is located here.

I've just made a new release that should:

  1. Fix the brands.
  2. Allow the content to be parsed.

https://dash.akamaized.net/WAVE/vectors/cfhd_sets/12.5_25_50/chunked/2023-05-20/

Is it better?

Murmur commented 1 year ago

https://dash.akamaized.net/WAVE/vectors/cfhd_sets/12.5_25_50/chunked/2023-05-20/

Use a 1..n $SubNumber$ index; the script is writing zero-based subnumber indexes at the moment (0_0.m4s, 0_1.m4s, ...). The spec says $SubNumber$ is replaced with the Segment number within the Segment Sequence, with 1 being the number of the first Segment in the sequence.

Subsegment file: styp.major=msdh, compatible=msdh,msix,dums; five moof/mdat chunk pairs; subsegment truns 512*2/12800 = 80 ms -> 5 chunks = 400 ms; moof/traf/tfdt decode-time increments are consistent. No sidx table. Segment timeline <S t="0" d="25600" k="5" r="14"/>, timescale=12800 -> duration of segment sequence 25600/12800 = 2000 ms -> subsegment file duration 2000/5 = 400 ms. Looks consistent.

A concatenated sequence plays back as a single segment file in normal video players fine (disclaimer: this command line copies the init ftyp and also all subsegment styp tables, but that is no problem for most players).
copy /b init.mp4 + 0_0.m4s + 0_1.m4s + 0_2.m4s + 0_3.m4s + 0_4.m4s 0.mp4
copy /b init.mp4 + 25600_0.m4s + 25600_1.m4s + 25600_2.m4s + 25600_3.m4s + 25600_4.m4s 1.mp4
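The same concatenation can be done portably, e.g. in Python; the file names in the commented call follow the listing quoted above and are assumptions if your local layout differs.

```python
# Sketch: the Windows `copy /b` concatenation above, done portably.
import pathlib

def concat_segment(init_path, chunk_paths, out_path):
    """Concatenate the init segment plus all chunks of one segment into
    a single progressively playable .mp4 (chunk styp boxes are kept,
    which most players tolerate)."""
    with open(out_path, "wb") as out:
        out.write(pathlib.Path(init_path).read_bytes())
        for chunk in chunk_paths:
            out.write(pathlib.Path(chunk).read_bytes())

# Example (file names assumed from the listing in this thread):
# concat_segment("init.mp4",
#                ["0_0.m4s", "0_1.m4s", "0_2.m4s", "0_3.m4s", "0_4.m4s"],
#                "0.mp4")
```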

The encoding is an easy-to-decode frame sequence; each moof/mdat is IDR+P frames, most likely used only for very conservative live-stream scenarios.

frame,1,0.040000 s,N/A,1133,I,0
frame,0,0.080000 s,N/A,13141,P,1
frame,1,0.120000 s,N/A,44695,I,2
frame,0,0.160000 s,N/A,111646,P,3
frame,1,0.200000 s,N/A,115007,I,4
frame,0,0.240000 s,N/A,180504,P,5
frame,1,0.280000 s,N/A,183919,I,6
frame,0,0.320000 s,N/A,247403,P,7
frame,1,0.360000 s,N/A,250902,I,8
frame,0,0.400000 s,N/A,312067,P,9
frame,1,0.440000 s,N/A,315308,I,10
frame,0,0.480000 s,N/A,374295,P,11
frame,1,0.520000 s,N/A,377852,I,12
frame,0,0.560000 s,N/A,435491,P,13
frame,1,0.600000 s,N/A,438772,I,14
frame,0,0.640000 s,N/A,495726,P,15
frame,1,0.680000 s,N/A,498876,I,16
frame,0,0.720000 s,N/A,553804,P,17
frame,1,0.760000 s,N/A,556977,I,18
frame,0,0.800000 s,N/A,610462,P,19
frame,1,0.840000 s,N/A,613650,I,20
frame,0,0.880000 s,0.040000 s,666212,P,21
frame,1,0.920000 s,0.040000 s,669329,I,22
frame,0,0.960000 s,0.040000 s,720547,P,23
frame,1,1.000000 s,0.040000 s,723590,I,24
frame,0,1.040000 s,0.040000 s,779601,P,25
frame,1,1.080000 s,0.040000 s,782681,I,26
...

ps: Personally I like how you write styp.major=msdh, comp=msdh,msix,dums on all subsegment files, meaning dums is only found in the compatible field. This keeps the files looking as normal as possible.
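For illustration, a styp box with exactly these brands can be built by hand. This is a sketch of the box layout (size, type, major_brand, minor_version, then the compatible brands), not code from the content generator.

```python
import struct

def make_styp(major, minor_version, compatible):
    """Build a 'styp' box: 4-byte size, 4-byte type 'styp',
    major_brand, minor_version, then the compatible brands
    (4 bytes each)."""
    payload = major.encode() + struct.pack(">I", minor_version)
    payload += b"".join(b.encode() for b in compatible)
    return struct.pack(">I", 8 + len(payload)) + b"styp" + payload

box = make_styp("msdh", 0, ["msdh", "msix", "dums"])
print(len(box))   # 28
print(box[4:8])   # b'styp'
print(box[8:12])  # b'msdh' (major brand)
print(box[16:])   # b'msdhmsixdums' (compatible brands, dums last)
```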

rbouqueau commented 1 year ago

Thank you so much. I've updated the sub-indexing to start at 1, now available at: https://dash.akamaized.net/WAVE/vectors/cfhd_sets/12.5_25_50/chunked/2023-05-21/.

jpiesing commented 1 year ago

@FritzHeiden Can you take a look at this version?

FritzHeiden commented 1 year ago

I was able to play back the new chunked content without issues.

rbouqueau commented 1 year ago

Ok, what else do I need to do?

jpiesing commented 1 year ago

Ok, what else do I need to do?

Hopefully nothing on this issue but we won't know for certain until the test HTML+JS are running the test and the OF is parsing the results.

FritzHeiden commented 1 year ago

Should I regenerate chunk tests with this content? The chunked content is not part of the database.json, so local tests are not supported

jpiesing commented 1 year ago

Should I regenerate chunk tests with this content? The chunked content is not part of the database.json, so local tests are not supported

@rbouqueau Do you know a reason why the chunked content is not part of database.json?

rbouqueau commented 1 year ago

Added. Do we need to add a tab to the front-end too?

jpiesing commented 1 year ago

Added. Do we need to add a tab to the front-end too?

Unless there's a good reason, all content should be both in database.json and in the front-end. Are there any other examples not in database.json? Encrypted content?

Murmur commented 1 year ago

May I ask what the motivation for introducing $SubNumber$ was? A need for very short segments with an addressable URL could use an existing XML segment timeline with a short duration value?

There must be something else I am not seeing, such as that subsegments 2..n don't need to start with IDR frames or contain any IDR/I frames?

See this MPD example: I renamed segments to 1..N.m4s with duration 5120/12800 = 400 ms, and dash.js can play this URL back fine. It does submit rapid HTTP requests, but that would happen anyway with any similar short-duration scheme. The encoding was easy to chunk, as an IDR start frame is found in every single moof/mdat pair.

original: https://dash.akamaized.net/WAVE/vectors/cfhd_sets/12.5_25_50/chunked/2023-05-21/
new: https://refapp.hbbtv.org/videos/dashtest/wave-41/streamb.mpd

    <AdaptationSet segmentAlignment="true" maxWidth="1920" maxHeight="1080" maxFrameRate="25" par="16:9" lang="und" startWithSAP="1" subsegmentAlignment="true" subsegmentStartsWithSAP="1" contentType="video" containerProfiles="cmf2 cfhd">
      <SegmentTemplate media="1b/$Number$.m4s" initialization="1b/init.mp4" timescale="12800">
        <SegmentTimeline><S t="0" d="5120" r="74"/></SegmentTimeline>
      </SegmentTemplate>
      <Representation id="1" mimeType="video/mp4" codecs="avc1.640028" width="1920" height="1080" frameRate="25" sar="1:1" bandwidth="4600000"></Representation>
    </AdaptationSet>

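As a sanity check, the <S> entry in this MPD can be expanded into per-segment start times with a short script (a sketch, assuming standard SegmentTimeline semantics):

```python
# Sketch: expanding the <S t="0" d="5120" r="74"/> entry from the MPD
# above into per-segment start times (timescale 12800).
def expand_s(t, d, r, timescale):
    """An <S> element describes (r + 1) consecutive segments of equal
    duration d, the first starting at time t (timescale units)."""
    return [(t + i * d) / timescale for i in range(r + 1)]

starts = expand_s(t=0, d=5120, r=74, timescale=12800)
print(len(starts))  # 75 segments
print(starts[1])    # 0.4 (each segment lasts 5120/12800 = 400 ms)
print(starts[-1])   # 29.6
```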
FritzHeiden commented 1 year ago

Chunked tests are now generated. I only generated the chunked tests for the 12.5_25_50 family, as there is no content for the others. The generated tests will be merged as soon as all other tests work with the new content.

rbouqueau commented 1 year ago

Ok, I'm not sure I understand the last part. Let me know if I need to generate something else.

FritzHeiden commented 1 year ago

Updated chunked content tests are now merged to master

FritzHeiden commented 1 year ago

Recordings for the chunked tests can be found here: https://drive.google.com/file/d/1LnNQxGHKDA8Ww9xqvoLP5Hk9i1LggxC1/view?usp=sharing

michael-forsyth commented 1 year ago

After looking at https://dashif.org/docs/CR-Low-Latency-Live-r8.pdf for how chunked content should work, these were my conclusions.

MPD elements:

  • '@availabilityTimeOffset', the resync element and '@availabilityTimeComplete' should be in the MPD. (section 9.X.4.5)
  • There is no indication that chunks should be individually addressable in the MPD by something like SubNumber. (I would expect shorter segments to be used instead if that were required.)
  • Raises the question of whether chunked MPDs should be 'dynamic' instead of 'static', as would be expected in a low-latency situation.

Storage of segments:

  • CMAF chunks should be stored in their segments, not as separate files. (section 9.X.2, figure 3)

How chunks are meant to be transferred to the player:

  • HTTP chunked transfer encoding. (section 9.X.2)
  • HTTP chunks should map to CMAF chunks 1:1. (section 9.X.2)
  • The end of a segment should be signaled by an empty chunk. (RFC 9112, section 7.1)

How chunks are added to the MSE sourceBuffer:

  • All non-empty chunks are added as they arrive. (I think this covers all currently proposed tests for chunks.)
  • Chunks could be combined into a segment before adding. (This is valid behaviour, so it can be considered worth testing.)

haudiobe commented 1 year ago

After looking at dashif.org/docs/CR-Low-Latency-Live-r8.pdf for how chunked content should work. These were my conclusions.

MPD elements:

  • '@availabilityTimeOffset', the resync element and '@availabilityTimeComplete' should be in the MPD. (section 9.X.4.5)
  • There is no indication that chunks should be individually addressable in the MPD by something like SubNumber. (I would expect shorter segments to be used instead if that were required.)
  • Raises the question of whether chunked MPDs should be 'dynamic' instead of 'static', as would be expected in a low-latency situation.

We are not testing any type 1 playback. Chunks are tested to test playback of chunked content, not LL.

Storage of segments:

  • CMAF chunks should be stored in their segments, not as separate files. (section 9.X.2, figure 3)

We are not testing any type 1 playback. Chunks are tested to test playback of chunked content, not LL.

How chunks are meant to be transferred to the player:

  • HTTP chunked transfer encoding. (section 9.X.2)
  • HTTP chunks should map to CMAF chunks 1:1. (section 9.X.2)
  • The end of a segment should be signaled by an empty chunk. (RFC 9112, section 7.1)

Again, we are only testing playback of chunked content, not the transfer.

How chunks are added to the MSE sourceBuffer:

  • All non-empty chunks are added as they arrive. (I think this covers all currently proposed tests for chunks.)
  • Chunks could be combined into a segment before adding. (This is valid behaviour, so it can be considered worth testing.)

Again, we are only testing playback of chunked content, not the transfer. @louaybassbouss may have more information

michael-forsyth commented 1 year ago

"We are not testing any type 1 playback. Chunks are tested to test playback of chunked content, not LL."

I agree LL is not needed to check playback of chunked content, BUT on the DASH side the reasonable assumption appears to be that chunking is for LL, and therefore the specifications covering chunk signaling assume LL. So the question is whether CTA will define its own signaling for the test media or re-use the signaling of other specifications.

"Again, we are only testing playback of chunked content, not the transfer"

The transfer is relevant for test implementation, as it provides the way for the test player to distinguish between segments and chunks within the current specifications. Note that making chunks individually addressable by URL arguably transforms them into segments, as then the only difference between them is the minimum required SAP type.

"Again, we are only testing playback of chunked content, not the transfer. @louaybassbouss may have more information"

The 'MSE sourceBuffer' is how playback is tested. There are two valid methods for how chunks are added to it. In theory the only difference in playback that the two methods should make is how close to the live edge the content can be played BUT it would not surprise me if some devices had issues with only one of the methods.

rcottingham commented 1 year ago

@haudiobe @rbouqueau @louaybassbouss Hi Thomas, Romain, Louay - please can you review Mike's responses and questions above (following Thomas's)? We need some clarifications before generating chunked audio (AAC/AC-4/E-AC-3). Many thanks, Richard.

rbouqueau commented 1 year ago

I don't really feel entitled to comment on the last two paragraphs. On the first one I agree with Thomas that there was a misunderstanding about LL (which this test has never been about).

haudiobe commented 1 year ago

"We are not testing any type 1 playback. Chunks are tested to test playback of chunked content, not LL."

I agree LL is not needed to check playback of chunked content, BUT on the DASH side the reasonable assumption appears to be that chunking is for LL, and therefore the specifications covering chunk signaling assume LL. So the question is whether CTA will define its own signaling for the test media or re-use the signaling of other specifications.

We have agreed to use the signaling as defined; there were no other proposals. The MPD is really just for annotating the test content. I proposed this in the absence of other proposals.

"Again, we are only testing playback of chunked content, not the transfer"

The transfer is relevant for test implementation, as it provides the way for the test player to distinguish between segments and chunks within the current specifications. Note that making chunks individually addressable by URL arguably transforms them into segments, as then the only difference between them is the minimum required SAP type.

While I am not disagreeing with that, the issue is that we are NOT testing delivery in the first version. I have had some recent discussions about adding delivery or even type 1 (player testing), and this is an interesting thought, but for the next version.

"Again, we are only testing playback of chunked content, not the transfer. @louaybassbouss may have more information"

The 'MSE sourceBuffer' is how playback is tested. There are two valid methods for how chunks are added to it. In theory the only difference in playback that the two methods should make is how close to the live edge the content can be played BUT it would not surprise me if some devices had issues with only one of the methods.

Yes, please propose a new test if you feel more needs to be tested.

Good conversation, and lots of food for future opportunities.

rbouqueau commented 1 year ago

My understanding is that the initial issue has been addressed. May I ask that we close it, and that the side discussion at the end be migrated to a new issue if that makes sense?