Open mikedo opened 4 years ago
For broadcast, it is acceptable to carousel an entire presentation of IMSC1 or WebVTT content as a single Segment (and FFmpeg originally worked that way: the entire document in a single sample in a single Segment, but with a different @presentationTimeOffset per Segment). This is not particularly efficient, but it is permitted by 14496-30.
Should we recommend one approach over the other here?
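The carousel approach above can be sketched as follows. This is a minimal illustration, not an implementation of any shipping packager: the function name, timescale, and tuple representation are all hypothetical, chosen only to show that every Segment carries the same full document and only the @presentationTimeOffset advances.

```python
# Hypothetical sketch of the "carousel" packaging described above: every
# Segment carries the *entire* IMSC1/WebVTT document in one sample, and only
# @presentationTimeOffset (PTO) differs from Segment to Segment.

TIMESCALE = 1000  # assumed MPD timescale for this Representation


def make_carousel_segments(full_document, segment_duration_s, total_duration_s):
    """Emit (document, presentation_time_offset) pairs, one per Segment.

    The same complete document is repeated in each Segment; the per-Segment
    PTO is what lets the client locate the correct point inside it.
    """
    segments = []
    t = 0.0
    while t < total_duration_s:
        segments.append((full_document, int(t * TIMESCALE)))
        t += segment_duration_s
    return segments
```

The inefficiency the comment mentions is visible here: the full document is duplicated into every Segment, so carriage cost grows with Segment count even though the subtitle payload never changes.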
Yes. But we will need both. Live will require short Segments, but we should permit post-produced large Segments for both broadcast and OTT. I plan to cover these and their timing in a forthcoming contribution.
The document (Section 15) only provides timeline guidance for the simple use case of sidecar IMSC1 and WebVTT files. Timeline guidance is needed for Segmented files covering multi-Period presentations, ad insertion, and post-produced content.

Unlike video and audio, IMSC1 and WebVTT carry timing within the sample data. 14496-30 requires an unusual presentation-time calculation related to @presentationTimeOffset that involves a "seek" into the sample to determine which timed elements in the sample are to be presented, and when. For broadcast, it is acceptable to carousel an entire presentation of IMSC1 or WebVTT content as a single Segment (and FFmpeg originally worked that way: the entire document in a single sample in a single Segment, but with different @presentationTimeOffset values). This is not particularly efficient, but it is permitted by 14496-30.

I think a new section (or perhaps two subsections under a general Section 15) is needed to provide guidance here: the timeline for Segmented IMSC1 and WebVTT, and the constrained sidecar case (per the existing text).
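The "seek" into the sample can be sketched as below. This is a simplified reading of the 14496-30-style mapping, with hypothetical names (`Cue`, `active_cues`) and a flattened timeline model: the presentation time is mapped back onto the in-sample (media) timeline via Period start and @presentationTimeOffset, and only cues active at that media time are shown. Real samples, of course, carry full IMSC1/WebVTT documents rather than a flat cue list.

```python
from dataclasses import dataclass


@dataclass
class Cue:
    """One timed element inside the subtitle sample (illustrative model)."""
    begin: float  # cue start, seconds on the document/track timeline
    end: float    # cue end, seconds on the document/track timeline


def active_cues(cues, presentation_time, period_start, pto, timescale):
    """Sketch of the "seek" into a subtitle sample described above:
    map the presentation time back onto the in-sample timeline using
    @presentationTimeOffset, then select the cues active at that instant."""
    media_time = (presentation_time - period_start) + pto / timescale
    return [c for c in cues if c.begin <= media_time < c.end]
```

This is what makes subtitle timing different from video and audio: the decode/presentation timestamps of the sample alone do not tell the client what to render; it must evaluate timing carried *inside* the sample against this computed media time.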