nigelmegitt opened 1 year ago
On this thread of issues I think support for embedded recordings would be extremely useful: it avoids the need to transport a string of tiny audio files as zipped/archived side-cars. As for inline vs. referenced, my gut feeling is that referenced gives more scope for compression (in the unlikely event that the same audio is used more than once), but that feels like an uncommon use case.
This made me think about performance issues. I think that in the server-side or authoring domain parsing and loading performance is unlikely to be a significant factor, but in a distribution/client playback scenario having a bunch of big audio resources embedded in the head of a document will mean that the parser has to get past all of those to get to the timed text data that might be the most important thing.
Parsers that wait until they've parsed the whole document won't care where within the document the embedded data is though.
I did have a similar if not so eloquent thought; it also makes any packetisation of the stream simpler for distribution.
Originally posted by @nigelmegitt in https://github.com/w3c/dapt/issues/105#issuecomment-1470390924
If we are going to support embedded audio resources, they can either be defined in
`/tt/head/resources`
and then referenced, or the data can be included inline. Do we need both options?
Example of embedded:
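The original snippet is not reproduced here; as a sketch of what this could look like using TTML2's `<data>` element inside `<head>/<resources>` (the `xml:id` value `audio1` and the truncated base64 payload are illustrative, not from the original example):

```xml
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <head>
    <resources>
      <!-- Embedded audio resource, base64-encoded WAV (payload truncated) -->
      <data xml:id="audio1" type="audio/wave">
        UklGRiQAAABXQVZF…
      </data>
    </resources>
  </head>
  <body>…</body>
</tt>
```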
This would then be referenced in the body content using something like (see also #114):
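Perhaps something along these lines, assuming a TTML2 `<audio>` element whose `src` is a fragment reference to the embedded resource (the `xml:id` `audio1` and the timings are illustrative):

```xml
<p begin="0s" end="3s">
  <!-- Reference the resource defined in /tt/head/resources by fragment identifier -->
  <audio src="#audio1"/>
  <span>Spoken text corresponding to the recording</span>
</p>
```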
Example of inline:
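Again the original snippet is not preserved; a possible sketch using TTML2's inline `<source>`/`<data>` content directly inside the `<audio>` element (payload and timings illustrative):

```xml
<p begin="0s" end="3s">
  <audio>
    <source>
      <!-- Audio data carried inline, base64-encoded WAV (payload truncated) -->
      <data type="audio/wave">
        UklGRiQAAABXQVZF…
      </data>
    </source>
  </audio>
  <span>Spoken text corresponding to the recording</span>
</p>
```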