marcoscaceres opened this issue 9 months ago (Open)
@whatwg/media thoughts?
I think this would require a fundamental change to how the algorithm works. Currently, the algorithm can pick a `source` while the parser has yielded before the `</video>` tag (so more `source` elements can still appear). If the browser needs to have all `source`s before making a selection, we need to wait until `</video>` is parsed. cc @hsivonen
For the script-created elements case, we can probably do what `picture` does: queue a microtask before making a selection.

If the algorithm fails to select something because all `type`s are unsupported or all `media`s don't match, the `networkState` can be `NETWORK_NO_SOURCE` and allow for resource selection to happen again if another `source` element is inserted (after a microtask).

This change would probably make the spec simpler.
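A minimal sketch of how the script-created case could look from an author's point of view, assuming selection is deferred to a microtask as with `picture` (the file names and codec strings here are made up for illustration):

```ts
const video = document.createElement("video");

// Append several sources synchronously; under the proposal the UA would not
// commit to any of them yet.
for (const [src, type] of [
  ["movie.webm", 'video/webm; codecs="vp9"'],
  ["movie.mp4", 'video/mp4; codecs="avc1.64001F"'],
] as const) {
  const source = document.createElement("source");
  source.src = src;
  source.type = type;
  video.appendChild(source);
}
document.body.append(video);

// Selection would run in a microtask after these insertions, so the UA sees
// both sources and can pick its preferred one. If neither is selectable, the
// element ends up in NETWORK_NO_SOURCE, and inserting yet another <source>
// later would re-run selection (again after a microtask).
queueMicrotask(() => {
  console.log(video.networkState === HTMLMediaElement.NETWORK_NO_SOURCE);
});
```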
While arbitrary order makes sense for `type`, I don't think it makes sense for `media`. How do you envision it should work when `media` is also used?
Example:

```html
<video>
  <source src="a" type="video/mp4" media="(min-width: 600px)">
  <source src="b" type="video/mp4">
  <source src="c" type="video/webm" media="(min-width: 600px)">
  <source src="d" type="video/webm">
</video>
```
Maybe selection could be in two passes: first, select a list of candidates that has the first (in tree order) `source` element where `media` matches (whether the attribute is present or not), for each unique `type` value. For the above example, the lists would be either `["a", "c"]` or `["b", "d"]`. Then, select among those in a UA-defined manner, but use the `source` element without a `type` attribute (if any) as a last resort.
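A rough sketch of that two-pass idea, in TypeScript for concreteness (the `pickPreferred` ranking is a placeholder for whatever UA-defined preference ends up being used; none of this is spec text):

```ts
function selectSource(video: HTMLVideoElement): HTMLSourceElement | null {
  const sources = Array.from(
    video.querySelectorAll<HTMLSourceElement>(":scope > source"),
  );

  // Pass 1: for each distinct type value, keep the first source (in tree
  // order) whose media attribute is absent or currently matches.
  const candidates = new Map<string, HTMLSourceElement>();
  for (const source of sources) {
    const media = source.getAttribute("media");
    if (media !== null && !matchMedia(media).matches) continue;
    const type = source.getAttribute("type") ?? "";
    if (!candidates.has(type)) candidates.set(type, source);
  }

  // Pass 2: choose among the typed candidates in a UA-defined manner, keeping
  // a type-less source (if any) as a last resort.
  const typed = [...candidates].filter(([t]) => t !== "").map(([, s]) => s);
  return pickPreferred(typed) ?? candidates.get("") ?? null;
}

// Placeholder for the UA-defined ranking (e.g. hardware decoding support).
declare function pickPreferred(sources: HTMLSourceElement[]): HTMLSourceElement | null;
```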
Yeah, I was also thinking this would need to be done in two passes after parsing. And yeah, I was also thinking it would simplify the spec quite a lot, but I was unsure how much breakage there would be, given how much stuff seems to happen in the algorithm right now.
It seems like the intent of this feature is to allow the browser to make the best choice for users.
In a case like

```html
<video>
  <source ...>
  <source ...>
  [... the server hangs for 5 seconds ...]
  <source ...>
</video>
```
it seems like the best choice is for the browser to give up sometime early during that 5 second window, and choose from the first 2 sources. It would be bad to mandate in the spec that the browser has to wait until it sees the end tag.
Does that fit with what you all are thinking? How does it play into the discussions above about multiple passes after parsing?
> it seems like the best choice is for the browser to give up sometime early during that 5 second window, and choose from the first 2 sources
I don't believe the proposal prevents this behaviour. If those first two sources aren't playable, then you would need to wait anyway.
If two sources are available and the second is preferable over the first, then the UA plays the 2nd. If there's only one source available at the time you parse it and it is playable, you play it.
I would limit the two-pass selection to only the sources that are immediately available. The 2nd pass would only be required if, during the first pass, one of the sources was playable but the system would have liked to see whether a preferred one was coming (e.g. if the available source only allowed for software decoding).
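A sketch of that refinement, with hypothetical helpers standing in for the real playability and decoder checks (not an actual implementation):

```ts
// Decide whether to commit to the best source seen so far, or keep waiting
// for sources the parser has not delivered yet.
function shouldCommitNow(
  best: HTMLSourceElement | null,
  moreSourcesMayArrive: boolean,
): boolean {
  if (best === null) return false;          // nothing playable yet: keep waiting
  if (!isSoftwareOnly(best)) return true;   // a preferred (HW-decodable) source: play it
  return !moreSourcesMayArrive;             // merely acceptable: use it only if nothing better can come
}

// Hypothetical helper: would this source end up using a software-only decoder?
declare function isSoftwareOnly(source: HTMLSourceElement): boolean;
```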
As long as this proposal doesn't involve waiting for the end `</video>` tag, then that's good. Some of the discussion in https://github.com/whatwg/html/issues/10077#issuecomment-1900053331 implied waiting for the `</video>` tag, so I wanted to make sure that was not part of the proposal.
Waiting for the end tag is what I had in mind, to make processing predictable and not depend on network latency. I'm not convinced it's best to allow processing of partial data here. I think speculatively fetching a video is OK, but AFAIK browsers don't do that today.
If the server stalls while parsing the `video` start tag, the user also won't see a video, even if the `src` attribute has been seen.
But maybe there's some heuristic we can apply to load (and play) sooner without introducing network latency impact on resource selection. For example, when seeing fallback content (non-whitespace text or elements other than `source` and `track`), run resource selection. A risk here is that existing content might have bogus elements or text in `video` (between `source` elements), which would no longer work, at least if resource selection only runs once (as today).
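One possible shape for that heuristic, as a sketch (the function name is made up; the real trigger would live in the parser/element integration):

```ts
// While the parser is still inside <video>, treat the first piece of fallback
// content (non-whitespace text, or an element other than source/track) as a
// signal that no more <source> elements are expected, and run selection.
function signalsEndOfSources(node: Node): boolean {
  if (node.nodeType === Node.TEXT_NODE) {
    return (node.textContent ?? "").trim().length > 0;
  }
  if (node.nodeType === Node.ELEMENT_NODE) {
    const tag = (node as Element).localName;
    return tag !== "source" && tag !== "track";
  }
  return false;
}
```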
How are you proposing to decide a better source? Do you just want to always select the mp4?
I'm not sure how you'd make a better decision beyond some simple cases (that seem uncommon nowadays) w/o parsing and loading each source -- which could slow down loading significantly. It's not common to list `codecs` in `type`, so at best you're guessing which codecs and resolution would be in each source, and there's significant overlap. E.g., webm -> (av1, vp9, vp8), mp4 -> (av1, vp9, h264, hevc).
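For illustration, the difference a `codecs` parameter makes is already visible through `canPlayType` (the return values shown are typical, not guaranteed):

```ts
const video = document.createElement("video");

// A bare container type only lets the UA answer "maybe":
video.canPlayType("video/webm");                        // usually "maybe"

// With codecs listed, the UA can give a firmer answer:
video.canPlayType('video/webm; codecs="vp9"');          // "probably" or ""
video.canPlayType('video/mp4; codecs="avc1.64001F"');   // "probably" or ""
```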
> How are you proposing to decide a better source? Do you just want to always select the mp4?
Not at all; we would want to select the "best" source, which could mean (and has meant) the one with a HW decoder available.
For context, we have previously de-prioritized HEVC on platforms containing only a SW HEVC codec, and way, way, way back in the day, de-prioritized Ogg on systems that had Perian installed as a QuickTime extension.
And of course, this behavior will work better when there is more information provided in `type`. But source selection in general works better when there is more information provided in `type`.
Can you clarify "works" in the case of `type` w/o `codecs`? I don't see how it would even work in that case w/o parsing.
Adding a `<source>` without a `type` already requires parsing (in the form of sniffing). This is just a more advanced form of sniffing.
There might be some privacy issues here. The site could learn more about what sort of hardware the user has.
> There might be some privacy issues here. The site could learn more about what sort of hardware the user has.
MediaCapabilities already exposes this information in much more detail. Whatever remedial work was done for MC is applicable here too (such as disabling the selection algorithm under some circumstances).
Interesting, since the privacy and security issues are similar (though not the same) to those with delayed clipboard rendering. And that one is being strongly objected to because of the privacy issues.
> Interesting, since the privacy and security issues are similar (though not the same) to those with delayed clipboard rendering.
One exposes specific user behaviour; the other could expose hardware capabilities (information that is already available). And for the latter we have an existing policy covering how to limit exposure.
A possible web compat issue is with `error` events on `source` elements. Currently, since the `source`s are tried in order, sites may use an `error` event listener on the last `source` element as a signal that all resources have failed to play, and maybe decide to show an error message to the user. The spec even has an example:

https://html.spec.whatwg.org/multipage/embedded-content.html#the-source-element:event-error
If a browser then picks that `source` to try first, but it's not playable, it will trigger the error-message code path of the web page even though the browser still hasn't tried any other `source`.
A possible mitigation could be to defer firing `error` events on `source` elements until all available options have failed.
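For reference, a sketch of the author pattern this could break (the fallback function is hypothetical, page-specific code):

```ts
const video = document.querySelector("video");
const lastSource = video?.querySelector("source:last-of-type");

// Today this only fires once every earlier source has already failed, so it is
// used as an "all resources failed" signal. With UA-chosen ordering, deferring
// error events as suggested above would keep this pattern working.
lastSource?.addEventListener("error", () => {
  showUnsupportedMessage(); // hypothetical
});

declare function showUnsupportedMessage(): void;
```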
Per https://webkit.org/blog/15063/webkit-features-in-safari-17-4/#source-prioritization it seems this change was shipped in Safari 17.4. Correct? If so, can you clarify what was implemented?
A (too late?) note that from an author education/understanding standpoint, the model for years has been:

- `srcset`: an unordered list of URLs with descriptors, provided by the author. The UA is best-suited to make decisions about which one should be picked, when, and does so, using the provided descriptors.
- `<source>`: an ordered list of "sources" (each possibly having its own `srcset`). The author is best-suited to make decisions about which one should be picked, when, and they attach explicit instructions describing the situations in which UAs should pick each.

I agree that for the type-switching use case, UAs will generally make better decisions than authors. The question is less "what does this particular content and page context need" (author best-suited), and more "what does this particular user and UA context need" (UA best-suited).

Rather than changing `<video><source>` to have a `srcset`-like selection mechanism, it would be much cleaner to introduce `srcset` + a `type()` descriptor to `<video>`. Teaching and understanding these mechanisms is hard enough, and being able to transfer knowledge about how markup patterns map to mechanisms from one media element to another is great. Muddling mechanisms makes understanding, explaining, testing, and authoring much harder.
> Per https://webkit.org/blog/15063/webkit-features-in-safari-17-4/#source-prioritization it seems this change was shipped in Safari 17.4. Correct? If so, can you clarify what was implemented?
This is the WebKit change that implemented the policy, on iOS devices only: https://bugs.webkit.org/show_bug.cgi?id=267753

It does as described in an earlier proposal: if at the time of the check there is an alternative source following the one currently being checked, and the currently checked source uses VP8 (software-only) or VP9 without hardware decoding, then that source will be skipped.

If there's no source to try after it, then it will be used.

Full WebM support was only added in Safari on iOS 17.4; one of the reasons for this decision was to reduce the potential for unexpected regressions.
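Roughly, as I read that description (this is an illustration, not the actual WebKit code; the codec and hardware checks are placeholders):

```ts
function shouldSkip(current: HTMLSourceElement, hasLaterSource: boolean): boolean {
  // Only skip if there is another source left to try; otherwise use it anyway.
  if (!hasLaterSource) return false;
  const codec = detectCodec(current);               // placeholder for real sniffing
  if (codec === "vp8") return true;                 // software-only decode
  if (codec === "vp9" && !hasHardwareVP9()) return true;
  return false;
}

// Placeholders for platform-specific checks.
declare function detectCodec(source: HTMLSourceElement): "vp8" | "vp9" | "other";
declare function hasHardwareVP9(): boolean;
```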
(I did some digging and I think that WebKit has had the "wait for end tag" behavior for about 12 years, although seemingly it's only used for `<track>` elements. This is the commit I found that seems to introduce that: https://trac.webkit.org/changeset/102968/webkit.)
Waiting for the end tag for `track`s is per spec: https://html.spec.whatwg.org/multipage/media.html#text-track-model:blocked-on-parser

The code in WebKit to try `source` elements is like the spec, i.e. evaluated in order without waiting for `</video>`, as far as I can tell.
@jyavenard

> It does as described in an earlier proposal: if at the time of the check there is an alternative source following the one currently being checked, and the currently checked source uses VP8 (software-only) or VP9 without hardware decoding, then that source will be skipped.
> If there's no source to try after it, then it will be used.

If WebKit doesn't wait for the `</video>` end tag, then whether there is an alternative source at the time depends on where the HTML parser yields, which can depend on network conditions, right?
Note that a server hang already prevents:

- `track` selection
- `picture` selection (it only starts when the `img` is seen)

The effect of not waiting for the end tag is that users will sometimes get a suboptimal format selected, for the same HTML. As I said above, I think processing should be predictable and not depend on network latency.
Yeah, that seems correct. If network conditions are bad the end user won't see media, so in practice it prolly doesn't matter, but it seems reasonable to make this more solid, especially as we already have this logic for `<track>`.
What is the issue with the HTML Standard?
The "resource selection algorithm" for a media element (as part of the "Otherwise (mode is children)" case) states that when a developer has listed various different formats, the browser is supposed to use the first media type that's recognized (after matching on `media=`).

Tree-order based selection is problematic because developers lack sufficient information about the end user's device/environment to make an adequate determination as to which `<source>` is most optimal, potentially leading to the wrong `<source>` order appearing in a document (or the user agent being put into a situation where it has to choose a sub-optimal `<source>`).

Unlike developers, user agents have a greater understanding of the user's device and environment, so they are in a privileged position to choose the most optimal `<source>`, the one that will give the best user experience. There could be a lot of conditions where it's better for the user agent to intelligently choose between formats, for example picking hardware decoding support over formats that are software-only, picking one that preserves battery, or one that's higher quality / a better codec, etc.

Ultimately, developers shouldn't concern themselves with which format will be best for a given set of environmental conditions (as currently implied by the order, which could lead to a sub-optimal choice!). Instead, they can put the responsibility on the user agent to make the best choice for users.
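To make "most optimal" a bit more concrete, here's a sketch of how a user agent (or a polyfill-style experiment) could rank candidates using the existing Media Capabilities API; the resolution, bitrate, and framerate values are made up for illustration:

```ts
interface Candidate {
  src: string;
  type: string; // should include codecs, e.g. 'video/webm; codecs="vp9"'
}

async function rankByCapabilities(candidates: Candidate[]): Promise<Candidate[]> {
  const results = await Promise.all(
    candidates.map(async (candidate) => ({
      candidate,
      info: await navigator.mediaCapabilities.decodingInfo({
        type: "file",
        video: {
          contentType: candidate.type,
          width: 1920,
          height: 1080,
          bitrate: 4_000_000,
          framerate: 30,
        },
      }),
    })),
  );

  // Prefer power-efficient (typically hardware-decoded) and smooth playback.
  return results
    .filter((r) => r.info.supported)
    .sort(
      (a, b) =>
        Number(b.info.powerEfficient) - Number(a.info.powerEfficient) ||
        Number(b.info.smooth) - Number(a.info.smooth),
    )
    .map((r) => r.candidate);
}
```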
Admittedly, we could use a little help updating that part of the spec. The current algorithm seems a little finicky with all the pointers and async waits/loops, so we could really use some help or guidance with updating it to say that, after matching on `media=` and checking all the `type=`s it supports, the user agent may choose the most suitable `<source>` element based on the user's environment, device capabilities, or optimal hardware support.

NB: we are aware that the above could also apply equally to `<picture>`, but we should probably do that independently once we sort out media elements.

Cc @jyavenard