asajeffrey opened 5 years ago
cc @ceyusa
So script gets its media player https://github.com/servo/servo/blob/75bc72b29f1eb71ac81c1a53fe901ea9e9b45b20/components/script/dom/htmlmediaelement.rs#L1337-L1343
by calling ServoMedia::get()
https://github.com/servo/media/blob/a70f02482d29472c5566e16ffa934fda909443bb/servo-media/lib.rs#L83-L89
which returns a per-process media backend. The script thread's media backend is not the same as the compositor's backend, so unsurprisingly content in one doesn't show up in the other.
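To illustrate why this happens, here is a minimal sketch of a lazily-initialized per-process singleton in the style of `ServoMedia::get()`. All names (`MediaBackend`, `get_backend`) are hypothetical stand-ins, not Servo's actual API; the point is only that the static lives in each process's own address space, so each process gets its own backend:

```rust
use std::sync::OnceLock;

// Hypothetical stand-in for ServoMedia; names are illustrative only.
struct MediaBackend {
    pid: u32,
}

static BACKEND: OnceLock<MediaBackend> = OnceLock::new();

// Like ServoMedia::get(): the first call in *this process* creates the
// backend; later calls in the same process return the same instance.
// A separate script process has its own copy of the static, so it gets
// a *different* backend -- the bug described above.
fn get_backend() -> &'static MediaBackend {
    BACKEND.get_or_init(|| MediaBackend {
        pid: std::process::id(),
    })
}

fn main() {
    let a = get_backend();
    let b = get_backend();
    assert!(std::ptr::eq(a, b)); // one backend per process
    println!("backend pid: {}", a.pid);
}
```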
This is pretty serious, as we can't ship a browser that's hardened against Spectre without multiprocess. cc @avadacatavra
A fix for this is proposed as part of https://github.com/servo/servo/issues/23807#issuecomment-526290074
The architectural sketch is that while the "audio rendering thread" should run inside a script process, the actual media backend should run in its own process, or in the "main process" alongside the constellation, the embedder, and the compositor.
In such a setup, "starting a rendering thread" in script will be a different operation from "starting a media backend". A media backend should probably be started only once and kept as a reference by the constellation. Then, each time a script creates an audio rendering thread, it should be hooked up with the backend via an initial workflow going through the constellation, resulting in a direct IPC link between the rendering thread and the backend.
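The handshake described above can be sketched roughly as follows. This is a hedged illustration only: it uses std `mpsc` channels within one process as a stand-in for real cross-process IPC (Servo would use `ipc-channel`), and all message and function names (`ConstellationMsg`, `AudioMsg`, `establish_and_render`) are made up for the sketch:

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

enum AudioMsg {
    Render(usize), // number of frames to render (illustrative)
    Shutdown,
}

// Script asks the constellation for a direct endpoint to the media
// backend, supplying the channel the endpoint should come back on.
enum ConstellationMsg {
    ConnectRenderingThread(Sender<Sender<AudioMsg>>),
}

fn establish_and_render() -> usize {
    // "Main process" side: the backend, started once.
    let (backend_tx, backend_rx) = channel::<AudioMsg>();
    let backend = thread::spawn(move || {
        let mut frames = 0;
        while let Ok(msg) = backend_rx.recv() {
            match msg {
                AudioMsg::Render(n) => frames += n,
                AudioMsg::Shutdown => break,
            }
        }
        frames
    });

    // The constellation brokers the initial handshake only.
    let (constellation_tx, constellation_rx) = channel::<ConstellationMsg>();
    thread::spawn(move || {
        while let Ok(ConstellationMsg::ConnectRenderingThread(reply)) =
            constellation_rx.recv()
        {
            // Hand script a direct endpoint to the backend; after this
            // the constellation is out of the loop.
            reply.send(backend_tx.clone()).unwrap();
        }
    });

    // "Script process" side: the rendering thread performs the
    // handshake once, then talks to the backend over the direct link.
    let (reply_tx, reply_rx) = channel();
    constellation_tx
        .send(ConstellationMsg::ConnectRenderingThread(reply_tx))
        .unwrap();
    let direct = reply_rx.recv().unwrap();

    direct.send(AudioMsg::Render(128)).unwrap();
    direct.send(AudioMsg::Shutdown).unwrap();
    backend.join().unwrap()
}

fn main() {
    assert_eq!(establish_and_render(), 128);
    println!("direct link established; rendered 128 frames");
}
```

The design point is that the constellation only participates in the initial connection; steady-state audio traffic flows over the direct link and never touches the constellation.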
Per your GL context question: we smuggle GL context pointers as usize values.
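For concreteness, the "smuggling" pattern looks roughly like the sketch below (a hypothetical `GlContext` type and `roundtrip` function, not Servo's code). It also shows why the trick is same-process-only: the usize is just an address, which is meaningless in another process's address space, and that is exactly what breaks under `--multiprocess`:

```rust
// Hypothetical stand-in for a GL context handle.
struct GlContext {
    version: (u8, u8),
}

fn roundtrip() -> (u8, u8) {
    let ctx = Box::new(GlContext { version: (3, 2) });
    // Erase the pointer to a plain integer so it can travel through a
    // channel or message that only carries POD values.
    let raw: usize = Box::into_raw(ctx) as usize;
    // ...the usize travels across a thread boundary here...
    // Reconstituting it is only sound in the SAME address space; in a
    // different process this address points at nothing useful.
    let ctx = unsafe { Box::from_raw(raw as *mut GlContext) };
    ctx.version
}

fn main() {
    assert_eq!(roundtrip(), (3, 2));
    println!("pointer survived the usize round-trip (same process only)");
}
```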
I guess this is slightly different from audio rendering, in light of how GL contexts are shared with script. Could we not proxy the GL calls to the backend over IPC, rather than sharing the context directly in script?
In any case, I think the overall idea would still be that the "backend" runs in a different process (probably the "main process") from the "rendering thread", which runs in script. I guess that implies all sorts of changes to the interfaces between the backend and the rendering thread, and I have only looked at the audio part so far.
Yeah, I was expecting the GL context for media to be treated like WebGL, where there is a media thread that owns the GL context, and script communicates with it via IPC.
While developing the GL rendering, I considered, for a second iteration, a design similar to https://github.com/servo/servo/issues/23807#issuecomment-526290074
That's quite similar, AFAIU, to the current WebGL setup. What I don't like is the replication of the proxy player API.
@ceyusa Do you think such a second iteration of the GL rendering would also have to include a general restructuring of media, including audio, or could those be separated? I guess some parts, like the equivalent of ServoMedia::get(), will require work across the board.
I haven't looked into the GL rendering at all, so I have no idea. I do have a general idea of how to split the audio backend from the rendering thread, as described at https://github.com/servo/servo/issues/23807#issuecomment-526290074, and I see that as a prerequisite to implementing AudioWorklet.
So since restructuring the GL part and the audio part will probably influence each other, I'm wondering how to organize the work around restructuring media into a backend running somewhere alongside the constellation, and a part (for audio, the "rendering thread") that would run inside script.
Viewing video with --pref media.glvideo.enabled works, yay! With --multiprocess --pref media.glvideo.enabled it produces a white screen. Probably what's going on here is that the script thread is creating the GstGLContext for the player, which is then used in the gstreamer render thread. That's fine when they're in the same process, but not in multiprocess mode.