servo / servo

Servo, the embeddable, independent, memory-safe, modular, parallel web rendering engine
https://servo.org
Mozilla Public License 2.0

GL acceleration doesn't work in multiprocess #24211

Open asajeffrey opened 5 years ago

asajeffrey commented 5 years ago

Viewing video with --pref media.glvideo.enabled works, yay! With --multiprocess --pref media.glvideo.enabled it produces a white screen.

Probably what's going on here is that the script thread is creating the GstGLContext for the player, which is then used by the GStreamer render thread. That's fine when they're in the same process, but not in multiprocess mode.

asajeffrey commented 5 years ago

IRC chat https://mozilla.logbot.info/servo-magicleap/20190912#c16611937 and https://mozilla.logbot.info/servo/20190912#c16611964.

asajeffrey commented 5 years ago

cc @ceyusa

asajeffrey commented 5 years ago

So script gets its media player https://github.com/servo/servo/blob/75bc72b29f1eb71ac81c1a53fe901ea9e9b45b20/components/script/dom/htmlmediaelement.rs#L1337-L1343

by calling ServoMedia::get() https://github.com/servo/media/blob/a70f02482d29472c5566e16ffa934fda909443bb/servo-media/lib.rs#L83-L89

which returns a per-process media back end. The script thread's media back end is not the same as the compositor's back end, so unsurprisingly content rendered in one doesn't show up in the other.

asajeffrey commented 5 years ago

This is pretty serious, as we can't ship a browser that's hardened against Spectre without multiprocess. cc @avadacatavra

gterzian commented 5 years ago

A fix for this is proposed as part of https://github.com/servo/servo/issues/23807#issuecomment-526290074

The architectural sketch is that while the "audio rendering thread" should run inside a script process, the actual media backend should run in its own process, or in the "main process" alongside the constellation, the embedder, and the compositor.

In such a setup, "starting a rendering thread" in script will be a different operation from "starting a media backend". A media backend should probably be started only once and kept as a reference by the constellation. Then, each time script creates an audio rendering thread, that thread should be hooked up to the backend via an initial workflow going through the constellation, resulting in a direct IPC link between the rendering thread and the backend.

gterzian commented 5 years ago

per your GL context question, we smuggle GL context pointers as usize values

I guess this is slightly different from audio rendering in the light of how GL contexts are shared with script. Could we not proxy the GL calls to the backend over IPC, versus sharing the context directly in script?

In any case, I think the overall idea would still be that the "backend" runs in a different process (probably the "main process") from the "rendering thread", which runs in script. I guess that implies all sorts of changes to the interfaces between the backend and the rendering thread, and I have only looked at the audio part so far.

asajeffrey commented 5 years ago

Yeah, I was expecting the GL context for media to be treated like WebGL, where there is a media thread that owns the GL context, and script communicates with it via IPC.

ceyusa commented 5 years ago

While developing the GL rendering I thought that, for a second iteration, a design similar to https://github.com/servo/servo/issues/23807#issuecomment-526290074 would work:

  1. In the embedder process, get the ServoMedia instance, which gains a new trait method to set the GL context and the native display; this will create, if possible, the wrapped GstGLContext and keep it.
  2. The embedder process will launch a thread where all the players are created/used/destroyed. The idea of a hash associating each player with its origin would be interesting.
  3. The IPC sender will be shared with the constellation.
  4. A proxy player API will be offered in the script thread to create/use/destroy players, concealing the IPC sender.
  5. When a player is instantiated in the content process, it will check whether ServoMedia has a GstGLContext; if so, clone it and pass it to the elements that require it through the GStreamer sync bus.

That's quite similar, AFAIU, to current WebGL. What I don't like is the replication of the proxy player API.

gterzian commented 5 years ago

@ceyusa Do you think such a second iteration of the GL rendering would also have to include a general restructuring of media, including audio, or could those be separated? I guess some parts, like the equivalent of ServoMedia::get(), will require work across the board.

I haven't looked into the GL rendering at all, so I have no idea. I do have a general idea of how to split the audio backend from the rendering thread, as described at https://github.com/servo/servo/issues/23807#issuecomment-526290074, and I see that as a prerequisite to implementing AudioWorklet.

So, since restructuring the GL part and the audio part will probably influence each other, I'm wondering how to organize the work around restructuring media into a backend running somewhere alongside the constellation, and a part (for audio, the "rendering thread") that would run inside script.