
Add a low latency mode for OffscreenCanvas #2659

Open junov opened 7 years ago

junov commented 7 years ago

Since OffscreenCanvases do not need to synchronize their graphics updates with the rest of the DOM, there is an opportunity to provide a low-latency rendering path that would allow the OffscreenCanvas to commit content into a memory buffer that is directly scanned out to the display. This behavior should not be the default because it may result in tearing artifacts.

The reduction in latency would be a great UX improvement for painting apps that allow the user to draw using a stylus or touch interface. In such applications the presence of ~50ms of latency is enough to interfere with the user's hand-eye coordination, making the application difficult to use.

Since not all graphics hardware provides the features necessary for implementing a low-latency path (e.g. hardware overlay buffers), support may be device-dependent.

annevk commented 7 years ago

cc @mstange @nical

Aside: I'm trying to create a @whatwg/canvas team. Let me know who else to add.

junov commented 7 years ago

cc @grorg @fserb

junov commented 7 years ago

API proposal:

enum CanvasPerformanceHint {
  // Optimized for smoothness and high throughput.
  "normal",
  // Bypass browser and OS compositing by committing to a buffer that is
  // scanned out and composited by the display hardware (may cause tearing).
  "lowLatency"
};

partial dictionary CanvasContextCreationAttributes {
  CanvasPerformanceHint renderingMode = "normal";
};

Commit processing model:

In "lowLatency" mode the OffscreenCanvas has a render buffer, which is where draw operations are rasterized. Calling commit() immediately copies the contents of the render buffer to the scan-out buffer; there is no waiting for vsync and no overdraw mitigation. In the case of 2d contexts, it will be possible to track the bounding box of the portions of the canvas that have changed since the previous commit, and to update only that sub-region.
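A minimal usage sketch of the proposal, assuming the renderingMode creation attribute above and a commit() method on the rendering context (as in the since-removed OffscreenCanvas commit() proposal); none of this is shipped API:

// Sketch only: "renderingMode" and commit() are proposed here, not standard.
const canvas = new OffscreenCanvas(800, 600);
const ctx = canvas.getContext("2d", { renderingMode: "lowLatency" });
if (!ctx) throw new Error("2d context unavailable");

// Draw one brush dab...
ctx.fillRect(100, 100, 3, 3);

// ...and push it straight to the scan-out buffer: no vsync wait,
// no overdraw mitigation, possible tearing, minimal latency.
(ctx as any).commit();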

Caveats:

jrmuizel commented 7 years ago

Can you expand on how you see these CanvasPerformanceHint values mapping to the different platform APIs? i.e. what are the corresponding values in DXGI_SWAP_CHAIN_DESC and arguments to Present()?

junov commented 7 years ago

We have not yet implemented (or identified) low latency paths for all platforms. On Windows, we'd likely use the DirectX hardware overlay feature [1], and do the rasterization on the CPU.

[1] DirectX overlays

jrmuizel commented 7 years ago

That API is only for D3D9 and is intended for video playback. Are there other platforms for which you have already identified the low-latency paths you will use? What are those paths?

junov commented 7 years ago

I don't know this part of the code well, but AFAICT, in the Chromium code base we currently only have implementations of low-latency rendering buffers on two platforms: on macOS we use IOSurfaces, and on ChromeOS there is something called a native pixmap.

freddyrpina commented 7 years ago

whatever you're doing well you are fucking update data codeing designing and building websites you are dumb rrs

annevk commented 7 years ago

@freddyrpina I'm blocking you for violating our code of conduct.

junov commented 7 years ago

Alternate idea:

partial dictionary CanvasContextCreationAttributes {
  boolean singleBuffered = false;
};

I think this is self-explanatory. Do we like this better?
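For comparison, a creation sketch with this shape (again hypothetical, not shipped API):

// "singleBuffered" is the alternative creation attribute proposed above.
const ctx = new OffscreenCanvas(800, 600)
  .getContext("2d", { singleBuffered: true });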

wffurr commented 6 years ago

I have a drawing app that also allows panning and zooming. When drawing, I want as low latency as possible, ideally by writing directly to a single-buffered hardware overlay. When panning and zooming, I want atomic updates to prevent tearing. Can the behavior of a canvas be changed at run-time or only when created?

If it's only possible when created, is there a way to share GL resources (VBO, FBO, etc.) across contexts to enable switching from a single-buffered canvas to a composited canvas?

greggman commented 6 years ago

If I understand this correctly, I feel like it might be a mistake to do this.

My understanding is people want to be able to render directly to the screen like a native app can. I think that's a great goal. What I don't think is a great goal is discarding the rest of the browser. If I understand this proposal, you can't use any HTML with this API. Once you opt in, your app is 100% responsible for rendering everything.

I'd prefer a solution that doesn't throw away the rest of the platform.

Idea: you opt in to direct rendering, you get a callback to render, and the DOM is then rendered on top of your render. You're required to render every frame in this case. If there are no renderable elements in the DOM, you get the same result, but at least you don't have to throw away the entire platform to achieve the low latency. And of course, if the DOM is rendered on top, you can't call readPixels on the screen.
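A rough sketch of what that callback shape could look like; every name below is hypothetical, invented only to illustrate the idea:

// Hypothetical API, not part of any proposal: the page opts in to direct
// rendering and must paint every frame; the browser then composites the
// DOM on top of the result.
declare function requestDirectRendering(
  render: (ctx: OffscreenCanvasRenderingContext2D) => void
): void;

requestDirectRendering((ctx) => {
  // Required: produce the complete frame; nothing persists between frames.
  ctx.fillStyle = "white";
  ctx.fillRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  // ...app drawing goes here; readPixels on the final screen is off-limits.
});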