We discussed this today and decided the following: the device-pixel-border-box size is just another name for the border-box, but for the canvas element in a WebGL context it reports device pixels, since those can't be rounded. To keep the API clean, we're recommending removing this as a value and instead doing the right thing for canvas by reporting the device pixels alongside the CSS pixels under new properties.
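For illustration only, a rough sketch of what that could look like from script, assuming a hypothetical devicePixelBorderBoxSize entry field along the lines discussed later in this thread:

// Hypothetical: an entry reporting the device-pixel box alongside the CSS-pixel box.
const ro = new ResizeObserver(entries => {
  const entry = entries[0];
  console.log(entry.borderBoxSize);             // CSS pixels
  console.log(entry.devicePixelBorderBoxSize);  // device pixels (hypothetical)
});
ro.observe(document.querySelector('canvas'));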
The CSS Working Group just discussed Device pixel border box removal from spec.
The CSS Working Group just discussed ResizeObserver device-pixel-border-box.
I think this is proposing to make observable some details of the Chromium transforms implementation and how it interacts with painting that aren't specified anywhere. I'm not sure we want to make those things observable -- but if we do, we definitely need a spec for them so that other browsers can match them.
To summarize the very large thread above, the request was placed on @atotic to propose a new box with a better name and to describe how the resulting API shape would change, if at all.
In addition, there was discussion around when pixel snapping is done, which seems to vary to some extent between engines (pinch zoom, for example) and will need to be addressed (although not necessarily in the RO spec).
The problem of not being able to know exactly how many device pixels a given Canvas covers exists in all browsers, not just Chromium. All browsers' layout engines and compositors do some sort of rounding or snapping, and all of them do it subtly differently. For this reason it is not possible to create pixel-accurate canvas content. There are some longstanding bugs filed against WebGL on this topic:
https://github.com/KhronosGroup/WebGL/issues/2460 https://github.com/KhronosGroup/WebGL/issues/587
In the past the WebGL WG discussed adding properties to the canvas exposing this information, but doing so is sub-optimal. If the elements surrounding the canvas have changed, layout needs to occur in order to compute the values of these properties, so fetching them would force a synchronous layout and could induce infinite loops in applications. @grorg pointed this out years ago and there was no good solution.
ResizeObserver is a great innovation and a great place to put these additional observable properties. It allows the application to respond asynchronously, and 100% correctly, to resizing of the canvas. @atotic prototyped these new observable properties in Chromium and the results were excellent - they finally allowed perfectly pixel-accurate canvas content to be drawn.
Could we please reconsider adding device-pixel-border-box as an observable property? If refinements are needed we're happy to collaborate on them. Thanks.
@kenrussell I don't believe that's what @dbaron meant. Yes, pixel snapping occurs, and it will vary by browser since it isn't specified anywhere; neither is subpixel rounding. What David was getting at is when pixel snapping is done and whether it includes transforms. Chromium does not include transforms, so if the element has a position- or size-altering transform, the snapped dimensions may be incorrect: the transform results in different rounding that isn't observed by ResizeObserver, which primarily captures layout adjustments.
That said, while device-pixel-border-box is not currently in the spec, this issue is to work out how to go about adding it and what the tradeoffs are.
So to put it another way: are you ok with that tradeoff, being able to get device pixels in the general case, while some authors may not be able to if they apply certain transforms?
In particular, it sounds like, in Chromium, the pixels being snapped to are a function of what does or doesn't have a layer in the compositor... which means that this is exposing that, and thus likely causing some sites to depend, for correct behavior (not just for performance), on particular layerization. I think Gecko might pixel snap across rectilinear transforms (translate and scale only), whereas it sounds like Chromium doesn't.
Sorry for the delay replying.
It's fine with me personally - and I suspect for most web authors - if this API works most reliably and portably only if no size-altering or rotational transforms are applied to the canvas. That should be the case for applications that are really trying to render 1:1 to device pixels, since I doubt they will apply any transformations to the canvas.
I don't know Chromium's compositor and it would be good to get confirmation from someone on that team that this direction is OK with them. Not sure who on that team watches this repo but I see that @chrishtr can be CC'd; Chris, do you have any feedback or suggestions?
Perhaps we can further specify and standardize the behavior as we get experience with it, and as multiple browsers implement it? It would be great to start somewhere.
Re compositing: in Chromium, it is not the case that the pixel snapped size depends on compositing. It depends only on the local layout size of the element.
Re pixel snapping across transforms: Chromium pixel-snaps across rectilinear translations, but not scales. It also does not pixel-snap across CSS containment isolation. (ref: FragmentPaintPropertyTreeBuilder::UpdateForPaintOffsetTranslation in the code)
The pixel-sizing of the canvas depends on pixel snapping, but is not scaled up to include scales of ancestor transforms. This means that the device-pixel-border-box will change if certain ancestor transforms change. device-pixel-border-box also depends on paint offset generally.
I think we should just specify that if a ResizeObserver is configured to observe device-pixel-border-box, it needs to take into account all such sources of rounding and difference like I enumerated above. This is more expensive, and so should be an opt-in only for developers who need it.
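For concreteness, the opt-in would presumably happen at the observe() call site, along these lines (handleResize and canvasElement are placeholders; the box value is the one proposed in this thread):

// Opt-in: only observers that ask for the device-pixel box pay the extra cost.
const observer = new ResizeObserver(handleResize);
observer.observe(canvasElement, { box: 'device-pixel-border-box' });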
The WebGPU community group discussed this at length in a recent face-to-face meeting, with WebGL working group members also present. This ResizeObserver feature is strongly desired for both graphics APIs. People principally involved in the discussion were @atotic (Google), @grorg (Apple), @litherum (Apple), @jdashg (Mozilla) and @kenrussell (Google). Perhaps @Kangz (Google) can provide a link to meeting minutes.
The high-level takeaway is that there is general consensus among Safari, Firefox and Chrome that:
(Please correct me if I've misrepresented the conclusions of this discussion.)
Is this feedback sufficient to help move this proposal forward? @atotic pointed out during the meeting that ResizeObserver v0 is implemented in all of Safari, Firefox and Chrome now, and we would collectively love to try prototypes of device-pixel-border-box in all of these browsers.
WebKit's primary objection to this API is that in WebKit, device-pixel snapping is a paint-time operation and the snapped rectangle width is a function of the rectangle origin. So to provide the correct information to ResizeObserver (which fires before paint), we'd have to do a fake paint just to get the right device-pixel size, which we don't want to have to do.
It should be sufficient for ResizeObserver to be eventually consistent, so a frame of latency would be acceptable, though it sounds like the spec doesn't presently allow this behavior.
FWIW, I'm playing with a pixel-pre-snapping resize: canvas.style.width = (canvas.width / window.devicePixelRatio) + 'px', which appears to work well on at least Windows. It does, of course, depend on an assumed pixel-snapping behavior, which may be insufficient to work on all platforms. (But theoretically, the behavior seems sound for intuitive pixel snapping.)
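A minimal sketch of that pre-snapping idea as I read it, assuming the canvas has no border/padding and that a CSS size which is an exact multiple of 1/devicePixelRatio snaps back onto the intended number of device pixels:

// Choose the backbuffer size in device pixels, then derive a CSS size that
// should map back onto exactly that many device pixels.
const canvas = document.querySelector('canvas');
canvas.width = 1920;   // device pixels
canvas.height = 1080;
canvas.style.width = (canvas.width / window.devicePixelRatio) + 'px';
canvas.style.height = (canvas.height / window.devicePixelRatio) + 'px';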
device-pixel-border-box does satisfy this use-case, but it seems incredibly specific. Perhaps it needs to be because of layout rules I'm not familiar with, but @dholbert indicated that this should work in at least some cases. I'm following up with our Painting/Compositing people.
A related discussion on the WebGL repo: https://github.com/KhronosGroup/WebGL/issues/587
It should be sufficient for ResizeObserver to be eventually consistent, so a frame of latency would be acceptable, though it sounds like the spec doesn't presently allow this behavior.
Re "a frame of latency would be acceptable": are you saying you think that it's ok for the canvas to draw wrong for one frame and then fix itself? If so, I don't agree. Fixing this is the point of ResizeObserver after all..
FWIW, I'm playing with a pixel-pre-snapping resize: canvas.style.width = (canvas.width / window.devicePixelRatio) + 'px', which appears to work well on at least Windows. It does, of course, depend on an assumed pixel-snapping behavior, which may be insufficient to work on all platforms. (But theoretically, the behavior seems sound for intuitive pixel snapping.)
This does not work, because pixel snapping on browsers takes into account position as well as device pixel size. This is necessary in order to have consistent and high-quality rendering when a canvas is contained within layout boxes, and between layout phases of a box. Subpixel layout and pixel snapping are necessary and inherent parts of high-quality DOM rendering.
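To illustrate why position matters, here is a toy model (not any browser's actual algorithm) in which each edge is rounded independently to the nearest device pixel:

// Toy edge-snapping model: the snapped device-pixel width of a box depends on
// where its left edge falls, not just on its CSS width.
function snappedDeviceWidth(cssLeft, cssWidth, dpr) {
  const left = Math.round(cssLeft * dpr);
  const right = Math.round((cssLeft + cssWidth) * dpr);
  return right - left;
}
console.log(snappedDeviceWidth(0.0, 100.3, 1)); // 100
console.log(snappedDeviceWidth(0.4, 100.3, 1)); // 101 - same CSS width, different device size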
device-pixel-border-box does satisfy this use-case, but it seems incredibly specific. Perhaps it needs to be because of layout rules I'm not familiar with, but @dholbert indicated that this should work in at least some cases. I'm following up with our Painting/Compositing people.
I don't think this is incredibly specific at all. The reason canvas is special is that it's an immediate-model API that draws directly to a backing, as opposed to CSS+HTML rendering, which is mediated by the browser. Therefore the canvas needs to know exactly how big the backing is.
However, the canvas is not rendered in isolation, but rather is an immediate-mode rendering contained within a browser-mediated DOM that has subpixel rendering, pixel snapping, and render timing that is UA-controlled. For this reason, the developer needs a way for the canvas to participate in the rendering lifecycle, via an appropriate callback once the size of the canvas backing is known. The correct timing of this is once layout is complete and before/during paint.
This is very similar to the basic ResizeObserver use-case, except that the canvas needs to also observe the pixel snapping inputs to paint. Therefore, as Simon mentioned, "part of" paint needs to run, in order to compute paint offset, which is an input to the pixel snapping algorithm. I agree that this is more work, but it's necessary for high-quality rendering, and is not that hard to implement.
For reference here are @kenrussell's notes of the discussions in the WebGPU CG.
If part of paint needs to run, it's a tough pill to swallow to duplicate that work, which is why a frame of latency allowance helps.
Canvas cannot possibly stay 1-1 throughout a CSS Animation, so we know we're already working on an imperfect solution.
Ideally, a pre-snapped rect will be exactly N pixels tall regardless of offset, unless we round the edge y0 and y1, instead of y0 and height. Anything else will require spurious resizes, which not only do we not want in webgl, but we don't want in general web content painting. Is my intuition leading me astray here?
If part of paint needs to run, it's a tough pill to swallow to duplicate that work, which is why a frame of latency allowance helps.
Chromium's implementation does not duplicate any work. There is a pre-paint rendering lifecycle phase that is after layout and before paint, and among other things computes paint offset.
Canvas cannot possibly stay 1-1 throughout a CSS Animation, so we know we're already working on an imperfect solution.
A CSS animation of anything other than transform or opacity induces layout as necessary, which runs the ResizeObserver on every frame. Transforms, as mentioned by Ken above, are not taken into account for the device-pixel-border-box, so they are not a problem. Supporting transform animations without layout/ResizeObserver is one motivating reason for this rule, in addition to the ones Ken mentioned.
Ideally, a pre-snapped rect will be exactly N pixels tall regardless of offset, unless we round the edge y0 and y1, instead of y0 and height. Anything else will require spurious resizes, which not only do we not want in webgl, but we don't want in general web content painting. Is my intuition leading me astray here?
Suppose you have a canvas inside of a div of exactly the same CSS pixel size. The border of the div should match up exactly to the edge of the canvas. For this reason, the div and canvas have to have the same rounding algorithm.
As for why the rounding needs to depend on position: consider two sibling divs that have fractional widths and paint offsets and need to touch each other with no gap. In this case the rounding needs to be consistent, and the way to achieve that is to include subpixel accumulation (which in turn depends on fractional paint offset) in the rounding, so that they round the same direction. This is why the rounding depends on paint offset.
Can we pre-snap the position and the size? Alternatively, what about getDeviceRects?
Relying on observers for this seems likely to cause unexpected observer callback ordering issues.
Can we pre-snap the position and the size?
How would this work?
Alternatively, what about getDeviceRects?
Do you mean a new method exposed to script that returns these rects? The problem is that the canvas doesn't know when to ask, since it doesn't know when it might change due to layout invalidations.
@jdashg can you reply to @chrishtr 's questions?
Being able to observe this device-pixel-border-box size would categorically solve a longstanding rendering quality problem on the web platform. Since ResizeObserver is an asynchronous API, it avoids the problem where providing these attributes on the canvas element would force a synchronous layout.
In Safari, would it eventually be possible to implement this?
Is there any way we can reach consensus that this is a needed feature, and that ResizeObserver is a good place to expose it to applications?
Sure! As for presnap of position and size, I updated my testcase to pre-snap the top/bottom/left/right coords, and with the exception of what seems like a defect in odd-device-pixel handling on Chrome+Mac (as mentioned in KhronosGroup/WebGL#587), this approach works (as far as I can tell!) for ensuring on-demand 1-1 webgl backbuffer resizing on all desktop browsers today.
Behavior on Android still seems anomalous, but I have less interest in debugging that at the moment.
However, apps with unpredictable relayouts, which additionally don't redraw every frame, could rely on ResizeObserver for events instead of polling every rAF. It just doesn't seem to be a hard requirement for 1:1 WebGL rendering for most apps.
I have updated WebGL's wiki page on HighDPI best-practices to include this new approach based on existing primitives.
"a frame of latency would be acceptable": are you saying you think that it's ok for the canvas to draw wrong for one frame and then fix itself?
If part of paint needs to run, it's a tough pill to swallow to duplicate that work, which is why a frame of latency allowance helps.
Alternatively, what about getDeviceRects?
Only being able to obtain the pixel-perfect guaranteed size of a canvas asynchronously via an event does not strike me as a good enough solution. That will lead to all sorts of crazy patterns where one cannot properly specify the size of a canvas in one block of code, but must do a two-stage algorithm to change the size.
Further, the upcoming OffscreenCanvas specification may make such two-pass patterns even more difficult, since a Worker cannot know the CSS size of a canvas, so a GL app running in a Worker will already need to postMessage to get the CSS size; ultimately this may become a jungle of async CSS-size and device-pixel-size events ping-ponging just to get something as innocuous as setting up a canvas size right. Async events tend to require carefully auditing race conditions with other program flow, especially if user interactions are in the loop, which they often are (enter/exit fullscreen, resizing the browser window).
I'd very much be in favor of looking for a canvas.getDeviceRects() style of synchronous API that one could use and be done with it. If I understood the conversation above correctly, I find it odd that "the browser may need to relayout" was stated as a reason such a synchronous API could not be implemented - after all, the .getBoundingClientRect() API already exists and is not asynchronous either? (IIUC, calling .getBoundingClientRect() will require resolving a relayout if one is pending?)
Only being able to obtain the pixel-perfect guaranteed size of a canvas asynchronously via an event does not strike me as a good enough solution. That will lead to all sorts of crazy patterns where one cannot properly specify the size of a canvas in one block of code, but must do a two-stage algorithm to change the size.
Further, the upcoming OffscreenCanvas specification
It's not upcoming, this is already a supported feature in Chrome. :)
That being said, since an OffscreenCanvas in a worker is in a different thread, it is not possible to resize it perfectly in concert with the rest of the page, when resized. What should be done is that a resize observer for it should postMessage the resulting device pixel border box to the worker, causing a subsequent resize + redraw of the canvas texture.
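A rough sketch of that pattern on the main thread (devicePixelBorderBoxSize is the field proposed in this thread; 'render-worker.js' and its message handling are assumptions):

// Main thread: hand the canvas to a worker, then forward the observed
// device-pixel size to it whenever that size changes.
const canvas = document.querySelector('canvas');
const offscreen = canvas.transferControlToOffscreen();
const worker = new Worker('render-worker.js');
worker.postMessage({ canvas: offscreen }, [offscreen]);
new ResizeObserver(entries => {
  const size = entries[0].devicePixelBorderBoxSize;  // proposed field
  worker.postMessage({ width: size.width, height: size.height });
}).observe(canvas, { box: 'device-pixel-border-box' });
// The worker then sets the OffscreenCanvas's width/height from the message and redraws.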
If I understood the conversation above correctly, I find it odd that "the browser may need to relayout" was stated as a reason such a synchronous API could not be implemented - after all, the .getBoundingClientRect() API already exists and is not asynchronous either? (IIUC, calling .getBoundingClientRect() will require resolving a relayout if one is pending?)
The problems with such an API are: (a) there is no time the developer can call it and be sure that it's right - getDeviceRects() would clean layout, but it would clean a layout that is not actually what is drawn on screen - and (b) there is no method other than polling to know when to check whether it might have changed.
An example of (a) being hard is that the inputs to layout may change after the call to getDeviceRects(), but before javascript yields to rendering:
canvas.getDeviceRects(); ... divContainingCanvas.style.width = '200px'; .. yield to rendering
In a large, multi-widget application such patterns are common.
OffscreenCanvas is "upcoming" until multiple implementations support it. Until then, it's premature to call it a finished spec, regardless of spec document status. (A web that only works in Chrome is not the web)
Likewise, the effort here is to cooperate on a direction amongst the implementations.
I disagree with your characterization of (a), as the same issue can happen with the resize observer. (b) is accurate, absent a window.devicePixelRatio observer.
However, the ergonomics of an explicit poll are much better for most apps, in particular much easier to integrate into existing codebases. If an app relayouts after getting a rect, that's on them, just as it's on them when they relayout after canvas resize today. "Move your resize into a ResizeObserver event" sounds good but in practice tends to be a pain. As echoed by @juj, non-trivial apps have a preference for a simpler more explicit API here.
As such, explicit poll is my preference, even if we also add an event observer. Requiring use of the event observer seems worse. (Additionally, the explicit poll is easier to implement, and thus easier to ship quickly)
@juj @jdashg a polling API introduces all sorts of problems like forcing a layout, as @chrishtr has shown, which can easily cause major performance and correctness issues in applications. @grorg also raised this issue early on while the WebGL working group was considering adding new properties to the canvas element providing the size in device pixels.
ResizeObserver has solved this problem elegantly - the ability to observe resizes of individual HTML elements, which has been a longstanding missing feature on the web.
Given the problems with adding synchronous polling APIs for this functionality, can you support adding this functionality to ResizeObserver? I don't think there's any application which resizes its elements on a continuous basis - it would be too jarring for the user. The scenario that needs to be addressed is getting accurate, steady-state measurement of the canvas's size in device pixels, and adding this device pixel box size to ResizeObserver will do it. I think that in both existing JS code bases as well as WebAssembly compiled applications it should be feasible to integrate the ResizeObserver-based solution. For the entire time OpenGL has been available as an API, GLUT's callback-based glutReshapeFunc has been the way to respond to window resizing in OpenGL applications.
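The analogy carries over fairly directly; a sketch of what the ResizeObserver equivalent of a GLUT reshape callback could look like (onReshape and myCanvas are placeholders, and devicePixelBorderBoxSize is the field proposed in this thread):

// GLUT: glutReshapeFunc(onReshape) invokes onReshape(width, height) on resize.
// The web equivalent under this proposal:
function onReshape(width, height) {
  myCanvas.width = width;
  myCanvas.height = height;
  // update viewport/projection and re-render here
}
new ResizeObserver(entries => {
  const size = entries[0].devicePixelBorderBoxSize;
  onReshape(size.width, size.height);
}).observe(myCanvas, { box: 'device-pixel-border-box' });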
If we can simply reach agreement that this small addition is an acceptable step forward, then we can collectively make a lot of progress with client applications. Thanks for considering this.
OffscreenCanvas is "upcoming" until multiple implementations support it. Until then, it's premature to call it a finished spec, regardless of spec document status. (A web that only works in Chrome is not the web)
Ok, sure.
Likewise, the effort here is to cooperate on a direction amongst the implementations.
Yes, of course. Agreed.
I disagree with your characterization of (a), as the same issue can happen with the resize observer.
Can it? Is there an example in this thread that I missed? (If so, apologies.)
I agree that transform animations can't do it, but that is why I think the rect needs to not include transform.
An example of (a) being hard is that the inputs to layout may change after the call to getDeviceRects(), but before javascript yields to rendering:
canvas.getDeviceRects(); ... divContainingCanvas.style.width = '200px';
In this example, the problem is not that the .getDeviceRects() API exists, but the second line: the developer changes the size after querying the device pixel size. (What would the developer expect to happen anyway?) By the same rationale, one could claim that having e.g. a synchronous String.length API is bad since the developer can change the string afterwards, leading String.length to return the wrong value. The only way to fix that would be to move to purely functional programming.
Having a resize event observer is fine and great - being able to get events when a size changes is much nicer than polling, and I don't argue against that - but I don't see why that would prevent having a .getDeviceRects() API?
@juj @jdashg a polling API introduces all sorts of problems like forcing a layout, as @chrishtr has shown, which can easily cause major performance and correctness issues in applications.
I see forcing a layout as the correct thing to do, not a problem. In 99% of applications with this canvas use case, by the time the WebGL context is being initialized the DOM has already been set up and the DOM content is static. Relayouting would not do anything in that case, and even if it had to relayout right there on the spot, I don't see correctness issues with it. If the API is forced to be async, it will be the async nature that causes performance and correctness issues. If I have code e.g. like
var canvas = document.createElement('canvas');
canvas.style.width = '100%';
canvas.style.height = '100%';
document.body.appendChild(canvas);
var deviceSize = canvas.getDeviceRects();
canvas.width = deviceSize.width;
canvas.height = deviceSize.height;
var ctx = canvas.getContext('webgl');
requestAnimationFrame(...);
there does not really exist a more performant, correct, unambiguous nor self-documenting way to initialize the rendering backbuffer size and the GL context than the above code. It has a good guarantee that no badly sized temp GPU backbuffer resources will be allocated. Having a separate resize observer event to allow reacting to when user resizes the page or similar is then great as well (is the existing 'resize' event not already usable for that purpose?)
If we need an async event to get the proper size, we will end up with tons of code out there that first initializes the canvas with a framebuffer of size 1920x1079 or 1921x1080 or something random and off-by-one like that, then renders one or more(?) frames at the wrong size, then gets the event, then fixes up the framebuffer to the proper 1920x1080 size. That will have worse performance than the code example above. Or we place an implicit requirement on codebases that initializing a GL context on a canvas is practically an async operation if you don't want temporary memory pressure or silly temporary init work, which will lead to more complicated init sequences and correctness bugs from developers.
Also, e.g., transitioning to fullscreen would become an asynchronous operation, where after calling .requestFullscreen() (in response to user input) one would either have to pause rendering until the resize event is received, or render with an incorrect backbuffer size for an unspecified number of frames, leading to possibly glitched visuals.
I don't think allocating these kind of temp framebuffers on the GPU would be exactly free memory-wise either, and GPU memory intensive applications might run into GPU OOMs because they allocated temporary off-by-one framebuffers on the GPU.
In this example, the problem is not that the .getDeviceRects() API exists, but the second line: the developer changes the size after querying the device pixel size. (What would the developer expect to happen anyway?) By the same rationale, one could claim that having e.g. a synchronous String.length API is bad since the developer can change the string afterwards, leading String.length to return the wrong value. The only way to fix that would be to move to purely functional programming.
I think the String analogy you gave is not all that useful in this case. The difference is that the DOM is laid out according to a global constraint algorithm that has many non-local effects that developers cannot feasibly predict in all cases. Even for the case of changes to the constraints that are introduced by the developer, in a large and dynamic web application it's very difficult to predict every single case that a size or offset of an element has changed. This is extra true because pixel snapping is (intentionally) not spelled out in the web specs.
In addition, there are cases where layout occurs even without the developer doing anything. Examples include the user resizing the page or changing zoom factors (you mentioned the former in your earlier comment, I know). Another example is a canvas embedded in a cross-origin iframe that has no idea what its containing frame might do, because that frame is controlled by a third party.
Having a resize event observer is fine and great - being able to get events when a size changes is much nicer than polling, and I don't argue against that - but I don't see why that would prevent having a .getDeviceRects() API?
It's because the return value of getDeviceRects can be incorrect and misleading. There is also the problem of deciding whether to return a stale value from the prior frame, or an "up to date" (but not really) value that forces layout. The former doesn't work in cases where the canvas has just been resized since the last frame, and the latter doesn't work if something happens after the call to getDeviceRects and before the next render to the screen. In addition, the forced layout caused by the latter approach is bad in and of itself, because forced layouts interfere with efficient pipelining of the rendering system (multi-threading, for example). And finally, polling is required on top of getDeviceRects, which is fundamentally worse than an observer that is called only when things actually change.
I see forcing a layout as the correct thing to do, not a problem. In 99% of applications with this canvas use case, by the time the WebGL context is being initialized the DOM has already been set up and the DOM content is static.
As I mentioned above, I don't think this is accurate. Most web pages are quite dynamic these days, and even for cases where they are not, the user can cause layout changes.
there does not really exist a more performant, correct, unambiguous nor self-documenting way to initialize the rendering backbuffer size and the GL context than the above code.
Code like the above is what sites already do, except for the getDeviceRects part. Instead they take the CSS sizing and multiply by devicePixelRatio. For sites that want to always be exactly aligned to device pixels no matter what, they can use a ResizeObserver to adjust when needed.
It has a good guarantee that no badly sized temp GPU backbuffer resources will be allocated. Having a separate resize observer event to allow reacting to when user resizes the page or similar is then great as well (is the existing 'resize' event not already usable for that purpose?)
I agree that avoiding re-allocating GPU buffers is a good thing. Developers can reduce this by allocating the canvas buffer in a ResizeObserver at start, and only re-allocating it in a ResizeObserver subsequently.
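A small sketch of that allocate-once / reallocate-only-on-change pattern (canvas refers to the element in question; devicePixelBorderBoxSize is the field proposed in this thread):

// Reallocate the canvas backbuffer only when the observed device-pixel size
// actually changes; all other frames keep reusing the existing buffer.
let lastWidth = 0, lastHeight = 0;
new ResizeObserver(entries => {
  const size = entries[0].devicePixelBorderBoxSize;
  if (size.width !== lastWidth || size.height !== lastHeight) {
    lastWidth = canvas.width = size.width;    // assigning width/height reallocates
    lastHeight = canvas.height = size.height;
  }
}).observe(canvas, { box: 'device-pixel-border-box' });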
ResizeObserver is for sure not a fully "direct" API, but it's the consequence of embedding an immediate-mode API like Canvas into the retained mode + inversion-of-control system that is HTML+CSS. Developers must do a certain amount (and really not all that much in this case) of work to adapt to this paradigm vs a fully immediate-mode they may be used to from prior experience.
Talked with Ken a bit more about ResizeObserver, and it sounds like it can accommodate the use case that we have at Unity.
One question I have is what proper WebGL context creation now looks like. I.e.
Developers must do a certain amount (and really not all that much in this case) of work to adapt to this paradigm vs a fully immediate-mode they may be used to from prior experience.
so is the new proper paradigm to create a WebGL context
<html>
<body>
<canvas id='canvas' style='width: 100%; height: 100%'></canvas>
<script>
  var canvas = document.getElementById('canvas');
  new ResizeObserver(entries => {
    var size = entries[0].devicePixelBorderBoxSize;
    canvas.width = size.width;
    canvas.height = size.height;
    var ctx = canvas.getContext('webgl');
    // init game engine/game/app logic here:
    initRendering(ctx);
  }).observe(canvas, { box: 'device-pixel-border-box' });
</script>
</body>
</html>
does the above work?
Another question was how about if the JS code needs to be loaded dynamically? E.g. applying the above to the current structure of Emscripten compiled WebAssembly applications would look like:
<html>
<body>
<canvas id='canvas' style='width: 100%; height: 100%'></canvas>
<script>
  function downloadScript(url) {
    return new Promise((ok, err) => {
      var s = document.createElement('script');
      s.src = url;
      s.onload = () => { ok(); };
      document.body.appendChild(s);
    });
  }
  downloadScript('page.js');
</script>
</body>
</html>

page.js:

var canvas = document.getElementById('canvas');
new ResizeObserver(entries => {
  var size = entries[0].devicePixelBorderBoxSize;
  canvas.width = size.width;
  canvas.height = size.height;
  var ctx = canvas.getContext('webgl');
  // init game engine/game/app logic here:
  initRendering(ctx);
}).observe(canvas, { box: 'device-pixel-border-box' });
Does that work? I.e. will observing the resize cause an observe event to be fired on the next frame?
Or does the user need to dynamically create the canvas element with

var canvas = document.createElement('canvas');
canvas.style = 'width: 100%; height: 100%;';
new ResizeObserver(......).observe(canvas);
document.body.appendChild(canvas);

in order for the resize event to be observable for the first time?
<html>
<body>
<canvas id='canvas' style='width: 100%; height: 100%'></canvas>
<script>
  var canvas = document.getElementById('canvas');
  new ResizeObserver(entries => {
    var size = entries[0].devicePixelBorderBoxSize;
    canvas.width = size.width;
    canvas.height = size.height;
    var ctx = canvas.getContext('webgl');
    // init game engine/game/app logic here:
    initRendering(ctx);
  }).observe(canvas, { box: 'device-pixel-border-box' });
</script>
</body>
</html>
does the above work?
The game initialization logic should be factored out from the ResizeObserver, or at least, check to see if it's already been run before running it again. Ideally only resizing of the back buffer would be handled in the ResizeObserver's callback.
If this approach is documented, then in the same place it also needs to be documented that apps might want to run at lower resolution and could observe a different box via ResizeObserver for that purpose.
Another question was how about if the JS code needs to be loaded dynamically?
...
Does that work? I.e. will observing the resize cause an observe event to be fired on the next frame?
That's a good question. Per these docs: https://drafts.csswg.org/resize-observer-1/ and https://developers.google.com/web/updates/2016/10/resizeobserver, it looks like the answer is yes: if observation starts, the element is being rendered, and its size is greater than (0,0), the observer will fire. So the canvas can be placed in the page rather than needing to be dynamically created.
It seems to me that ResizeObserver is more or less polyfillable given some reliable getBoundingDeviceRect(). (which I've previously largely implemented on top of getBoundingClientRect and devicePixelRatio)
In particular, there doesn't seem to be a huge difference logically between using ResizeObserver and calling getBoundingClientRect at the top of rAF, given that to be useful in the same frame, ResizeObserver must fire /before/ rAF.
Is there a logical difference I'm not seeing?
In particular, there doesn't seem to be a huge difference logically between using ResizeObserver and calling getBoundingClientRect at the top of rAF, given that to be useful in the same frame, ResizeObserver must fire /before/ rAF.
ResizeObserver callbacks are not called before rAF callbacks. They are called after all rAFs are done, after layout, and before drawing to the screen. If the ResizeObserver callback mutates DOM state, then layout is re-computed, after which ResizeObserver callbacks are called again. This may happen multiple times (though the number of times is bounded by the DOM depth).
Because ResizeObserver is called after layout and before drawing, it has the opportunity to do things like resizing canvases, without any lost frames, no matter what the layout change was that resulted in the need for canvas resize.
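An informal way to see this ordering for yourself (not normative; the exact interleaving is what the ResizeObserver spec defines):

// In a frame where a notification is delivered, the expected order is:
// rAF callbacks -> layout -> ResizeObserver callbacks -> paint.
requestAnimationFrame(() => console.log('rAF callback'));
new ResizeObserver(() => console.log('ResizeObserver callback (after layout, before paint)'))
  .observe(document.body);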
If ResizeObserver is called after RAF, then there's a frame of latency in the WebGL case we've discussed. Is this intentional?
I’ve added this issue to the agenda of the next CSSWG call and also to TPAC.
If ResizeObserver is called after RAF, then there's a frame of latency in the WebGL case we've discussed. Is this intentional?
There is not an extra frame of latency. It's true that if a WebGL app updates its canvas rendering state in a rAF callback, and a ResizeObserver fired in the same frame then causes the canvas to resize, the WebGL app may need to update its state right then and there, but it can do so. And it will be displayed to the screen in the same frame as the rAF callback.
There's simply not time to re-render a webgl frame in response to a late resize observer event.
For the purposes of a preponderance of WebGL content, if the resize event comes after rAF - if it comes at all - that's a frame of latency.
As such I don't see a robust approach here for WebGL's needs that isn't getBoundingDeviceRects.
I can join the next CSSWG call.
There's simply not time to re-render a webgl frame in response to a late resize observer event.
For sure there is less time before the next vsync.
For the purposes of a preponderance of WebGL content, if the resize event comes after rAF - if it comes at all - that's a frame of latency.
Yes, the frame is delayed, and if it misses vsync there is an extra frame’s delay. I thought you were talking about wrong frames, not delays to a frame.
With a ResizeObserver approach, on the frames where the canvas happens to resize (which are rare), two WebGL rendering updates will occur and the frame will take longer to draw to the screen. However, all other frames are unaffected. And on top of that, there will never be a wrong frame (defined as a frame where the WebGL drawing does not properly match the sizing of the canvas element).
With a getBoundingClientRect * devicePixelRatio approach, or even if getBoundingDeviceRects were added, in simple situations you can indeed call getBoundingDeviceRects during a rAF callback and resize & re-draw the canvas. However, wrong frames are impossible to avoid in general, because of situations like:
I do not think these situations are uncommon in complex apps. When encountered, the only way to avoid them is to try to coordinate all activity on the page, leading to lots of complexity for the developer. Or in many cases, developers will just accept a worse web app, and the web is worse for it. ResizeObserver exists to solve this exact problem, in a decentralized way that is pretty easy to use.
Further, if the canvas is quiescent and not updating its drawing, there is no way to detect when it resizes except polling or adding a new resize event callback, which suffer from the problems above as well. ResizeObserver also solves this problem with no additional work for the developer.
I can join the next CSSWG call.
Great, thank you.
The CSS Working Group just discussed device-pixel-border-box size.
One note is that I think (although being sure requires executing the entire ResizeObserver spec in my head) calling ResizeObserver.observe() on the canvas element in question on every requestAnimationFrame should lead to a reliable ResizeObserver notification every cycle, whether the canvas has resized or not. That might be a path forward for @jdashg's use case.
One note is that I think (although being sure requires executing the entire ResizeObserver spec in my head) calling ResizeObserver.observe() on the canvas element in question on every requestAnimationFrame should lead to a reliable ResizeObserver notification every cycle, whether the canvas has resized or not. That might be a path forward for @jdashg's use case.
Just to repeat what I said in the call also: that would indeed work and be fine, as long as you could observe the device-pixel border box of the canvas. Implementation for the developer would be:
function observerCallback(entry, observer) {
  let devicePixelBorderBoxRect = entry[0].devicePixelBorderBox;
  render(devicePixelBorderBoxRect); // WebGL rendering
  observer.unobserve(myCanvas);
  observer.observe(myCanvas, {box: 'device-pixel-border-box'});
}

var observer = new ResizeObserver(observerCallback);
observer.observe(myCanvas, {box: 'device-pixel-border-box'});
@jdashg would this meet your needs?
@christr: I think you meant
function observerCallback(entry, observer) {
  let devicePixelBorderBoxRect = entry[0].devicePixelBorderBox;
  render(devicePixelBorderBoxRect); // WebGL rendering
}

function rafCallback() {
  observer.unobserve(myCanvas);
  observer.observe(myCanvas, {box: 'device-pixel-border-box'});
}

var observer = new ResizeObserver(observerCallback);
document.requestAnimationFrame(rafCallback);
@dbaron that is very clever!
Yes that would work (@chrishtr btw :))
One typo about requestAnimationFrame though, new code below:
function observerCallback(entry, observer) {
  let devicePixelBorderBoxRect = entry[0].devicePixelBorderBox;
  render(devicePixelBorderBoxRect); // WebGL rendering
}

function rafCallback() {
  observer.unobserve(myCanvas);
  observer.observe(myCanvas, {box: 'device-pixel-border-box'});
  requestAnimationFrame(rafCallback);
}

var observer = new ResizeObserver(observerCallback);
requestAnimationFrame(rafCallback);
function rafCallback() {
  observer.unobserve(myCanvas);
  observer.observe(myCanvas, {box: 'device-pixel-border-box'});
  requestAnimationFrame(rafCallback);
}
This feels like an abuse of the observer model, and I think it also creates ongoing garbage-collection pressure from allocating {box: 'device-pixel-border-box'} each frame. (Wasm applications strive to run garbage-free to avoid JS performance issues.) :/
How do browsers feel about rendering always taking place outside the rAF function?
Does the following achieve the same?
function observeCanvasSizeChange(canvas) {
  function observerCallback(entry, observer) {
    let devicePixelBorderBoxRect = entry[0].devicePixelBorderBox;
    canvas.deviceWidth = devicePixelBorderBoxRect.width;
    canvas.deviceHeight = devicePixelBorderBoxRect.height;
  }
  var observer = new ResizeObserver(observerCallback);
  observer.observe(canvas, {box: 'device-pixel-border-box'});
}

function rafCallback() {
  render(myCanvas.deviceWidth, myCanvas.deviceHeight);
  requestAnimationFrame(rafCallback);
}

observeCanvasSizeChange(myCanvas);
requestAnimationFrame(rafCallback);
or does that also have the problem that rendering may occur at the wrong (unsynchronized) size, e.g. under a continuous DOM size animation?
This feels like an abuse of the observer model,
Maybe, maybe not. The example script I gave, which fleshed out the idea @dbaron suggested, shows that there is a way to avoid the double-WebGL-render problem @jdashg raised in situations where a resize occurs. And if it turns out to be a very useful mode, we could formalize that in a new API.
Regarding GC: if it turns out to be a real problem for a WASM app of the future, it can definitely be solved pretty easily with an API tweak.
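Even without an API change, the per-frame dictionary allocation in the snippet above can be avoided by reusing a single options object (a usage-level tweak, not a spec change):

// Reuse one options dictionary instead of allocating a new one every frame.
const roOptions = { box: 'device-pixel-border-box' };
function rafCallback() {
  observer.unobserve(myCanvas);
  observer.observe(myCanvas, roOptions);
  requestAnimationFrame(rafCallback);
}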
How do browsers feel about rendering always taking place outside the rAF function?
There is nothing at all that forces developers to render within a rAF. In fact, many popular frameworks, such as React, don't actually render during rAF, in part to have more time to render than rAF allows, as rAF often tries to fire as late as it can to minimize latency to the screen.
As the Chromium rendering lead, I would say my view is that "do all rendering within rAF" is not a feasible solution for many cases, especially in complex apps, due to the issue of wasting CPU time being idle. (*)
or does that also have the problem that rendering may occur at the wrong (unsynchronized) size, e.g. under a continuous DOM size animation?
It has several problems; one of them is the wrong-size issue you mentioned. There is also the issue of needing to poll (or constantly rAF, which amounts to the same thing) just in case sizes change, and also that extra forced layouts can occur, because deviceWidth would force layout, and other rAF callbacks may dirty layout also. These are mentioned in more detail in comments above in this issue.
(*) Aside that may be useful:
To address the framework use-case mentioned above, postAnimationCallback is a new callback being proposed that is intended to be run after rendering is complete, so that reading back layout is more likely to be free, and to support the use cases of starting rendering of the next frame as soon as possible.
However, this callback happens "post-commit", which means that the rendering display list has already been sent to the browser for display to the screen. Doing it post-commit is important to avoid postAnimationCallback ending up being the same as ResizeObserver, and thereby accidentally delaying frames.
Sorry to butt in here, but I just wanted to point out that the HTML snippet above is probably not the best practice for a full-window canvas.
If you want a canvas to fill its container, in this case the <body>, then you should set the canvas to display: block. Canvas defaults to display: inline, which ends up adding extra space at the end; that is the reason people often get a scrollbar, and then they end up hacking around their mistake by adding overflow: hidden, etc...
As well, if you're using 100% and you don't set the size of the body and html, you won't get 100% height. Example
These are two recommended ways to get a full-page canvas:
<html>
<style>
  body { margin: 0; }
  #canvas { width: 100vw; height: 100vh; display: block; }
</style>
<body>
  <canvas id="canvas"></canvas>
</body>
</html>
This will give you a fullscreen canvas with no scrollbar on all desktop browsers.
The problem comes in on mobile, where you have to decide what you want "full page" to mean. Both Chrome and Safari (and Firefox?) decided that, to deal with mobile browsers hiding and showing the address bar and/or other UI, 100vh = the size when the UI is at its smallest, and 100% = the size actually displayed (smaller when the UI is visible, larger when not).
To get that to work cross-browser you have to do this:
<html>
<style>
  html, body { height: 100%; }
  body { margin: 0; }
  #canvas { width: 100%; height: 100%; display: block; }
</style>
<body>
  <canvas id="canvas"></canvas>
</body>
</html>
Which is more correct depends on the use case. If the canvas is some background element over which content scrolls, then you probably want 100vh. If the canvas is just supposed to fill the screen, then you probably want 100%.
The CSS Working Group just discussed ResizeObserver Device Pixel Border Box.
My manual testing demo page: https://jdashg.github.io/misc/webgl/device-pixel-tester.html
device-pixel-border-box size
device-pixel-border-box size is Element's border-box size in device pixels. It is always an integer, as there are no fractional device pixels. It can currently be approximated by Math.round(borderBoxSize * window.devicePixelRatio), but it cannot be computed exactly, because native code uses a different rounding algorithm.

Use case
This unusual size request comes from Chrome's WebGL canvas team. It solves the long standing WebGL developers problem: "How to create HiDPI canvas without moire pattern?".
The existing "best practice" for creating a HiDPI canvas is to set size of canvas context to a multiple of canvas's css width/height. Example:
The WebGL context will be HiDPI, and one canvas pixel should correspond to one device pixel. But because ctx.width is pixel snapped, ctx.width can differ from the "true" device pixel width. This difference can cause visible moire patterns when rendered.
Because of this, WebGL team believes that web platform needs to provide an API for true device pixel width.
Discussion
This size has several interesting differences from others reported by ResizeObserver:
Q: Does this size belong in ResizeObserver, or should we create a different DOM API for it?
I can't think of a clean API that would provide the same functionality. Web developers must observe this size and respond to its changes, and ResizeObserver is the only size-observing API. Observing border-box size and providing a "devicePixelSize()" method would not work, because devicePixelSize could change without the border-box changing.
Q: Should we observe device-pixel-size on all elements, or just canvas?
Observing device-pixel-size comes with a performance cost, because the size must be checked when the Element's position changes. For all other sizes, we do not need to check the size when the position changes. Weak preference: only allow device-pixel-size observation for canvas.
Q: Should we report device-pixel-size on all elements, or just canvas?
Weak preference: make it canvas-only, because other elements cannot observe this size.
Originally posted by @atotic in https://github.com/w3c/csswg-drafts/issues/3326#issuecomment-440041374