rib opened this issue 2 years ago
Hm, I wonder what the problem with `inner_size` is right now? It does return the size in physical pixels (not logical ones, so all the scaling is already accounted for). And that `inner_size` is the inner size that is intended to be used by the renderer, so it doesn't include decorations, etc.
This is also what is being used by downstream consumers.
An example where the conflation of inner and physical size is problematic is on Android, where we want to let downstream APIs like Bevy know what size they should allocate render surfaces at, but we also want to let downstream users (e.g. also Bevy, or any UI toolkit such as egui) know what the safe inner bounds are for rendering content that won't be obscured by things like camera notches or system toolbars.
The inner size is the size you should be using for drawing. What you want is a `safe_area` method on the window, so users will know the offset they should apply within their buffer? Also, the safe area should be related to the frame, etc.
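To make the proposal concrete, here is a minimal sketch of how a consumer might use a hypothetical `safe_area` result. The `Insets` type, its field names, and `safe_rect` are illustrative assumptions, not existing winit API.

```rust
// Hypothetical sketch: a safe-area inset type and how a consumer could
// derive the drawable "safe" rectangle from the full surface size.
// `Insets` and `safe_rect` are assumptions for illustration, not winit API.

#[derive(Debug, Clone, Copy, PartialEq)]
struct Insets {
    top: u32,
    left: u32,
    bottom: u32,
    right: u32,
}

/// Given the full surface size and the insets reported by a hypothetical
/// `safe_area()` call, compute the origin and size of the safe region.
fn safe_rect(width: u32, height: u32, insets: Insets) -> (u32, u32, u32, u32) {
    let w = width.saturating_sub(insets.left + insets.right);
    let h = height.saturating_sub(insets.top + insets.bottom);
    (insets.left, insets.top, w, h)
}

fn main() {
    // E.g. a phone surface of 1080x2340 with a 100 px camera-notch inset on top.
    let insets = Insets { top: 100, left: 0, bottom: 0, right: 0 };
    let (x, y, w, h) = safe_rect(1080, 2340, insets);
    println!("safe rect: origin ({x}, {y}), size {w}x{h}");
}
```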
The main problem is that "inner_size" currently has two different usages (physical and safe size), and yeah there's some potential for splitting the other way like you're suggesting (adding a new api for the safe area).
I'm not sure it's as clear cut as you suggest though...
For example on iOS the inner_size right now is the safe_area not the physical size.
It was also suggested in https://github.com/rust-windowing/winit/discussions/2235 that the inner size could represent the safe area on Android.
I think it's fair to say that intuitively, just based on the vocabulary used, 'inner size' could reasonably be expected to return the safe size, perhaps more so than the physical size. The vocabulary right now doesn't seem like a good match for reporting the physical size (and in fact it doesn't consistently report the physical size)
One other technical reason to consider splitting it the other way (e.g. add a `physical_size` API instead of adding a `safe_size` API) is that the existing `inner_size` API already has an associated `inner_position`, which logically makes sense for a safe area, whereas a physical size doesn't logically have a position.
Hope that clarifies my thinking a bit.
I agree that `inner_size` has been conflated to mean two things, and we should change that.
I'm not sure the better name is "physical size"; winit already uses that in `dpi::PhysicalSize`. Maybe "content size", "surface size" or "drawable size", to mimic what the underlying OSes call it?
What is expected of the `set_inner_size` function? I would assume that's also modifying the "physical size", and should be renamed accordingly?

`set_inner_size` should set the drawable surface size; the same applies for `inner_size`.
Right, another name might help avoid confusion with the existing "PhysicalSize" APIs. Physical size was my first preference since I think that's what I'd want to call it if there was no conflict - and Bevy was at least one example that also seems to show they prefer the term "physical size" for this.
"surface size" might be a good alternative though, since I'm generally talking about the size that would be used to create a rendering surface, such as an EGL/GLX/WGPU surface.
"content" size could be confused with the "inner" size I think - i.e. that sounds like the area where the application will draw which might be smaller than the surface size if there's a safe area.
This is how I could see the terms being defined:
Any API to change the outer or inner size is implicitly going to have to also resize the surface size, and the exact relationship between the surface size and inner/outer sizes may be backend specific.
I'm fine with that naming scheme, especially if we add a small section in the docs where we specify this terminology.
Just to note here, I did a fairly sweeping edit of the original issue to refer to "surface size" instead of "physical size", since I think there was some agreement that would be better terminology (and would hopefully avoid confusion with the `PhysicalSize` type).
I think `inner_size` must return the actual size of the buffer you should create. So for Android it should be the entire screen, including the safe area. The same API would apply on macOS, for example, where they also have notches.
However, we must add a `safe_area` call to report the safe drawing area for the window, describing the notches and such, or the occluded area. This information could serve as a hint to e.g. offset the window viewport or draw into the buffer with an offset.
Wayland could also get a protocol for that, given that such stuff is exposed in EDID, IIRC, so compositors could deliver it.
How does it sound @rib?
I think if the 'inner' size effectively becomes the size for surfaces then it ends up as being a misleading / inappropriate name.
I believe "inner" in the current API name was originally intended to mean that it was the size of the window inside of the frame. That's still a useful thing to be able to query, but it varies across window systems whether that's related to the size of the surface.
I tried to distinguish these concepts in the table, but maybe the table is over complicated / unclear, I'm not sure.
If there's an API that would be specifically documented to provide the size that surfaces can be allocated at, why would you want to call it the "inner_size" ?
I'd suggest renaming `outer_size` to `window_size`, and `inner_size` to `surface_size`, and adding a `safe_area`. What do you think?
I think it doesn't make sense to include `Occluded` in this.
The "surface size" is unlikely to change often, but `Occluded` will. If users are supposed to use "surface size" to draw, changes to it might warrant recreating buffers and re-configuring the surface, which is computationally expensive. This is how `wgpu` users currently handle `Resized`. `Occluded`, on the other hand, is temporary, and users most likely won't change much except not drawing in the occluded area.
So I think these two concepts deserve to be differentiated.
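The distinction argued here can be sketched as a tiny event handler: a size change triggers the expensive reconfigure path, while occlusion only toggles whether we draw. The `Event` enum and `Renderer` below are simplified stand-ins, not winit or wgpu types.

```rust
// Sketch: resize forces expensive surface re-configuration; occlusion only
// gates drawing. Simplified stand-in types, not winit/wgpu API.

enum Event {
    Resized { width: u32, height: u32 },
    Occluded(bool),
    RedrawRequested,
}

struct Renderer {
    size: (u32, u32),
    occluded: bool,
    reconfigure_count: u32, // how often we paid the expensive path
    frames_drawn: u32,
}

impl Renderer {
    fn handle(&mut self, event: Event) {
        match event {
            Event::Resized { width, height } => {
                if (width, height) != self.size {
                    self.size = (width, height);
                    // Recreating swapchain buffers is the expensive part.
                    self.reconfigure_count += 1;
                }
            }
            // Cheap: just remember the flag, keep all buffers alive.
            Event::Occluded(o) => self.occluded = o,
            Event::RedrawRequested => {
                if !self.occluded {
                    self.frames_drawn += 1;
                }
            }
        }
    }
}

fn main() {
    let mut r = Renderer { size: (800, 600), occluded: false, reconfigure_count: 0, frames_drawn: 0 };
    r.handle(Event::Resized { width: 1024, height: 768 }); // reconfigure once
    r.handle(Event::Occluded(true));
    r.handle(Event::RedrawRequested); // skipped: occluded
    r.handle(Event::Occluded(false));
    r.handle(Event::RedrawRequested); // drawn
    println!("reconfigures: {}, frames: {}", r.reconfigure_count, r.frames_drawn);
}
```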
Overall the proposal looks good to me.
Web is missing in the OP; it could be handled by `env(safe-area-inset-x)`, but I'm not sure how I feel about that without https://github.com/rust-windowing/winit/issues/696.
@daxpedda the occlusion here was a bit confusing; it's mostly about notches due to hardware (read: MacBooks or phone cameras). It's commonly called a safe area though (the area where you can draw and your content won't be obscured by hardware limitations).
I'd have to look though what usually such APIs expose.
Also, maybe we should call it `view_size` instead of inner size? Because `surface_size` could be confused with Vulkan/EGL surfaces, while it's not them.
I like it!
My current preference would be: `window_size` > `view_size` > `safe_size`.
I think it should be `safe_area`, since the area could have gaps. I'm not sure how to even expose it yet; I have to look into the APIs we have.

If we can expose it, I would also be in favor of calling it `safe_area` instead of `safe_size`.
I can say at least on Web it's not possible to get that information.
> Also, maybe we should call it `view_size` instead of inner size? Because `surface_size` could be confused with Vulkan/EGL surfaces, while it's not them.
Being clear that the size is the size that should be used for creating vulkan / egl surfaces was the reason why I think there should be a "surface_size" API - that would be the unambiguous purpose of the API, to know the size that should be used when creating GPU API surfaces. Referring to that as a "view" size doesn't really mean anything to me sorry - how would you define what a 'view' is?
If it weren't called "surface_size" I would probably want to simply refer to it as the "size" or "window_size" - i.e. the canonical size of the window which can be documented as the size that surfaces should be allocated at.
Is there a reason you want to stop exposing the 'inner' / 'outer' sizes as a way of exposing the geometry of window frames?
I don't have strong opinions about that since I don't really know when I would ever need to know the size of a window frame (except if they are client-side).
Insets could maybe be queryable with an enum, since there can be lots of different insets on mobile platforms.
E.g. for android see: https://developer.android.com/reference/android/view/WindowInsets#getInsets(int)
It's notable also that it can be desirable for applications to e.g. render underneath an onscreen navigation bar on android (which may be transparent) but they need to be aware that the nav bar won't be sensitive to application input (only for the nav buttons) and so the semantics of different insets are pretty important.
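A hedged sketch of what inset queries keyed by an enum could look like, loosely mirroring Android's `WindowInsets.getInsets(int)`. `InsetKind`, `Insets`, and the sample values are illustrative assumptions, not a real winit API.

```rust
// Hypothetical sketch of per-kind inset queries. All names and values here
// are assumptions for illustration, not winit or Android API.

use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum InsetKind {
    DisplayCutout,  // camera notch
    NavigationBars, // may be transparent, but steals input
    StatusBars,
}

#[derive(Debug, Clone, Copy, PartialEq, Default)]
struct Insets { top: u32, bottom: u32 }

/// Combine two inset sets by taking the larger inset on each edge.
fn union(a: Insets, b: Insets) -> Insets {
    Insets { top: a.top.max(b.top), bottom: a.bottom.max(b.bottom) }
}

fn main() {
    let mut insets = HashMap::new();
    insets.insert(InsetKind::DisplayCutout, Insets { top: 100, bottom: 0 });
    insets.insert(InsetKind::NavigationBars, Insets { top: 0, bottom: 120 });

    // A UI toolkit might draw under the transparent nav bar, but keep
    // interactive widgets out of both the cutout and nav bar regions:
    let avoid = union(
        insets[&InsetKind::DisplayCutout],
        insets[&InsetKind::NavigationBars],
    );
    println!("keep interactive content inside top {} / bottom {}", avoid.top, avoid.bottom);
}
```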
I think by default winit should allow users to draw into what is physically possible, and then they should use insets to offset their content. How does this sound, @rib?
My only "issue" with `surface_size` is that `request_surface_size` could imply that you resize the EGL surface, because on Wayland you manually resize it, and it could confuse some folks. A view is usually a view inside the window, like `NSView`.
How do folks on Android usually handle all of that? Do they use `glViewport` to offset based on insets and just `glClear` to fill everything with the color they want? I'd really like for `surface_size` to return the maximum of what you could use for drawing.
You almost certainly wouldn't want to glViewport and try to constrain a clear (GPUs usually have fast-clear optimizations; you'd miss out on those by doing that). It should be fine to clear the full surface but adjust the layout of content to avoid things like navbars and notches.
As in the case of transparent navbars and notched status bars then it can make sense for an application to want to make the most of the screen and render right to the edges of the display - it's just that they need to avoid putting important things underneath notches or UI buttons underneath a navbar because they wouldn't be usable.
I mean, you clear without viewport, and then you offset the viewport to draw your content inside the safe area.
The notches and such must be avoided by the use of safe area APIs so it'll work cross-platform, because lots of platforms have things like that. Even macOS has transparent decorations where you can draw behind the buttons.
The default to recommend people would be to:
And if that's too complex, then only do drawing in the safe area.
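The clear-then-offset-viewport flow described above could be sketched like this. The inset values are illustrative; note that a `glViewport`-style call takes its origin from the bottom-left corner, so the y offset comes from the bottom inset, not the top one.

```rust
// Sketch of "clear full surface, then constrain the viewport to the safe
// area". The inset values are illustrative assumptions.

/// Compute (x, y, width, height) arguments for a glViewport-style call from
/// the full surface size and the top/bottom insets. GL's viewport origin is
/// the bottom-left corner, hence y = inset_bottom.
fn viewport_for_safe_area(
    surface_w: i32,
    surface_h: i32,
    inset_top: i32,
    inset_bottom: i32,
) -> (i32, i32, i32, i32) {
    (0, inset_bottom, surface_w, surface_h - inset_top - inset_bottom)
}

fn main() {
    // 1. glClear with no viewport restriction: full-surface fast clear.
    // 2. Offset the viewport so content lands inside the safe area:
    let (x, y, w, h) = viewport_for_safe_area(1080, 2340, 100, 120);
    println!("glViewport({x}, {y}, {w}, {h})");
}
```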
Is this the reason why BorderlessFullscreen does not hide the menu bar on macOS?
Edit: It turns out I'm setting the option "Automatically hide and show the menu bar" to "Never" in my System Preferences. However, my use case (a game) requires that the menu bar is hidden either way.
Edit: See https://github.com/rust-windowing/winit/issues/3880.
I've been banging my head against a problem with trying to control pixels and I just realized this was the issue. So I'm writing my solution here in case anyone runs into this or googles for a solution.
Here's a square rendered with a debug texture sampled in the shader using `textureLoad`:

The issue is subtle, but the horizontal and diagonal lines show that something is off.
If my window is set to have size 800x600 and is set to be decorated,

```rust
event_loop
    .create_window(
        Window::default_attributes()
            .with_inner_size(PhysicalSize::new(800, 600))
            .with_decorations(true),
    )
```

the real outer size is 800x600 but reported as 800x635, and the inner size is actually 800x565 but reported as 800x600.

```rust
dbg!(window.inner_size());
dbg!(window.outer_size());
```

```
[src/main.rs:186:30] window.inner_size() = PhysicalSize {
    width: 800,
    height: 600,
}
[src/main.rs:187:30] window.outer_size() = PhysicalSize {
    width: 800,
    height: 635,
}
```
```rust
let (actual_surface_width, actual_surface_height) = {
    let inner_size = window.inner_size();
    let outer_size = window.outer_size();
    (
        2 * inner_size.width - outer_size.width,
        2 * inner_size.height - outer_size.height,
    )
};
```

If I use `actual_surface_width` and `actual_surface_height` to configure my surfaces, I get correct results:
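For what it's worth, here is why that workaround yields the visible height in this case: if the reported outer size exceeds the reported inner size by exactly the decoration extent, then `2 * inner - outer` strips that extent from the reported inner size. This leans on a backend-specific relationship, so it's a workaround rather than a general rule.

```rust
// Sketch of the workaround's arithmetic: the decoration extent is
// (outer - inner), and 2 * inner - outer = inner - (outer - inner),
// i.e. the reported inner size minus that extent.

fn actual_surface(inner: (u32, u32), outer: (u32, u32)) -> (u32, u32) {
    (2 * inner.0 - outer.0, 2 * inner.1 - outer.1)
}

fn main() {
    // Values reported in the comment above:
    let (w, h) = actual_surface((800, 600), (800, 635));
    println!("actual surface: {w}x{h}"); // matches the measured 800x565
}
```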
@bjornkihlberg what backend and winit version is this on? That seems incorrect regardless of how inner and outer size are interpreted (i.e. the entire thing is 600 pixels high, so it shouldn't ever return 635).
I bet that was Wayland, and what was reported is correct. On Wayland you're the one controlling the sizes all the way; winit mostly tells you about the suggestions from the compositor, and both sizes are correct.
If you end up with visually smaller sizes, it's on you.
@kchibisov to then ask the question that I wanted to ask: is there some math that adds the "expected" title bar size to the surface size? But in the end we use CSD to render it within the surface? The screenshot has a window with 600 pixels in height, so I don't see where this arbitrary `outer_size().height = 635` could possibly come from.
Or is that actual Wayland window size (if that's even a thing) separate from the committed buffer size?
@MarijnS95 we add them ourselves because outer size is the size of the window including the decorations.
> @bjornkihlberg what backend and winit version is this on? That seems incorrect regardless of how inner and outer size are interpreted (i.e. the entire thing is 600 pixels high, so it shouldn't ever return 635).
It was on Wayland.
> @MarijnS95 we add them ourselves because outer size is the size of the window including the decorations.
@kchibisov then who rendered the decorations _within `inner_size()`_ in the screenshots above, is that winit's CSD feature?
Decorations are not within the `inner_size`, they are added to `outer_size`, since they reside on a separate subsurface, not visible to the user.
@bjornkihlberg could you provide a `WAYLAND_DEBUG=1` log while reproducing your issue? If the window is smaller, it's likely your buffer is smaller. Winit has no idea about the buffer of the window, so it's expected that they've got desynced.
> Decorations are not within the `inner_size`, they are added to `outer_size`, since they reside on a separate subsurface, not visible to the user.
It looks like we are misunderstanding each other. I pointed out the same, because that is not what seems to be happening in https://github.com/rust-windowing/winit/issues/2308#issuecomment-2451563327, and said that this must be some kind of a bug. So we agree :)
> and said that this must be some kind of a bug. So we agree :)
I mean, they've asked for 800x600, they've got 800x600, and given that it's Wayland, it can not be anything other than 800x600. So if they think that it's 800x565, that's not true, since they control the size entirely. Anyway, the `WAYLAND_DEBUG=1` log will pretty much tell everything.
@kchibisov sure, let's wait for `WAYLAND_DEBUG=1`. What I'm saying is that yes, they requested 800x600 for `inner_size` and got that. However, decorations are being rendered within those 800x600: who is responsible for that?
At least it makes sense that if CSD is never rendered to the subsurfaces around the window, Wayland won't show them, and the whole window will always appear to be `inner_size()` and never `outer_size()` until those buffers are submitted.
> @kchibisov sure, let's wait for WAYLAND_DEBUG=1. What I'm saying is that yes, they requested 800x600 for inner_size and got that. However, decorations are being rendered within those 800x600: who is responsible for that?
I completely don't understand you. They requested 800x600 for the inner size, got 800x635 for the outer size, and the decorations are at the (0, -35) location. I have no idea where they get the other sizes from; we put decorations at (0, -35) relative to the surface origin, so in the negative coordinate space. If the compositor is broken, it could maybe move them into the surface for whatever reason.
@kchibisov download the first screenshot and open it in your favorite image editor. Count or measure the pixels.
The total window size _including decorations_ is 802x601. The inner size of the white surface is 800x565.
That doesn't match the returned `inner_size() => (800, 600)` and `outer_size() => (800, 635)`, does it? This is the issue that @bjornkihlberg pointed out; it's not "if they think that it's 800x565 it's not true" :slightly_smiling_face:
Also it looks like GitHub is having a bug with timezones. I just posted https://github.com/rust-windowing/winit/issues/2308#issuecomment-2453351424 in reply to https://github.com/rust-windowing/winit/issues/2308#issuecomment-2453347703 (which was in reply to https://github.com/rust-windowing/winit/issues/2308#issuecomment-2453340256):
Probably because the servers are in the US, whose time went back one hour from CDT to CST, 10 minutes ago?
> @kchibisov download the first screenshot and open it in your favorite image editor. Count or measure the pixels.
The size is entirely controlled by the user, as I said; whatever they draw is what is displayed. If they draw less, it's their issue, not winit's. `inner_size` just tells you what to use.
Edited: to use "surface size" instead of "physical size" as discussed. The edit also tries to clarify the table information.
Currently the general, go-to API for querying the size of a winit window is `.inner_size()`, which has conflicting requirements due to some downstream consumers wanting to know the size to create render surfaces and other downstream consumers wanting to know the safe, inner bounds for content (which could exist within a potentially larger surface).

For lower-level rendering needs, such as supporting integration with wgpu or directly with OpenGL / Vulkan etc., what downstream wants to know is the size of the surface they should create, which effectively determines how much memory to allocate for a render target and the width and height of that render target.
_Incidentally, 'physical size' is how the Bevy engine refers to the size it tracks, based on reading the winit `.inner_size()`, which is a good example where it's quite clear what semantics they are primarily looking for (since they will pass the size to wgpu to configure render surfaces). In this case Bevy is not conceptually interested in knowing about insets for things like frame decorations or mobile OS safe areas._

Conceptually the `inner_size()` is 'inner' with respect to the 'outer' size, and the `outer_size()` is primarily applicable to desktop window systems, which may have window frames that extend outside the content area of the application's window and may also be larger than the surface size that's used by the application for rendering. For example, on X11 the inner size will technically relate to a separate, smaller child window that's parented by the window manager onto a larger frame window.

Incidentally, on Wayland, which was designed to encourage client-side window decorations and also sandbox clients, the core protocol doesn't let you introspect frame details or an outer screen position.
Here's a matrix to try to summarize the varying semantics for the inner/outer and surface sizes across window systems, to help show why I think it would be useful to expose the surface size explicitly, to decouple it from the `inner_size`:
`.set_inner_size()` followed by `.inner_size()` can return a pending size, but the 'surface' size would be the remote, server-side size.

An example where the conflation of inner and surface size is problematic is on Android, where we want to let downstream APIs like Bevy know what size they should allocate render surfaces at, but we also want to let downstream users (e.g. also Bevy, or any UI toolkit such as egui) know what the safe inner bounds are for rendering content that won't be obscured by things like camera notches or system toolbars. (ref: https://github.com/rust-windowing/winit/discussions/2235)