Closed: bvssvni closed this issue 5 years ago
It's not implemented. Would you mind sharing the use case?
It is because OpenGL supports it, and restricting rendering to a specific rectangle seems like a useful thing sometimes. Some use cases I can think of:
- Video players
- Browser embedding
- 3D editors
For all of these you would instead render to a texture and then use that, afaik.
Just because I'm stupid, what is 'viewport offset'?
@bjz The viewport transforms normalized device coordinates into absolute screen coordinates (in pixels). There is little reason to use any other viewport than (0, 0, target_width, target_height).
@cmr That's bullshit.
@kvark Please justify your claims. How can you claim that there is little reason, for anyone, under any circumstance, to set the viewport?
It seems that people make up arbitrary arguments to disagree with me. They don't even make sense. Why does OpenGL support viewports then?
The way I see it, you add some extra fields, problem solved.
I too agree that the responses have been a tad dismissive. Perhaps we could just default to (0, 0, target_width, target_height), but allow the client to specify if they so desired. Could you give us an example of a possible API @bvssvni?
@bvssvni GL supported viewports before render to texture... I honestly don't know of any engine which uses viewport changes instead of render to texture. It lets you do things like postprocessing for something like a CCTV camera in-game etc.
I don't mean to be dismissive or arguing for the sake of arguing.
Note also that the CommandBuffer exposes set_viewport already for this.
@cmr Thanks for the clarification.
@bvssvni What use case were you needing this for? Do the proposed solutions solve this for you?
@bvssvni I didn't disagree with you, I asked about your use-cases and marked the issue as a valid bug (see the tags). There was just little need for them in my experience, typically for hacking something in (because of bad architecture) rather than genuinely needing a non-standard viewport.
Keep in mind though that just because OpenGL exposes it doesn't mean we should expose it as well. For example, we don't have an FBO concept; it's redundant for the programmer's needs. So, before adding support for the viewport, we need to justify the need for it (as in, have use cases in mind where it would be the only or the best way to go).
The fact that the size is already a part of the Frame should not be considered a halfway step toward the full viewport. It is not what I'd like to see in the end. Instead, I'd like Frame to be immutable, and to derive the size from its render targets upon construction, thus making the size field virtually non-existent for the user.
Concluding, I don't mind adding support for the viewport at this point. Just add an offset to the Frame, initialize with zero, so that no existing code breaks.
Thanks people, you are awesome.
My concern is mostly keeping track of such edge cases. As long as we note them down, we can put them in when we need them.
The Frame::new(width, height) looks good to me, because that's what I do most of the time. However, in gfx_graphics there is a method G2D::render that sets the viewport based on the information in Frame. Since Frame only provides the width and height, it might not behave as expected, depending on the actual meaning of the viewport. For 2D graphics you want absolute coordinates, and these are relative to the viewport offset.
It seems we need it for clipping scroll areas in Conrod (https://github.com/PistonDevelopers/conrod/issues/82). Different libraries might be used to render each widget, so we need a method that works across libraries.
@bvssvni viewport doesn't clip anything, it's just positioning and scaling. Maybe scissor is all you need for this issue?
@kvark Thanks! I thought it clipped. This might be the cause of misunderstanding in this thread.
@bvssvni no worries, the viewport is a very common point of misunderstanding. The device clips everything outside the [-1, 1] range after projection (and before/in the rasterizer). Then the viewport is applied, and then the scissor and window clipping occur.
So, for example, if you set the viewport twice as small as the original, you'll still see the same window size being rendered, but the contents are going to be scaled down by a factor of 2. Hope this example clarifies things :)
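To make that concrete, here is a minimal sketch of the mapping a viewport defines (plain Rust for illustration; the Viewport struct and function are hypothetical, not gfx or OpenGL API):

// Minimal sketch (not gfx or GL API): the transform a viewport applies.
// It maps clipped normalized device coordinates in [-1, 1] to window pixels;
// nothing is cut off by the viewport itself.
struct Viewport { x: f32, y: f32, width: f32, height: f32 }

fn ndc_to_window(vp: &Viewport, ndc_x: f32, ndc_y: f32) -> (f32, f32) {
    (
        vp.x + (ndc_x + 1.0) * 0.5 * vp.width,
        vp.y + (ndc_y + 1.0) * 0.5 * vp.height,
    )
}

fn main() {
    // Full 800x600 window: the NDC corner (1, 1) lands at pixel (800, 600).
    let full = Viewport { x: 0.0, y: 0.0, width: 800.0, height: 600.0 };
    assert_eq!(ndc_to_window(&full, 1.0, 1.0), (800.0, 600.0));

    // Viewport twice as small: the same corner lands at (400, 300), so the
    // whole image is scaled down by a factor of 2 within the same window.
    let half = Viewport { x: 0.0, y: 0.0, width: 400.0, height: 300.0 };
    assert_eq!(ndc_to_window(&half, 1.0, 1.0), (400.0, 300.0));
}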
I want to display a rectangle (corresponding to a camera) of my 2D world without modifying its ratio. I don't want different window sizes to display more or less of that rectangle. I want a fit viewport. My camera provides a matrix to render this rectangle in normalized coordinates, independently of the screen, while I update the viewport to have the correct ratio, the maximum size possible fitting in the window, and to be centered.
Is this a good use case? If not, how should I do this?
@Bastacyclop it would be possible for you to use the viewport offset to achieve 2D camera scrolling, but I don't see why you'd not just move the camera itself. An orthographic projection has translation and scaling; use it instead of exploiting some rare API functions.
I think I might not have been clear. Here's what the code looks like:
// freely modified by the game (world units)
pub struct Camera {
    pub position: Vec2<f32>,
    pub size: Vec2<f32>,
}

impl Camera {
    // for opengl (transforms world position to normalized position)
    pub fn compute_matrix(&self) -> [[f32; 4]; 4] {
        let trans = -self.position;
        let scale = Vec2::new(2. / self.size.x, 2. / self.size.y);
        [
            [ scale.x, 0.     , 0., trans.x ],
            [ 0.     , scale.y, 0., trans.y ],
            [ 0.     , 0.     , 1., 0.      ],
            [ 0.     , 0.     , 0., 1.      ],
        ]
    }
}

// called each frame to correctly display the scene
pub fn update_viewport(screen_size: Vec2<f32>, camera: &Camera) {
    let fill_scale = screen_size / camera.size;
    let scale = fill_scale.x.min(fill_scale.y);
    let viewport_size = camera.size * scale;
    let viewport_pos = (screen_size - viewport_size) / 2.;
    set_viewport(viewport_pos, viewport_size);
}
So for a (40, 40) world units camera size and a (800, 600) pixels window size, I would set a viewport of position (100, 0) and size (600, 600).
@Bastacyclop in other words, you don't want the camera to depend on the screen size and you want to glue them together (by centering the image) with viewport offset. Would you consider this code instead:
pub fn compute_matrix(&self, aspect_ratio: f32) -> [[f32; 4]; 4] {
    let trans = -self.position;
    let scale = Vec2::new(aspect_ratio * 2. / self.size.x, 2. / self.size.y);
    [
        [ scale.x, 0.     , 0., trans.x ],
        [ 0.     , scale.y, 0., trans.y ],
        [ 0.     , 0.     , 1., 0.      ],
        [ 0.     , 0.     , 0., 1.      ],
    ]
}
With this code, you'd not need to care about the viewport at all, since the camera output would always perfectly match the window.
I also don't want the screen ratio to modify the displayed world rectangle. With your code, a player with a (800, 600) window would see more of the world than a player with a (600, 600) window, right? I want the (800, 600) window to display a (600, 600) scene with, for example, black borders on the sides. I believe this is exactly what the viewport is: a screen area in which the rendering is going to be done. But maybe I'm wrong.
@Bastacyclop you are correct, this code would allow you to see more with a wide screen. Strictly speaking though, the viewport just scales and translates your clipped normalized coordinates to the window/screen.
Would you consider setting up the scissor test to cut off the left/right sides with the aspect_ratio modification I proposed earlier?
@kvark Yes, I would. But I would also like to know why it would be better than using the viewport.
@Bastacyclop because it's more universal? When dealing with a big engine, you'd want a few features that are powerful enough to cover most cases. What I've found in my practice is that the viewport is not exposed as a feature, since the size always matches the window, and the clipping is done universally with the scissor anyway.
I do agree that your solution would be a bit more elegant, but there are no other benefits to it. Performance-wise, the scissor test costs nothing. In general practice, rendering to a square portion of the screen and cutting off the right/left borders doesn't sound good (even if it works for your case), since a lot of players would lose screen real estate. So there is no strong reason to treat your case as a strong generic use case for adding the explicit viewport feature.
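For illustration, a minimal sketch of computing such a centered scissor rectangle (the function name and signature are hypothetical; the actual scissor call depends on the renderer):

// Minimal sketch: compute a centered scissor rectangle (x, y, w, h) that
// preserves a target aspect ratio inside the window, leaving bars on the
// sides or on the top/bottom. The result would be fed to whatever scissor
// call the renderer exposes; nothing here is gfx-specific.
fn centered_scissor(window_w: u32, window_h: u32, target_aspect: f32) -> (u32, u32, u32, u32) {
    let window_aspect = window_w as f32 / window_h as f32;
    if window_aspect > target_aspect {
        // Window is wider than the target: full height, centered horizontally.
        let w = (window_h as f32 * target_aspect) as u32;
        ((window_w - w) / 2, 0, w, window_h)
    } else {
        // Window is taller than the target: full width, centered vertically.
        let h = (window_w as f32 / target_aspect) as u32;
        (0, (window_h - h) / 2, window_w, h)
    }
}

fn main() {
    // The (40, 40) world-unit camera above is square (aspect 1.0), so an
    // 800x600 window gets the (100, 0, 600, 600) rectangle worked out earlier.
    assert_eq!(centered_scissor(800, 600, 1.0), (100, 0, 600, 600));
}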
@kvark Thank you for the answers :)
I want to split my window into multiple draw areas and render from different cameras, and it seems that this ViewPort thing is the most straightforward method to achieve this purpose.
Scissoring implies that I draw on the entire screen and then cut off undesired pixels, which is not what I want.
Drawing to a texture then copying the pixels to the framebuffer should do the trick (with a little overhead cost?), like with glCopyPixels, but it's not implemented either.
Viewport transform is applied early in the rendering pipeline (www.opengl.org/wiki/Rendering_Pipeline_Overview) and requires little configuration, thus solving my problem in a simple and efficient way.
Plus, it's not that "rare":
Is it a legit reason to demand viewport support?
@tmielcza Thank you for coming in and doing some research on this!
First of all, it's important to understand what the viewport is. The viewport does not limit the area you render to; it only defines how the normalized coordinates are transformed to window pixel coordinates by the rasterizer. So basically, it's just a transformation, and you can always compensate for it by changing your vertex transformation matrix. If you need to cut the area you draw to (and that is certainly required for your case), use the scissor rectangle.
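As an illustration of that compensation, here is a minimal sketch that folds a sub-rectangle of the window into the transformation matrix instead of changing the viewport (the function is hypothetical and follows the 4x4 layout used earlier in the thread):

// Minimal sketch: emulate rendering into a (vx, vy, vw, vh) sub-rectangle of a
// (win_w, win_h) window by scaling and translating the clip-space output,
// i.e. by multiplying this matrix on the left of the projection matrix.
// The layout matches the matrices earlier in the thread (translation in the
// last column); this is not gfx API.
fn viewport_as_matrix(vx: f32, vy: f32, vw: f32, vh: f32, win_w: f32, win_h: f32) -> [[f32; 4]; 4] {
    // Shrink the image so it covers only vw x vh pixels of the window...
    let sx = vw / win_w;
    let sy = vh / win_h;
    // ...and shift it so it starts at (vx, vy) in pixels.
    let tx = (2.0 * vx + vw) / win_w - 1.0;
    let ty = (2.0 * vy + vh) / win_h - 1.0;
    [
        [ sx,  0.0, 0.0, tx  ],
        [ 0.0, sy,  0.0, ty  ],
        [ 0.0, 0.0, 1.0, 0.0 ],
        [ 0.0, 0.0, 0.0, 1.0 ],
    ]
}

fn main() {
    // Using the full window reduces this to the identity, i.e. the default viewport.
    let m = viewport_as_matrix(0.0, 0.0, 800.0, 600.0, 800.0, 600.0);
    assert_eq!(m, [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [0.0, 0.0, 0.0, 1.0],
    ]);
}

Anything drawn outside the sub-rectangle still needs the scissor to actually cut it off, as noted above.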
Plus, it's not that "rare":
I didn't say it's "rare" in the context of a low-level API. Since there is a piece of fixed-function hardware doing this (the rasterizer), the low-level APIs have to provide access to it. But I believe it's not as widespread in the higher-level APIs built by engines. Since gfx-rs stands in between, it's not clear whether we should expose it.
Drawing to a texture then copying the pixels to the framebuffer should do the trick (with a little overhead cost?), like with glCopyPixels, but it's not implemented either.
You could always run a shader that samples from a texture and writes to the output, but that's not supposed to be done just to mimic the viewport.
Let's put it this way: if there is a PR that introduces viewport support as a PSO component, defaulting to the current behavior (thus being backwards compatible), I'll be happy to accept it. This will make more sense for multiple viewports/scissor support in the future (see "Viewport array" in https://www.opengl.org/wiki/Vertex_Post-Processing#Viewport_transform), but is certainly not required for a single viewport (you can adjust the vertex transform instead).
Thanks for these clarifications.
If the viewport is really just another transformation, then I'll do it this way. But from what I've read, all the vertices that fall outside the viewport will be clipped. Since your method only implies a transformation, those vertices will be rendered before scissoring, right? So I wonder, if I split the screen into say 100 cameras, in a complex world, won't it kill performance? Or is it the same with a viewport?
Anyways, I'll try to implement it as a PSO component, as you said!
Let me clarify a bit more:
all the vertices that fall outside the viewport will be clipped. Since your method only implies a transformation, those vertices will be rendered before scissoring, right?
First off, vertices are never "clipped". Vertices can't be assumed to be in/out of the view volume (which is not the same as the viewport btw) until the vertex shader is run. So you aren't going to save any VS invocations.
Secondly, there are several ways for a pixel to be clipped before the pixel shader is invoked (which is the heavy part you'd want to save):
So you see that the scissor discards are very effective; they are only slightly heavier than the most optimal path (guard bands) in that these pixels are still generated by the rasterizer. But rasterization is cheap; it's your pixel shader that matters, and it will not be run.
Okay, I understand it better, thanks! I thought these tests were performed later in the pipeline. Now it seems obvious they are not...
Indeed a Viewport is not necessary in this case, and in fact I cannot find a situation where it is really effective, unless it can alter the guard band limits. At least it extracts some transformation logic from the VS. I will handle it with transformation/scissoring for the time being.
Implementing Viewports for gfx seems fun though, and Viewports arrays might be an interesting feature. I will take a look at it.
Just to drop in a practical use case I'm running into right now: I'd rather have my game get black/white bars when resizing the window than the stretching that's happening right now. My game code can't seem to keep up with GFX's knowledge of the window size, and thus its viewport being set to the full window size stretches everything rendered.
as my game code can't seem to keep up with GFX's knowledge of the window size
I don't understand, GFX should not have more knowledge of the window size than the rest of your code?
I don't understand, GFX should not have more knowledge of the window size than the rest of your code?
I get my window resize event and re-create what's needed in GFX as soon as I need it; GFX just seems to stretch to the full window size regardless.
&Input::Resize(w, h) => {
    // Recreate the render target views after a resize; the new depth view is unused here.
    let (new_color, _new_depth) =
        gfx_window_glutin::new_views::<ColorFormat, DepthFormat>(&window.window);
    renderer.color_view = new_color;
    window_renderer.report_resize(Vector2::new(w, h));
},
@LaylConway Oh so you do have the window size knowledge. You can get the side bars by using the scissor test and modifying the camera output a bit, like described in earlier comments.
I can't modify the ortho matrix to fit a size I don't know about; it's already set to the size my rendering engine assumes the window to be, according to what it gets told.
Sorry but I don't see why you would be willing to modify the viewport but not the matrix.
The problem is I can't adjust the matrix to correct for a window size I don't know about; the window size I receive from the window seems to lag behind the window size gfx knows about.
If I understand correctly, the window is being resized but the resize event is not sent to your application until the resizing is finished because of some platform specifics, thus you cannot react in the meantime. I'm not sure whether setting a viewport would solve that, maybe someone else knows. I would say that this stretching behavior is the responsibility of your window manager and not of the graphics context, but maybe I'm wrong.
I am receiving periodic resize events while the window is being resized, but the events seem to lag behind GFX and result in a stretched image. If I run this with my other backend instead (which always sets a viewport to the size it knows about internally), there is correctly no stretching. The stretching isn't that bad with game graphics, but it makes UIs and text completely illegible.
[Screenshot: GFX OpenGL+glutin backend during resize]
[Screenshot: other backend during resize]
@LaylConway gfx (or more specifically, the window backend you use) doesn't have more information (magically) about your window size than you do. It's unfair to compare it with Vulkan(o) because the latter requires explicit swap chain recreation, so obviously it gets a different code path.
If you want to provide the source, we can take a look. So far it doesn't sound to me that exposing viewport offset has anything to do with your problem.
My comparison isn't meant as a criticism of gfx specifically; I'm merely saying a certain thing is happening in one backend that isn't happening in another. I'll remove the specifics of it to avoid the comparison. The code's currently in a private repo I can't share fully, but I'll give you a link later on IRC with a zipped folder of the relevant files.
@LaylConway I'm not taking the comparison as criticism. I'm just explaining that having different resize behavior is sort of expected there.
The code's currently in a private repo I can't share fully but I'll give you a link later on IRC with a zipped folder with relevant files.
(as long as we can run it to reproduce) Thanks!
A recent pistoncore-input update has fixed issues that caused a long delay between a resize happening and the resize event being received in game code. This seems to have made my problem non-reproducible.
Looks like this isn't going anywhere
I can't find any place to set viewport offset.