Open Mantissa-23 opened 4 years ago
This is similar to what I've been thinking about at https://github.com/godotengine/godot-proposals/issues/932! Your scenario (FPS viewmodels) is way less obscure than mine though, and shows IMO that some sort of control over object render order and depth buffer manipulation is needed in the 4.0 pipeline configurability plans. (Also, using different FoV, clip planes, and transforms per object grouping. Using multiple Cameras does seem like a reasonable way to represent this information, to me.)
Also, here is an old request from 2016 for "always on top" FPS viewmodels, that may show long-term interest (although that issue itself hasn't had much activity): https://github.com/godotengine/godot/issues/6205.
I guess this is also related to #355 in some way (panoramic skybox)
I second (or third) this proposal. It would allow me to make some ground-up hyperspace visuals (ever smoked DMT?). I could also use this procedurally.
I tried my hand at this and got a proof of concept working on 3.2, with bugs. I opened https://github.com/godotengine/godot-proposals/issues/1428 with more details. If someone wants to try it out, work on it, take it over, etc., it would be appreciated. I opened it separately because it's a particular way of implementing this and I don't want to sidetrack this thread. (But I do want to let anyone subscribing know, in case you're interested. 😄)
Stumbled upon this proposal while searching for an answer to the fps viewmodel separate camera question. Here's a way I've managed to make it work in my project.
Two viewports overlaid on top of each other; the second viewport has Transparent Bg set to true. The cameras use separate cull masks (the viewmodel is placed on the same cull layer that the viewmodel camera renders).
A video of the effect in action https://youtu.be/RLM0oX2eptg
I believe it is fairly easy to set up and could hopefully be of use to you :)
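For anyone wanting to reproduce this, here's a minimal GDScript (3.x) sketch of the setup described above; the node names and layer numbers are just assumptions for illustration:

```gdscript
# Sketch of the two-viewport workaround (Godot 3.x).
# Node paths and layer assignments are illustrative; adjust to your project.
extends Control

func _ready():
    # Main viewport: renders the world (cull layer 1) only.
    var world_cam = $WorldViewport/Camera
    world_cam.cull_mask = 1  # bitmask: layer 1 = world geometry

    # Overlay viewport: transparent background, renders the viewmodel only.
    var overlay = $OverlayViewport
    overlay.transparent_bg = true
    var viewmodel_cam = $OverlayViewport/Camera
    viewmodel_cam.cull_mask = 2  # bitmask: layer 2 = viewmodel meshes only

    # Display both viewports stacked via TextureRects.
    $WorldRect.texture = $WorldViewport.get_texture()
    $OverlayRect.texture = overlay.get_texture()
```

The viewmodel meshes themselves must have their VisualInstance layers set to layer 2 only, so the world camera never draws them (and vice versa).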
> 2 viewports
In this proposal, the concern with that approach is that it may be inefficient. I use this too as a workaround for now. 🙂
The roadblock for me is that this doesn't work in VR. You probably wouldn't want a viewmodel in VR, but there are other effects that would be solved by this proposal.
If you render only what's in the mask of the viewmodel camera, why wouldn't it be performant? You don't render objects twice. Some performance testing should be done.
Secondary camera with fixed FOV for consistent viewmodel appearance despite changes to primary camera FOV
This can be simulated fairly well by using non-uniform scaling on the viewmodel: stretch it to make the FOV look higher, and squash it to make the FOV look lower. If you run into mesh normal issues in 3.x, enable Ensure Correct Normals in the viewmodel's SpatialMaterial.
As for the general "avoid poking into walls" issue, you can scale down the viewmodel and bring it closer to the camera (which means it'll look almost identical). Adding a RayCast-based animation that plays when the player faces a wall is also an option.
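For reference, the squash/stretch factor in the tip above can be derived from the ratio of the FOV tangents. Here's a rough 3.x GDScript sketch; `desired_fov` and the node setup are assumptions, and this is only a visual approximation of a true second camera, so the ratio (or its inverse, depending on your setup) may need tweaking:

```gdscript
# Approximate a fixed apparent FOV for the viewmodel by non-uniformly
# scaling it along the camera's view axis (Z). This is a visual trick,
# not equivalent to rendering with a real second camera.
extends Spatial

export var desired_fov := 54.0  # FOV the viewmodel should appear to use

func _process(_delta):
    var cam = get_viewport().get_camera()
    if cam == null:
        return
    # Stretching along Z makes the perceived FOV look higher; squashing, lower.
    var z_scale = tan(deg2rad(desired_fov) * 0.5) / tan(deg2rad(cam.fov) * 0.5)
    scale = Vector3(1, 1, z_scale)
```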
Is there any update on this? I'm trying to implement this approach https://youtu.be/LAopDQDCwak in Godot - but the impression I get from this issue and other related ones is that it's simply not possible
You can have multiple viewports. This proposal mentions that it may not be the most efficient way, but it is still a way.
Yeah, I managed to get it working via two different viewports, one of them with transparent_bg enabled, each with a camera using a different cull mask, both displayed as ViewportTextures on two overlapping TextureRects.
The problem I'm encountering right now is that the transparent_bg viewport isn't affected by its camera's environment. I'll try to look for / open an issue about it in the next few days.
Has this had any traction since May? It's certainly unusual that there's no alternative to using either an FOV/forced-depth shader or shrinking the FPS weapon models down to 0.01. I think multiple cameras on the same viewport is probably the most Godot-friendly way of doing this.
As far as I know, nobody made any progress on implementing this (or even started looking into it). It's not a planned feature for 4.0.
Yeah for now I'm using @Yazir's solution, which is currently the only way of achieving this effect in Godot.
I have no idea if it would be more or less efficient to do this using multiple cameras per viewport. Currently I have 3 of them, one for rendering a viewmodel, one for rendering a 3D HUD, and then one for rendering the actual game itself.
If rendering multiple cameras to a single viewport isn't necessarily more efficient (i.e. multiple viewports have performance on the same order as one viewport with multiple cameras), then I'd argue that this proposal is no longer relevant and could be closed, as long as @31's issue with this particular solution not working in VR is also resolved in some way.
If someone with more knowledge of the rendering pipeline could chime in here, we might be able to close this.
It's awkward, though, using multiple viewports compared to just adding another camera, and I seem to recall lighting didn't match when using a second viewport.
@elvisish It is awkward, true, but as I recall, lighting worked properly, as seen in the video in my comment above from 2020. AFAIR it can be turned on and off depending on light masks.
This really should be possible in Godot 4.0, or at the very least planned for 4.1, as it's incredibly cumbersome to do without being able to just show certain objects on certain cameras. Maybe a 3D Objects Layer node would be even better? Is there a way of using 4.0's SubViewport to do this easily?
@elvisish 4.0 is already out and 4.1 will be out soon (feature freeze in 2 weeks), so there isn't really time to do this in 4.1.
Ahh, I didn't realize it was this close. Maybe 4.2?
While this is a heavily desired feature, we still don't have a clear idea of how to implement this, and nobody has really looked into it so far. There are many difficult problems to resolve, such as handling GI interactions when using multiple cameras.
You generally expect the weapon model to be able to receive GI from the scene, after all, and this is made more difficult when the second camera uses a different FOV. This already tends to be a problem when using multiple viewports in general, and multiple cameras in the same viewport would make it even easier to run into problematic situations like this one.
Coincidentally, I faced a related issue today, so I think it is worth adding my two cents:
A colleague is currently working on some atmospheric effects, following this video: https://www.youtube.com/watch?v=dzcFB_9xHtg , all of this uses custom shaders for the atmosphere.
After following the video (and using the project author's code), we are using multiple camera+viewport pairs. The first one draws the stars in the background, where it's convenient to use reduced units (i.e. not hundreds of thousands of kilometers away from the camera). Then we use another one for the planet, which draws on top with alpha blending. And in the third one we draw the atmosphere.
The linked project actually draws the stars last. At first we didn't understand why, but after trying to move them to the back of the stack, we found out: the author blends the atmosphere with the globe by grabbing the globe camera's render target and using it as a parameter for the atmosphere shader, which renders in the atmosphere camera+viewport. Because of this, he can't blend in the stars, since they were in a different viewport to begin with; to adjust for that, he would have needed to make the atmosphere shader more complex.
Something we thought of doing, but couldn't figure out how, is to set the viewport blend mode to additive. This could have helped us, but I think it still wouldn't have been the right solution: if we did it that way, post-processing effects would apply separately, and the rest of the image processing would be incorrect.
It makes sense to composite the output of many cameras into a single viewport, because then all draw operations happen in the same buffer and tonemapping occurs once. For now we are moving everything to a single camera and reprojecting the star meshes in the shader. We're hoping that this way the stars will blend out naturally when you get inside the atmosphere, as the atmosphere brightness affects the exposure and makes the stars invisible. We can always add more post-process steps later, as long as the pre-tonemapped result is accurate.
If you display the viewport in a ViewportContainer, you can use a CanvasItemMaterial on the ViewportContainer node with the material's blend mode set to Add. Alternatively, you can use the ViewportTexture in a TextureRect node and set a CanvasItemMaterial on the TextureRect node.
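In 3.x terms, that suggestion looks roughly like the sketch below; the node paths are assumptions:

```gdscript
# Additively blend a secondary Viewport's output over the main scene,
# using a CanvasItemMaterial on a TextureRect as suggested above.
# Node paths are illustrative.
extends Control

func _ready():
    var mat = CanvasItemMaterial.new()
    mat.blend_mode = CanvasItemMaterial.BLEND_MODE_ADD
    var rect = $StarsTextureRect
    rect.texture = $StarsViewport.get_texture()
    rect.material = mat
```

Note that, as discussed above, this blends the already-tonemapped output of each viewport, so it doesn't solve the single-buffer/single-tonemap problem.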
> as long as @31's issue with this particular solution not working in VR is also resolved in some way.
I believe the VR integration could be independently improved so that multiple viewports with per-eye rendering would work, but (with no real knowledge of the plans of everyone involved!) I imagine that would be much less likely to be implemented, given the relatively small number of VR headsets, compared to tagging along with a more broadly requested Godot feature that applies to many FPS games. 😄
I did briefly look at improving the VR handling, but there are many intertwined parts so I gave up and worked on the implementation I posted in https://github.com/godotengine/godot-proposals/issues/1428 instead.
Describe the project you are working on: A first-person shooter, making use of first-person viewmodels that are separate from third-person/worldspace models.
Describe the problem or limitation you are having in your project: Godot currently only supports rendering one camera per viewport. This makes the following use-cases difficult to implement:
Describe the feature / enhancement and how it helps to overcome the problem or limitation: Allowing multiple cameras (with different render masks) to render to the same viewport would overcome this problem. For each use case:
This is a feature that is present in both Unreal and Unity, and is used for similar reasons.
Describe how your proposal will work, with code, pseudocode, mockups, and/or diagrams: Cameras will have a `depth` field, which determines the order in which they render; pixels that reach the background of one camera (i.e. are not rendered) at a higher depth will be rendered by the next-lowest camera.
I don't have an understanding of Godot's rendering pipeline, and I understand it is in flux for Godot 4.0. I have C++ experience, and with some pointers I'd be willing to implement this myself for 3.2.2 or 4.0.
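To illustrate, usage might look something like this. This is a hypothetical sketch of the proposed API: the `depth` property does not exist on Godot's Camera today, and the layer values are arbitrary:

```gdscript
# Hypothetical sketch of the proposed multi-camera API (not a real Godot API).
var world_cam = Camera.new()
world_cam.cull_mask = 1      # world geometry
world_cam.depth = 0          # lowest depth: fills whatever is left

var viewmodel_cam = Camera.new()
viewmodel_cam.cull_mask = 2  # viewmodel meshes only
viewmodel_cam.fov = 54       # independent FOV for the viewmodel
viewmodel_cam.depth = 1      # higher depth: drawn on top; its background
                             # pixels fall through to the world camera
add_child(world_cam)
add_child(viewmodel_cam)
```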
If this enhancement will not be used often, can it be worked around with a few lines of script?: I believe these can currently be implemented using a screen-space ViewportTexture and Viewport nodes; however, I haven't tested this, and I imagine the performance would be worse than rendering to the same default Viewport from multiple cameras. If the performance would be similar, go ahead and close this issue.
That being said, I do believe using ViewportTextures is still less intuitive than having multiple cameras rendering to a single Viewport.
Is there a reason why this should be core and not an add-on in the asset library?: This requires a change to the Camera node as well as how it interfaces with the rendering pipeline. I also cannot imagine it being a large change, either code-wise or size-wise.