asny / three-d

2D/3D renderer - makes it simple to draw stuff across platforms (including web)

`Camera` is trying to do too much - not compatible with custom camera implementations #428

Open marstaik opened 9 months ago

marstaik commented 9 months ago

Hi - I've been using three-d for some projects and it's quite nice, but I have encountered some issues with the `Camera` struct.

I personally use nalgebra for math and convert everything to cgmath before it's sent to the renderer for use with this library, but so far this camera class is proving to be a pain point.

Currently, if you want to implement your own camera controller, or even use your own camera code in general, you cannot do so easily. You would have to decompose your own internal camera back into z_near, z_far, position, target, and up vectors, and "hope" that the matrices are constructed the same way.
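
To make the round trip concrete, here is a minimal sketch of that decomposition, assuming nalgebra on the application side and the OpenGL camera convention; `decompose` is a hypothetical helper, not part of three-d:

```rust
use nalgebra::{Isometry3, Point3, Vector3};

/// Hypothetical helper: recover position/target/up from a camera-to-world
/// isometry, assuming the OpenGL convention of -Z forward and +Y up.
/// If the source camera uses a different convention, this round trip breaks.
fn decompose(cam_to_world: &Isometry3<f32>) -> (Point3<f32>, Point3<f32>, Vector3<f32>) {
    let position = cam_to_world * Point3::origin();
    // Isometry * Vector applies only the rotation part in nalgebra.
    let forward = cam_to_world * -Vector3::z();
    let up = cam_to_world * Vector3::y();
    // Any point along the forward ray can stand in as the "target".
    (position, position + forward, up)
}
```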

Some information/brain dumping:

I.e., the current `Camera` class is less of a camera and more of a `CameraBuilder`. It's doing the "model" computation from a position and target, and the "view matrix" computation. In reality, it is a false statement that a camera's view matrix is the inverse of its model matrix. That only holds if your camera's model matrix is defined as looking down -Z with +X being the right vector and +Y being up (OpenGL space) to begin with.
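
To spell out the convention, a tiny cgmath example (cgmath being the math library three-d uses): `look_at_rh` bakes the -Z-forward/+Y-up assumption into the view matrix, and that assumption is the only reason its inverse can be read as the camera's model matrix.

```rust
use cgmath::{Matrix4, Point3, SquareMatrix, Vector3};

fn main() {
    let eye = Point3::new(0.0_f32, 0.0, 5.0);
    let target = Point3::new(0.0, 0.0, 0.0);
    // look_at_rh assumes -Z forward and +Y up (OpenGL camera space).
    let view: Matrix4<f32> = Matrix4::look_at_rh(eye, target, Vector3::unit_y());
    // Inverting the view yields the camera's model (camera-to-world) matrix,
    // but only because that axis convention was baked in above.
    let model = view.invert().expect("a view matrix is invertible");
    println!("camera model matrix: {:?}", model);
}
```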

The camera class holds `tone_mapping` and `color_mapping` information, which honestly has nothing to do with a 3D camera. I believe that should be attached somewhere else in the render pipeline. Even real cameras output raw data, and tone mapping is applied to the images afterwards, for the most part.

In reality, the `Camera` class only needs a view matrix and a projection matrix. The projection matrix is built from an aspect ratio, not the viewport dimensions, and the camera frustum can be computed from those two matrices. The `Viewport` is only needed for UV/NDC coordinate computation, which really isn't related to a camera and maybe should be put somewhere else.
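
For concreteness, a rough sketch of that bare minimum (a hypothetical `MinimalCamera`, not three-d API); the frustum planes can then be extracted from the combined matrix with the usual Gribb/Hartmann method:

```rust
use cgmath::Matrix4;

/// Hypothetical bare-minimum camera: only the two matrices the renderer needs.
struct MinimalCamera {
    view: Matrix4<f32>,
    projection: Matrix4<f32>,
}

impl MinimalCamera {
    /// Everything else is derivable; for example, the six frustum planes can
    /// be read off the rows of this combined matrix (Gribb/Hartmann).
    fn view_projection(&self) -> Matrix4<f32> {
        self.projection * self.view
    }
}
```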

It would be sufficient to be able to just set the view and projection matrices used.

As a side note, I personally use right-hand coordinate systems for rendering, which aligns with this library, but others that use left-hand would be a bit SOL. The renderer could be configured to work with either, but it would be more work.

I see a few approaches:

1. `Camera` becomes a trait that only needs a `view() -> Mat4` and `projection() -> Mat4`, so we can render with any arbitrary camera that fulfills these requirements. The problem with this is that you probably need some internal cached data like the frustum for culling.
2. `Camera` is stripped and turns into the bare minimum, such as view and projection. This way the internal frustum code can remain, as it's probably needed by the renderer. The current `Camera` code and all of its screen casting, pitch, yaw, rotation, etc. code could be put into a `CameraHelper` or `CameraBuilder` class of some sort.

cbpudding commented 8 months ago

I second this. Having target coordinates can be useful; however, it is not always desired. Maybe having a function that takes a position and target coordinates to calculate the rotation parameter would be preferable.
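
A sketch of such a helper, assuming nalgebra and the OpenGL -Z-forward convention (the function name is made up):

```rust
use nalgebra::{Point3, Rotation3, Vector3};

/// Hypothetical helper: build the rotation parameter from a position and a
/// target, instead of storing the target itself.
fn rotation_from_look_at(
    position: Point3<f32>,
    target: Point3<f32>,
    up: Vector3<f32>,
) -> Rotation3<f32> {
    // The camera looks down its local -Z axis, so its +Z axis must point
    // from the target back toward the position.
    Rotation3::face_towards(&(position - target), &up)
}
```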

alecrivers commented 7 months ago

I'm hitting this issue in my own project too, because I'm computing a camera that exactly frames a given set of geometry, and then having to generate eye, target, and up vectors just to get three-d to construct its own copy of the same camera. A pretty minor issue, though.

asny commented 2 weeks ago

Oh. I was absolutely sure I had already replied. Sorry for the really late reply 🤦

That is a lot of grievances with the Camera struct 😆

> I personally use nalgebra for math and convert everything to cgmath before it's sent to the renderer for use with this library, but so far this camera class is proving to be a pain point.

There are 4-5 "standard" math libraries, and converting between them is a bit of a pain, I know. However, you can hardly blame three-d for that.

> camera's model matrix is defined as looking down -Z with +X being the right vector and +Y being up (OpenGL space)

This is an OpenGL/WebGL renderer, so yeah, I assume an OpenGL camera space.

> The camera class holds `tone_mapping` and `color_mapping` information, which honestly has nothing to do with a 3D camera. I believe that should be attached somewhere else in the render pipeline.

Well, a physical camera can also apply some color and tone mapping, so I think it's not totally far off to put that information into the camera. The alternative is to have another struct that just holds that information. Many engines put it into a "renderer" object, but I think that makes much less sense and it also adds another entity that you have to pass around for not a lot of benefit.

> The `Viewport` is only needed for UV/NDC coordinate computation, which really isn't related to a camera and maybe should be put somewhere else.

So where do you propose to put the viewport exactly? Again, I think it makes the most sense in the camera, unless you also pass it explicitly into each render call, which would add extra complexity IMO.

> As a side note, I personally use right-hand coordinate systems for rendering, which aligns with this library, but others that use left-hand would be a bit SOL. The renderer could be configured to work with either, but it would be more work.

Yeah, that is a lot of work. Please remember that this is something I do for fun and for free.

> `Camera` becomes a trait that only needs a `view() -> Mat4` and `projection() -> Mat4`, so we can render with any arbitrary camera that fulfills these requirements. The problem with this is that you probably need some internal cached data like the frustum for culling.

The camera could become a trait; I didn't do that because I thought it was not a super important feature to be able to create your own camera struct. I could be convinced to do that, but it also needs to return the viewport, tone mapping and color mapping, or those entities need another place to live.
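
For concreteness, a rough sketch of what such a trait could look like; `RenderCamera` and the exact type paths are assumptions, not actual three-d API:

```rust
use three_d::{ColorMapping, Mat4, ToneMapping, Viewport}; // type paths assumed

/// Hypothetical trait: the renderer would query only these methods,
/// so any user-defined camera type could implement it.
trait RenderCamera {
    fn view(&self) -> Mat4;
    fn projection(&self) -> Mat4;
    // As noted above, these would also have to be returned by the trait,
    // or find another place to live:
    fn viewport(&self) -> Viewport;
    fn tone_mapping(&self) -> ToneMapping;
    fn color_mapping(&self) -> ColorMapping;
}
```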

> `Camera` is stripped and turns into the bare minimum, such as view and projection. This way the internal frustum code can remain, as it's probably needed by the renderer. The current `Camera` code and all of its screen casting, pitch, yaw, rotation, etc. code could be put into a `CameraHelper` or `CameraBuilder` class of some sort.

I'm still unsure why that would solve anything; you still have to create this stripped-down camera before each render call if you are converting from nalgebra. Also, you have to put the viewport, tone mapping and color mapping somewhere else, so where should they go? Finally, I don't know why you want to move the camera functionality; you can just avoid using it if you don't need it.

There's a third option: composition. You could encapsulate the three-d `Camera` struct in your own struct and make an interface that suits your needs, probably taking nalgebra structs as input. Internally, your own struct updates a three-d `Camera`, and when rendering, you get a reference to that camera.
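
A minimal sketch of that composition approach, assuming a recent three-d where `Camera::new_perspective` takes no context argument; the wrapper and its method names are illustrative only:

```rust
use nalgebra as na;
use three_d::{degrees, vec3, Camera, Vec3, Viewport};

/// Wrap three-d's Camera behind an interface that takes nalgebra types,
/// converting at the boundary.
struct MyCamera {
    inner: Camera,
}

impl MyCamera {
    fn new(viewport: Viewport) -> Self {
        // Initial pose; the exact constructor signature depends on the
        // three-d version in use.
        let inner = Camera::new_perspective(
            viewport,
            vec3(0.0, 0.0, 5.0), // position
            vec3(0.0, 0.0, 0.0), // target
            vec3(0.0, 1.0, 0.0), // up
            degrees(45.0),
            0.1,
            100.0,
        );
        Self { inner }
    }

    /// Update the pose from nalgebra data.
    fn set_pose(&mut self, eye: na::Point3<f32>, target: na::Point3<f32>, up: na::Vector3<f32>) {
        let c = |v: na::Vector3<f32>| -> Vec3 { vec3(v.x, v.y, v.z) };
        self.inner.set_view(c(eye.coords), c(target.coords), c(up));
    }

    /// Hand this reference to three-d render calls.
    fn camera(&self) -> &Camera {
        &self.inner
    }
}
```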

asny commented 2 weeks ago

> I second this. Having target coordinates can be useful; however, it is not always desired. Maybe having a function that takes a position and target coordinates to calculate the rotation parameter would be preferable.

I use the target internally, so I guess it's the other way around: you need a function for calculating the target from a rotation? That could be added, but I guess if you have a rotation matrix, it's just a matter of applying that rotation to a vector, and then you have a target.
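
As a sketch of that direction (nalgebra types, hypothetical name, OpenGL -Z-forward convention):

```rust
use nalgebra::{Point3, Rotation3, Vector3};

/// Hypothetical helper: given a camera rotation, derive a target point
/// by rotating the forward axis and stepping one unit along it.
fn target_from_rotation(position: Point3<f32>, rotation: &Rotation3<f32>) -> Point3<f32> {
    position + rotation * -Vector3::z()
}
```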

asny commented 2 weeks ago

> I'm hitting this issue in my own project too, because I'm computing a camera that exactly frames a given set of geometry, and then having to generate eye, target, and up vectors just to get three-d to construct its own copy of the same camera. A pretty minor issue, though.

Makes sense. In this case, I guess a trait would be good, so you can just implement that trait for your own camera and return the already computed view and projection matrices.