PistonDevelopers / vecmath

A simple and type agnostic Rust library for vector math designed for reexporting
MIT License

Settle on a single matrix representation #270

Closed · kvark closed this 7 years ago

kvark commented 7 years ago

Having the code duplication to support both row major and column major is unsound. I suggest settling on one convention and going with it. There is always transpose for those in the other camp.
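For illustration, a minimal sketch in plain Rust (deliberately not tied to vecmath's actual function names) of what "there is always transpose" means in practice: the same nested-array data serves either convention after a cheap index swap.

```rust
type Matrix3 = [[f64; 3]; 3];

// Swap indices: turns a row major matrix into the column major storage of
// the same matrix (and vice versa).
fn transposed(m: Matrix3) -> Matrix3 {
    let mut t = [[0.0; 3]; 3];
    for i in 0..3 {
        for j in 0..3 {
            t[i][j] = m[j][i];
        }
    }
    t
}

fn main() {
    // Interpreted row major: each inner array is a row.
    let row_major: Matrix3 = [
        [1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0],
        [7.0, 8.0, 9.0],
    ];
    // The same matrix stored column major: each inner array is now a column.
    let col_major = transposed(row_major);
    assert_eq!(col_major[0], [1.0, 4.0, 7.0]);
}
```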

bvssvni commented 7 years ago

I think the terminology "unsound" is associated with an invalid logical argument, which is not the case here, since it is a matter of library design.

bvssvni commented 7 years ago

Standard mathematics uses row major, while OpenGL uses column major. The problem is that neither convention is better than the other, and settling on a single convention trades away performance for those in the other camp.

The way this library is designed is to let people reexport functions into libraries that pick one convention or the other.
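A minimal sketch of that reexport idea, using hypothetical names (`lowlevel`, `mul_row`, `mul_col` and `my_graphics_math` are placeholders, not vecmath's actual API): a lower-level crate exposes both conventions as free functions over plain arrays, and a downstream crate commits to one of them.

```rust
pub mod lowlevel {
    pub type Matrix4 = [[f64; 4]; 4];

    // Row major multiplication: c[i][j] = sum over k of a[i][k] * b[k][j].
    pub fn mul_row(a: Matrix4, b: Matrix4) -> Matrix4 {
        let mut c = [[0.0; 4]; 4];
        for i in 0..4 {
            for j in 0..4 {
                for k in 0..4 {
                    c[i][j] += a[i][k] * b[k][j];
                }
            }
        }
        c
    }

    // Column major multiplication: same product, storage indexed as m[column][row].
    pub fn mul_col(a: Matrix4, b: Matrix4) -> Matrix4 {
        let mut c = [[0.0; 4]; 4];
        for i in 0..4 {
            for j in 0..4 {
                for k in 0..4 {
                    c[j][i] += a[k][i] * b[j][k];
                }
            }
        }
        c
    }
}

// A downstream library picks one convention and reexports only that half.
pub mod my_graphics_math {
    pub use super::lowlevel::{mul_row as mat4_mul, Matrix4};
}
```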

kvark commented 7 years ago

I think the terminology "unsound" is associated with an invalid logical argument, which is not the case here, since it is a matter of library design.

Right, thanks for correcting me!

The problem is that neither convention is better than the other, and settling on a single convention trades away performance for those in the other camp.

Right, but given that the performance cost of transposing a matrix is really tiny (compared to any other matrix operation, or to the GL calls that actually upload the thing), I'd question whether the trade-off of removing one of the representations would be worth it for the sake of code simplicity and maintainability.

bvssvni commented 7 years ago

We don't have any benchmarks for this yet, so it would be like trading against an item whose value you still don't know (at least on my part).

The trade-off depends on the performance cost and on subjective claims of simplicity. It is common that people perceive the same design as "obviously right" or "obviously wrong". I guess, exaggerating of course, you are in the "obviously wrong" camp, and I am in the "obviously right" camp.

Perhaps we should use probability to measure the trade-off? We both know that there is no right/wrong answer here, and due to lack of data we both wonder how to measure the trade-off more accurately, so we are both willing to change position when confronted with new evidence. I guess on the "obviously right" scale you are around the 20% mark, and I am around the 80% mark. I would have to be surprised by how little the performance cost turns out to be in order to move down to 50% or lower; I guess around 1/10 the cost of a multiplication? It might depend heavily on auto-vectorization in LLVM. I do appreciate zero cost in performance, though.

However, my overall satisfaction with the current design as a math abstraction is only around 60%, because it was not really intended to be a convenient math abstraction. I usually write those on top, and my problem is that for different use cases I come up with different designs. So I don't really expect that we can find a math abstraction that is convenient for all use cases. What I like about the current design is the ability to reuse the code for a better math abstraction. I believe it is worth the extra compile time, and Rust eliminates dead code from the final binary anyway. Maintenance costs are low because this library has been pretty stable. The design emerged from the need to have a stable library that did the job, because all the other libraries were breaking all the time.

If we pick a single convention, then I guess column major is the most reasonable alternative, since the library is also intended to be used with APIs like OpenGL, and rendering is the more likely bottleneck.

Oh, I think perhaps we're talking about two different performance costs. The one you mentioned is when you use it with GL calls. The one I meant is when you are doing stuff on the CPU. I use row major sometimes.

Thoughts?

bvssvni commented 7 years ago

Also, one reason this library supports row major is that I'm used to it when working on 2D graphics. If I was forced to switch to column major I would probably not like it. 😄

kvark commented 7 years ago

Oh, I think perhaps we're talking about two different performance costs. The one you mentioned is when you use it with GL calls. The one I meant is when you are doing stuff on the CPU. I use row major sometimes.

Correct. I didn't consider the cost of transformations with different representations. It's a good point! I mostly see vecmath as an interoperability standard, so I expect people to do actual (heavy) computations in cgmath/euclid/nalgebra/whatever else. In that context, only the laziest/simplest use cases would actually keep using vecmath internally, so for those the perf difference may not matter that much.

Btw, why is row major better for 2D graphics?

bvssvni commented 7 years ago

Row major is the mathematical standard, and you can separate the row vectors to transform the coordinates independently. This is an advantage if you are doing the transformation with f64 precision and the result is cast to f32. You also only need 2 vectors, which fits perfectly with an affine transform in row major; the same transform requires 3 vectors in column major.
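A minimal sketch of that 2-versus-3 vector point, in plain Rust with hypothetical type names: a 2D affine transform stored row major fits in two rows of three numbers, while column major storage of the same transform needs three columns of two numbers.

```rust
// A 2D affine transform
// [ a b x ]
// [ c d y ]
// stored row major: 2 row vectors of length 3.
type RowAffine2 = [[f64; 3]; 2];
// The same transform stored column major: 3 column vectors of length 2.
type ColAffine2 = [[f64; 2]; 3];

// Transform a point (px, py), implicitly treating it as (px, py, 1).
// Each row can be applied independently, e.g. computed in f64 and cast to f32.
fn row_transform(m: RowAffine2, p: [f64; 2]) -> [f64; 2] {
    [
        m[0][0] * p[0] + m[0][1] * p[1] + m[0][2],
        m[1][0] * p[0] + m[1][1] * p[1] + m[1][2],
    ]
}

fn main() {
    // Translation by (10, 20).
    let m: RowAffine2 = [[1.0, 0.0, 10.0], [0.0, 1.0, 20.0]];
    assert_eq!(row_transform(m, [1.0, 2.0]), [11.0, 22.0]);
    let _unused: ColAffine2 = [[1.0, 0.0], [0.0, 1.0], [10.0, 20.0]];
}
```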

Just like the sRGB color standard came from the use of CRT monitors, OpenGL chose column major, perhaps for historical and performance reasons. I believe I read something about it, but I can't remember where.

I think you are using somewhat misleading terminology around "actual (heavy) computations", because I don't see how you ground the meaning of those terms. I always try to avoid such words, because the choice of a math abstraction is very different from performance optimization. Mike Acton mentioned that people in the game industry often use tools to pre-multiply matrices (I believe it was in this video). I wrote such a tool myself a few years back, but it was for reducing math expressions, not only for performance optimization. I've only worked on 2D animation software, not 3D games, so I don't know if their use case was the same as mine.

A simple math library is often convenient when performance really matters, because you don't have to hunt down the implementation behind the math abstraction. When I advocated this approach to other people, somebody pointed out that using SIMD vectors could beat simplification of math expressions in benchmarks, but it did not matter as much as expected (a claimed 400x), because LLVM does auto-vectorization; the benefit on an O(N^3) algorithm was only 20%, perhaps caused by alignment issues. SIMD requires memory alignment, so LLVM might have missed an opportunity in the disputed algorithm. I started an experiment on this here. I guess the path forward for optimization in Piston is writing benchmarks and comparing with SIMD in specific cases, which does not require much math abstraction. The Vecmath library is a good placeholder for this case.

Btw, does any of cgmath/euclid/nalgebra have a GPU backend? I also read that OpenCL will be merged into Vulkan, so the future might bring a more integrated toolbox for both rendering and physics. I would expect heavy computations to be made on the GPU, not with any Rust math abstraction for the CPU, which are mostly for flexibility. I talked to a scientist working on climate modelling, and got the impression they did FORTRAN stuff that they had spent man-millennia optimizing. When you said "actual (heavy) computations", I was thinking of FORTRAN. 😄

bvssvni commented 7 years ago

I can boil down the intention of this library to these two points:

  1. It addresses ecosystem integration where both row and column major are desired
  2. Less abstraction is more beneficial in this case

Aside from that, I am used to working with this library, so I often prefer reexporting it or writing new abstractions on top, e.g. for stream processing. In my particular use cases I am OK with the current design. I know other people use other libraries, but that does not conflict with the goals of Piston, since Piston makes a reasonable trade-off by using Vecmath.

I believe I have analyzed this design to death by now, multiple times, so I am not very convinced that there is a big benefit to making changes.

In the upcoming "Advanced Piston" tutorial there is a section about how to use vecmath: https://github.com/PistonDevelopers/Piston-Tutorials/blob/master/advanced/writing-idiomatic-code.md#organizing-math-modules-in-piston-libraries

kvark commented 7 years ago

actual (heavy) computations

I think this is fairly obvious? Cross products, normalization, matrix multiplication, and, of course, inversion, to name a few.

Current math libraries in Rust are purely CPU. In that sense, it sounds like you are in agreement that chasing the last 5% of performance for vecmath may not be worth the complexity cost.

How about we settle on row-major for everything?

bvssvni commented 7 years ago

Cross products, normalization, matrix multiplication, and inversion are basic math operations and are supported by Vecmath, but not much more. I think you mean common operations for CG, such as construction of rotation and projection matrices, conversion to and from quaternions, ray collision, etc. Of course people would use a math abstraction for this purpose, but Vecmath is only intended for basic math operations.

One use case for column major is debugging or prototyping code that is intended to be translated or ported to the GPU later. For example, when doing 3D you can write the code using a 4x4 column major matrix, so you can easily port it to an OpenGL shader to optimize performance.
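A minimal sketch of that porting argument, assuming standard OpenGL conventions (no GL bindings are used here): a column major `[[f32; 4]; 4]` flattens directly into the 16-float order that `glUniformMatrix4fv` expects with `transpose = GL_FALSE`, and it indexes the same way as a GLSL `mat4`.

```rust
// Column major 4x4 matrix: m[column][row], matching GLSL mat4 indexing.
type ColMatrix4 = [[f32; 4]; 4];

// Flatten into the contiguous column-by-column order OpenGL expects.
fn flatten(m: &ColMatrix4) -> [f32; 16] {
    let mut out = [0.0; 16];
    for (col, column) in m.iter().enumerate() {
        for (row, &v) in column.iter().enumerate() {
            out[col * 4 + row] = v;
        }
    }
    out
}

fn main() {
    // Identity with a translation of (1, 2, 3) stored in the last column.
    let m: ColMatrix4 = [
        [1.0, 0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0, 0.0],
        [0.0, 0.0, 1.0, 0.0],
        [1.0, 2.0, 3.0, 1.0],
    ];
    let flat = flatten(&m);
    assert_eq!(&flat[12..15], &[1.0, 2.0, 3.0]);
    // `flat` could now be uploaded, e.g. via
    // gl::UniformMatrix4fv(location, 1, gl::FALSE, flat.as_ptr()).
}
```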

bvssvni commented 7 years ago

I'm not sure what you mean by the 5% performance difference. I think you meant my guess of a 1/10 performance improvement, and that a 5% penalty would not hurt Vecmath. However, the full story is not just a matter of the performance of this library. In practice, e.g. in Piston-Graphics, a backend can override triangulation of shapes, so you can use a platform specific API to get better performance. The performance of Vecmath matters, but there is not enough knowledge in the backend-agnostic API of libraries like Piston-Graphics to make assumptions about supported hardware (at least not before Vulkan gets universally supported).

kvark commented 7 years ago

Cross products, normalization, matrix multiplication, and inversion are basic math operations and are supported by Vecmath, but not much more. I think you mean common operations for CG, such as construction of rotation and projection matrices, conversion to and from quaternions, ray collision, etc. Of course people would use a math abstraction for this purpose, but Vecmath is only intended for basic math operations.

My point was that vecmath is humble in terms of features, and it's reasonable to assume that users with non-trivial computations (e.g. construction of rotation/projection matrices and the others you mentioned) would be using other libraries internally. Thus, I'd see vecmath focusing on simplicity and interoperability rather than ultimate performance (whether we guess it to be 5% or 1/10 doesn't matter much).

kvark commented 7 years ago

I.e. advertise it not as "a generic math library for all your needs" but rather:

Here is a standard representation of math primitives with built-in Rust types. You can use it for interoperability. And oh, hey, it supports some basic operations too!
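To make that interoperability angle concrete, a minimal sketch with hypothetical names (not from any existing Piston crate): plain built-in Rust types at the API boundary, so callers can bring whatever math library they like and convert at the edges.

```rust
// Plain built-in Rust types as the interchange format; any math crate can
// convert to and from these at the boundary.
pub type Vector3 = [f32; 3];
pub type Matrix4 = [[f32; 4]; 4];

// A hypothetical scene API that accepts plain arrays rather than the types
// of one particular math library; the point is only the signature.
pub fn set_model_transform(transform: Matrix4) {
    let _ = transform;
}

// A caller using cgmath, nalgebra, etc. converts at the edge, e.g. by
// copying components into the nested-array layout expected here.
```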

kvark commented 7 years ago

After a lengthy discussion on #rust-gamedev, I concluded that vecmath doesn't need to change.

bvssvni commented 7 years ago

I'm reading through the discussion in the logs now. I didn't realize you were thinking of making it easier to integrate with the larger Rust gamedev ecosystem. Now I better understand your motivation for looking into Vecmath.

apajx: You can get into the weeds real fast with this though, why stop at quaternions, why not have any possible Clifford Algebra? kvark facepalms

As a math nerd, this made my day. ❤️