Osspial closed this issue 5 years ago.
At the risk of making a worn-out reference, I can't deny that we're at risk of falling into this trap:
Regardless, I think there has been a lot of cool research done in the community, but the only way I can see the fruits of that research reaching a wide audience is if it has some amount of official backing, which we (as the gamedev working group) can provide.
Anyhow, time for my opinions: I think we should take the standard library's approach of building a small, fast, and relatively uncontroversial core that the rest of the ecosystem can build around. With that in mind, I'd like to propose a scope for the standard math crate and guidelines for its design:
- `Vector`/`Point`/`Matrix` types. Iterating over designs and moving those types to 1.0 should be our highest priority.
- `Quaternion`, `Euler`, or anything relating specifically to rotations should wait: there's a lot of design space to explore there, and I think figuring those out is considerably more complex than figuring out the three types listed above. The same goes for things like `cgmath::PerspectiveFov`.

I've got various design issues I've seen in various math crates that I'd like a standard math library to address, and I'll list them here:
- `.x`, `.y`, `.z` accessors are a must, ideally via an `nalgebra`-like `Deref<Target = Coordinates>`, since despite `nalgebra`'s complexity that's actually a really good solution.
- `nalgebra` has done a lot of really interesting work with embedding math into the type system, but the result is that you need a comprehensive understanding of both to make sense of it.
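For readers unfamiliar with the trick being referenced, the `Deref<Target = Coordinates>` pattern can be sketched roughly like this (type names are illustrative, not nalgebra's actual API):

```rust
use core::ops::Deref;

// Generic array-backed storage for a vector of length N.
pub struct Vector<const N: usize>(pub [f32; N]);

// Named-field "coordinates" view, used only through Deref.
#[repr(C)]
pub struct Xyz {
    pub x: f32,
    pub y: f32,
    pub z: f32,
}

impl Deref for Vector<3> {
    type Target = Xyz;
    fn deref(&self) -> &Xyz {
        // Sound only because Xyz is #[repr(C)] with exactly the layout of [f32; 3].
        unsafe { &*(self.0.as_ptr() as *const Xyz) }
    }
}

fn main() {
    let v = Vector([1.0, 2.0, 3.0]);
    // Field access goes through deref coercion: storage stays a plain array,
    // but `v.x` reads like a named field.
    assert_eq!((v.x, v.y, v.z), (1.0, 2.0, 3.0));
}
```

This is roughly how nalgebra exposes named fields over generic storage, with one coordinates struct per small dimension.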
There are also a few design points that I think are worth discussing, but aren't as cut-and-dry as I've been making the above points out to be. Namely:

- Units, as in `euclid`. I'm not a huge fan of their API, and I think there's a way of adding units without being as intrusive as `euclid` makes it.
- Should we split the `Vector` and `Point` types into different types, or should they be the same type? There are points to be made on both sides here, and I haven't managed to convince myself that either approach is better.

(I typed this up while Osspial was typing their first reply, so I'll comment on that a bit separately in a moment.)
How do we feel about glam? Or at least "the `glam` approach", as we'll perhaps call it:

- No traits, no generics, just plain structs and functions/methods. It's very easy to think about.

Now, currently glam is incomplete in some areas: it only offers `f32` types, not `f64`, `i32`, and `i64`. But besides the fact that things are missing, how do we feel about the approach?
Reply to Osspial's proposal: we kinda have a "standard math" lib, it's called `mint`, and most of the other math libs have optional interop with it. Well, except that `mint` has no operations, only data types. Still, a starting point perhaps.

Like Lokathor said, we already have the solution thanks to kvark: it's `mint`. The lack of operations is a feature, not a bug; it's not a math library, it's a math TYPE library. The intended usage is what `ggez` 0.5 does: just have all library functions take `Into<mint::Whatever>` instead of `nalgebra::Whatever` or `cgmath::Whatever`. The user uses whatever library they feel like, their types silently vanish into the maw of `Into` impls, and it's like the library is using whatever math library they prefer. Works great, all the math libraries that matter already have feature support for it, and it's not hard to add to new libraries.
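A minimal sketch of that pattern, with a hand-rolled stand-in for the `mint` type (the real `mint::Vector2` has the same shape) and a hypothetical `draw_at` function:

```rust
// Stand-in for mint::Vector2<f32>: plain data, no operations.
#[derive(Clone, Copy, PartialEq, Debug)]
pub struct Vector2 {
    pub x: f32,
    pub y: f32,
}

// mint provides conversions like this one, so user math types and plain
// arrays can flow into the interop type.
impl From<[f32; 2]> for Vector2 {
    fn from(a: [f32; 2]) -> Self {
        Vector2 { x: a[0], y: a[1] }
    }
}

// A library API in the ggez 0.5 style: accept anything convertible.
pub fn draw_at<P: Into<Vector2>>(pos: P) -> (f32, f32) {
    let p = pos.into();
    (p.x, p.y)
}

fn main() {
    // Callers pass arrays (or their own math crate's types, given a From impl)
    // and never touch the interop type directly.
    assert_eq!(draw_at([3.0f32, 4.0]), (3.0, 4.0));
}
```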
Sorry! I feel like the problem's already been solved. It's not perfect, since my measurements suggest there IS some run-time overhead to `mint`'s conversions that doesn't get entirely optimized out, but it's negligible for my purposes.
@Lokathor

> How do we feel about glam? Or at least "the `glam` approach" as we'll perhaps call it.
> - No traits, no generics, just plain structs and functions/methods. It's very easy to think about.

On a high level, I quite like `glam`'s approach. However, I'm hesitant to say that we should fully adopt what `glam` is doing, since I disagree with their SIMD-centric approach:
- Not having `Vector`/`Point`/`Matrix` types for each primitive is unacceptable in a library that's designed for everyone.
- SIMD alignment breaks `unsafe` operations on the types. You couldn't, for instance, re-interpret a `&[f32]` as a `&[Vector<3, f32>]`, since you'd skip every fourth float and floats are 4-byte aligned.
- `glam`'s current approach of "optional SIMD" also means that types don't have a reliable layout, which breaks unsafe code for reasons that I don't feel I need to describe. I realize that inconsistent layout isn't inherent to the idea of SIMD-based types, but it's still an issue I have with the current implementation.
- Graphics APIs want tightly-packed `Vertex` types, and SIMD storage makes that infeasible.

I suppose the base problem I have there is that SIMD-backed types improve speed at the cost of causing direct usability issues in every other context, and I don't think that sort of tradeoff is acceptable for a base, standard math library that everyone can build upon.
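The padding/alignment point is easy to verify concretely; this sketch uses illustrative type names:

```rust
use core::mem::{align_of, size_of};

// A plain, tightly packed 3-vector...
#[repr(C)]
struct Vec3Packed {
    x: f32,
    y: f32,
    z: f32,
}

// ...versus a 16-byte-aligned, SIMD-friendly one. The alignment forces
// 4 bytes of tail padding per value.
#[repr(C, align(16))]
struct Vec3Simd {
    x: f32,
    y: f32,
    z: f32,
}

fn main() {
    assert_eq!(size_of::<Vec3Packed>(), 12); // three consecutive f32s
    assert_eq!(size_of::<Vec3Simd>(), 16); // one f32-sized slot of padding
    assert_eq!(align_of::<Vec3Simd>(), 16);
    // In a &[Vec3Simd], every fourth f32-sized slot is padding, so a &[f32]
    // cannot be reinterpreted as &[Vec3Simd] (nor the reverse).
}
```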
> We kinda have a "standard math" lib, it's called `mint`, and most of the other math libs have optional interop with it. Well, except that `mint` has no operations, only data types. Still, a starting point perhaps.

The fact that `mint` is designed as an interop library, and not a base library for direct usage, makes it infeasible for direct use. Admittedly, it has still gotten ecosystem traction, so adopting it and expanding upon it may be a decent idea, but I don't hugely like the high-level design of its API, and changing it would induce significant breakage.
I disagree on (1), that field accessors are a must. Particularly, that requirement means you can't just use a SIMD lane as the representation of a Vec or Mat, which means a performance hit. I'll honestly take the faster version without direct field access, and I bet a lot of others would too.
Eh, different priorities I guess. I've made my point on SIMD so I'm not going to reiterate that, but direct field access makes the API significantly nicer to use in pretty much every way and I don't want to throw that out without extensively proving that it's worth it.
(2) Swizzling is a must? Really? I mean it's neat, but a must? But, sure, it's easy enough to make them all, and it doesn't matter if they're accessors or constructors, you can do both, it's just a lot of copy and paste work basically.
Swizzling is a feature that I've consistently found useful when implementing code, and it's something you notice when it's missing. I don't want to have to think about using it and explicitly import it - I just want to have it available, and there's really only one way to do it.
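For the unfamiliar: swizzling means building a new vector by naming components in an arbitrary order. A minimal sketch with hypothetical types (real libraries macro-generate the full set of combinations):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
struct Vec2 {
    x: f32,
    y: f32,
}

#[derive(Clone, Copy, PartialEq, Debug)]
struct Vec3 {
    x: f32,
    y: f32,
    z: f32,
}

impl Vec3 {
    // Two representative swizzle accessors; there are dozens in a full set,
    // which is why they are normally generated by a macro.
    fn xy(self) -> Vec2 {
        Vec2 { x: self.x, y: self.y }
    }
    fn zyx(self) -> Vec3 {
        Vec3 { x: self.z, y: self.y, z: self.x }
    }
}

fn main() {
    let v = Vec3 { x: 1.0, y: 2.0, z: 3.0 };
    assert_eq!(v.xy(), Vec2 { x: 1.0, y: 2.0 });
    assert_eq!(v.zyx(), Vec3 { x: 3.0, y: 2.0, z: 1.0 });
}
```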
@icefoxen

> Sorry! I feel like the problem's already been solved. It's not perfect, since my measurements suggest there IS some run-time overhead to `mint`'s conversions that doesn't get entirely optimized out, but it's negligible for my purposes.
I think that the fact `mint` needs to exist shows how much of a problem there is in the ecosystem. Saying that `mint` is a solution is like saying "The Rust standard library doesn't have an array type, but everything takes `Into<GenericArrayWithoutImplementations<T>>`, so it's okay" (yes, I know that Rust has arrays, but that isn't the point). Mint clunkily solves the problem that you can't use some libraries with others, but it doesn't solve the issue that nobody's designed a solution that everyone can actually use. Maybe that's impossible, but I have a hard time believing that.
Besides, since Mint does so little, it means that you can't actually build higher-level constructs around Mint without sacrificing the ability to sanely design an internal and external API. I can't, say, build a higher-level geometry library around Mint (e.g. a library that provides rectangles, circles, etc. and the ability to transform them) without either throwing away all the math functions every other library provides or clumsily doing conversions internally every time I want to perform any trivial transformation on my datatypes.
I don't particularly care about single-nanosecond losses of performance here. I care about designing APIs that people enjoy using, and standardizing around Mint makes it extremely difficult to do that.
Hot Take: Generics are a necessary evil, not an inherent good. The ideal vec/mat library for sizes 1-4 is probably one that is fully written out over time for all possible combinations and 100% non-generic. There are sufficiently few combinations that it's actually possible to write them all down, so you might as well write them all down (and your re-compilation times will actually improve if you do this).
Now, you have a bit of a point with the SIMD stuff, but I do wonder how often you're doing math operations on all the vertices in a model, and not just on its transforms (presumably a much smaller amount of data overall). It might not be insane to make a set of model/vertex data types and then have the faster SIMD types for CPU-side usage.
> I think that the fact mint needs to exist shows how much of a problem there is in the ecosystem.
Contrast with C/C++, or C#, or Python, or literally anything else, where there is either ONE library that everyone uses even when it kinda sucks, or there is >1 library and nobody EVER lets ANYTHING interoperate between different libraries. The fact that `mint` even can exist is heckin' amazing.
> ...it doesn't solve the issue that nobody's designed a [single] solution that everyone can actually use. Maybe that's impossible, but I have a hard time believing that.
It's impossible to have a single solution that everyone can use, because different people have different goals. Different people just think in different ways. This is okay. People will naturally agglomerate towards the de-facto standard because that's how humans work, but there will always be the outliers that do things differently that work better for some people or use cases. Currently the de-facto standard is `nalgebra`, which wouldn't have been my first choice, but `mint` means that it is POSSIBLE to have a heterogeneous ecosystem that still works together.
Edit: The current state of math libraries, as I see it, is that there's lots of good choices and they all work together. Go us! :tada:
I mean even my wanting the Fast SIMD version and Osspial wanting the Slow Plain version shows that we need Mint somewhere in the system.
@Osspial Trying to make that ideal API math library is an interesting task for sure, attempted by many in the past. Maybe you'll get better luck at it, who knows. I'd be interested to watch the progress and potentially contribute.
What I don't expect, though, is for that effort to eliminate the other solutions. People using `nphysics` will always use `nalgebra`, no matter how good the new hotness is. People will always disagree on proper SIMD usage, on Y up versus down, on a million other things (looks like @icefoxen just brought this point up as well while I was writing).

So `mint` will always be needed... but this isn't a concern, it works fine.
When I initially raised the concern during the call, it was specifically about `cgmath`: it's in a sad state, both in terms of API and maintenance. Writing an entirely new math library may be helpful in the longer term, but in the shorter term we need to find a way to maintain it or declare bankruptcy.
> Hot Take: Generics are a necessary evil, not an inherent good. The ideal vec/mat library for sizes 1-4 is probably one that is fully written out over time for all possible combinations and 100% non-generic.
I'm going to have to disagree with you there. Generics are a powerful tool for reducing API surface area, letting you see what APIs work everywhere and which only work in some places at a quick glance without having to go through pages of API docs on different types. Different types for everything fundamentally sacrifices that.
> Currently the de-facto standard is `nalgebra`, which wouldn't have been my first choice, but `mint` means that it is POSSIBLE to have a heterogeneous ecosystem that still works together.
🤨
...that may have been more snarky than needed, but I guess in general my goal isn't to replace `nalgebra`. `nalgebra` has a place, and it's certainly widely used, but the complexity it introduces is a deal-breaker for many, including myself.

I'll put some extra emphasis on the following point, since this post is currently a cluttered wall of text: **Replacing `nalgebra` is an infeasible task, and something we probably don't want to do. Replacing `cgmath`, on the other hand, is absolutely feasible, as abandoning it leaves a hole in the ecosystem that currently goes unfilled.**
My goal would be to provide a replacement for `cgmath` that has a sane API. Maybe the solution to that is to overhaul `cgmath` in-tree, pulling out the cruft and simplifying the API in a way that lets other people expand upon it without being overly opinionated.
> Edit: The current state of math libraries, as I see it, is that there's lots of good choices and they all work together. Go us! 🎉
I guess my problem is that there are a lot of different math libraries, but none of them actually do what I need them to do. They either sacrifice usability for functionality, aren't actually usable, or are used by so few people that exposing them in a public API is a burden upon everyone else.
> I mean even my wanting the Fast SIMD version and Osspial wanting the Slow Plain version shows that we need Mint somewhere in the system.
You could certainly do it more cleanly than via Mint. There's no reason a single library couldn't provide both SIMD and non-SIMD types and provide clean interop between the two, so that you can do `Vector2 + Vector2SIMD` and have everything Just Work.
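A sketch of what that interop could look like, with hypothetical types (a real SIMD-backed type would wrap `__m128` rather than a plain array):

```rust
use core::ops::Add;

// Plain, tightly packed vector.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Vector2 {
    x: f32,
    y: f32,
}

// Stand-in for a SIMD-backed vector: four lanes, two of them unused.
#[derive(Clone, Copy)]
struct Vector2Simd {
    lanes: [f32; 4],
}

impl From<Vector2> for Vector2Simd {
    fn from(v: Vector2) -> Self {
        Vector2Simd { lanes: [v.x, v.y, 0.0, 0.0] }
    }
}

// Mixed-type Add: promote the plain vector, then do the (notionally SIMD) op.
impl Add<Vector2Simd> for Vector2 {
    type Output = Vector2Simd;
    fn add(self, rhs: Vector2Simd) -> Vector2Simd {
        let l = Vector2Simd::from(self);
        let mut lanes = [0.0; 4];
        for i in 0..4 {
            lanes[i] = l.lanes[i] + rhs.lanes[i];
        }
        Vector2Simd { lanes }
    }
}

fn main() {
    let c = Vector2 { x: 1.0, y: 2.0 } + Vector2Simd { lanes: [3.0, 4.0, 0.0, 0.0] };
    assert_eq!(c.lanes[0], 4.0);
    assert_eq!(c.lanes[1], 6.0);
}
```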
> When I initially raised the concern during the call, it was specifically about `cgmath`: it's in a sad state, both in terms of API and maintenance. Writing an entirely new math library may be helpful in the longer term, but in the shorter term we need to find a way to maintain it or declare bankruptcy.
Ideally, we can get it to the point where it doesn't need active maintenance. If it's got a clearly defined scope and lets more complex design problems get evolved and solved out-of-tree, I think it'd be in a pretty decent place. It just doesn't do that right now, and the design problems are there and left unsolved.
Out of curiosity, why do we need Mint's non-matrix types when the standard arrays exist and have wider ecosystem compatibility?
Think of them as newtypes over arrays.
> Anyhow, time for my opinions: I think we should take the standard library's approach of building a small, fast, and relatively uncontroversial core that the rest of the ecosystem can build around.

> I guess my problem is that there are a lot of different math libraries, but none of them actually do what I need them to do. They either sacrifice usability for functionality, aren't actually usable, or are used by so few people that exposing them in a public API is a burden upon everyone else.
I don't have the full context of what you actually need them to do, and I think it would be helpful for readers of this thread if you described what these things are in a bit more detail, and how they will be achieved by the new library (and cannot be achieved by current libraries like nalgebra/glam/etc.).
For example, the initial description mentions fast, but then the SIMD approach that `glam` takes seems to be critiqued. While the critique might be fair, how does that affect speed for this new library if we don't use SIMD? Do you propose another approach to SIMD? Do we want to not use it at all? What's the proposal? I already highlighted the lack of a `--ffast-math` equivalent in Rust, as far as I can tell.
What are the speed targets? `glam`-level speed? Slower/faster? By how much?
Note that not all platforms support the same SIMD, or even SIMD at all. Particularly, WASM doesn't yet support SIMD, though there's progress in this area with a proposal written out, and there's experimental implementations, so we might have something in probably a year or two.
So, not all libs want to try to express themselves as SIMD operations, and then they'll just have very different runtime profiles based on what LLVM can divine about what's going on or not.
Other libs want to try to express themselves as SIMD and have fallbacks when it's not available for that platform (get it together ASM intrinsics team!!).
Two points I want to highlight:

- The `glam` person made a benchmark suite, mathbench. I haven't looked into it to see how realistic the benchmarks are and such, but that could be an area to investigate.

While I'm here I guess I'll give a ping to @bitshifter and see if they even want us swooping in on their lib to drown them with issues and PRs and such. If they want to just do their work in peace that's okay too.
Bonus link! LLVM Floating Point Docs
@AlexEne Thank you for that feedback. I'll post some concrete responses later, but first I'm going to do some more research into the problem space here and come up with more specific issues and solutions. That'll help me come up with healthier conversation points, since aggressively spouting opinions without diving into the full context around those opinions hasn't entirely worked out so far :P.
> While I'm here I guess I'll give a ping to @bitshifter and see if they even want us swooping in on their lib to drown them with issues and PRs and such. If they want to just do their work in peace that's okay too.
@Lokathor do you mean `mathbench`, `glam`, or both? In any case I'm happy to receive issues for either; probably worth discussing anything in an issue before making a PR.
> glam's current approach of "optional SIMD" also means that types don't have a reliable layout, which breaks unsafe code for reasons that I don't feel I need to describe.
That's not exactly true. If SSE2 is not available, then types that would have been 16-byte aligned are `#[repr(align(16))]`, so size and layout remain consistent. If you use the `scalar-math` feature flag, then no SIMD is used and no alignment is forced. It would be best to either always or never use the `scalar-math` feature with glam. Primarily it's there to test the scalar code path, but also for people who don't want SIMD/16-byte alignment/unsafe.
> Out of curiosity, why do we need Mint's non-matrix types when the standard arrays exist and have wider ecosystem compatibility?
For things like vectors it's obvious. For matrices, there are choices between nested fixed-size arrays or flattening everything. For quaternions, there is a disagreement about the order of W with regards to the other components.
All in all, it's hard to draw the line where fixed-size arrays should be used, so we went ahead and had dedicated types for everything.
Condensed consensus from #24 call:

- `cgmath` is still widely used; we should probably not try to rewrite it.
- Generics and strong typing (`Vector` versus `Point`) are in the way. Simple free-standing functions work better for documentation, readability, and implementation.
- Types like `Vec3`. Yet, it would be useful to have fixed-function math available at some point, which maps to GPU.

I contest point 2 a little bit. A library that includes SIMD support will generally already do it by hand.
On July 24, 2019 3:51:43 PM CDT, Lokathor wrote:

> I contest point 2 a little bit. A library that includes SIMD support will generally already do it by hand.
Is it possible to know how to write math that the compiler knows how to translate to SIMD decently, or is it always going to be LLVM Black Magic?
The compiler is only good at auto-vectorization with plain "array-like" code; it's easy to outperform when matrix ops are involved, because you have to do things like shuffle around lanes, which isn't obvious to its auto-vectorization system.
To extend a bit on the points I mentioned in the last meeting:
`nalgebra-glm` really makes it easy to write the necessary operations used, for example, when writing a simple rendering engine (perspective, matrix mul, rotation, a few vec ops, ...). The freestanding-function approach also wins over methods in terms of readability for more complex operations IMO (cgmath sample: https://github.com/msiglreith/panopaea/blob/master/panopaea/src/ocean/empirical.rs#L118-L147)
As can be seen in `mathbench`, nalgebra seems to perform quite poorly (compilation settings?). `glam` on the other hand outperforms the other libraries, but I would expect higher and more consistent performance improvements when trying to introduce SIMD to a user application. Intrinsics and data layout should give a higher performance boost. Libraries like `pathfinder` seem to also have their own SIMD abstractions for this. Once more complex SIMD operations are required, 'escaping' from a math library which internally handles all the data layout seems tricky to me.
Therefore I feel that there are ~3 different use cases for math libraries in the gamedev ecosystem:

- `nalgebra-glm`
- `nalgebra`
- `simdeez`? Other libraries?

I'm not sure why `nalgebra` performs poorly on some operations; I haven't investigated it at all. Compilation-settings-wise, it's just whatever `cargo bench` defaults to (just release, I think). I did try with full LTO once, but it didn't make a huge difference.
Escaping the math library is easy. `glam` types that use SSE2 can convert (cast) to and from `__m128`.
I completely agree that you will get better performance by laying out your data in a SIMD-friendly manner and using SIMD intrinsics directly (or via a wrapper like `packed_simd`); however, as `glam` demonstrates, you can get a good performance improvement over scalar code with a SIMD-backed math library.
There's also the middle ground of loading an `f32` vector into SIMD registers for operations, so size and layout stay standard but performance is potentially better. I haven't tried this myself; @Lokathor has with `hektor`. I don't know how performance compares.
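That middle ground might look like the following sketch (hypothetical `Vec4` type; the intrinsics path is gated to x86_64, where SSE is part of the baseline target features):

```rust
// Plain [f32; 4] storage, so size and alignment stay standard; SIMD
// registers are used only inside each operation via unaligned load/store.
#[derive(Clone, Copy, PartialEq, Debug)]
#[repr(C)]
struct Vec4([f32; 4]);

impl Vec4 {
    #[cfg(target_arch = "x86_64")]
    fn add(self, rhs: Vec4) -> Vec4 {
        use core::arch::x86_64::*;
        unsafe {
            // Load both operands, add lanewise, store back to plain storage.
            let a = _mm_loadu_ps(self.0.as_ptr());
            let b = _mm_loadu_ps(rhs.0.as_ptr());
            let mut out = Vec4([0.0; 4]);
            _mm_storeu_ps(out.0.as_mut_ptr(), _mm_add_ps(a, b));
            out
        }
    }

    #[cfg(not(target_arch = "x86_64"))]
    fn add(self, rhs: Vec4) -> Vec4 {
        // Scalar fallback on other architectures.
        Vec4([
            self.0[0] + rhs.0[0],
            self.0[1] + rhs.0[1],
            self.0[2] + rhs.0[2],
            self.0[3] + rhs.0[3],
        ])
    }
}

fn main() {
    let a = Vec4([1.0, 2.0, 3.0, 4.0]);
    let b = Vec4([10.0, 20.0, 30.0, 40.0]);
    assert_eq!(a.add(b), Vec4([11.0, 22.0, 33.0, 44.0]));
}
```

The trade-off being discussed is exactly the extra load/store per operation, versus keeping a layout that graphics APIs and `unsafe` slice casts can rely on.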
One thing I've tried to do with `glam`, unrelated to SIMD, is follow the Rust API guidelines: https://rust-lang-nursery.github.io/api-guidelines/. In particular they recommend methods over functions; see https://rust-lang-nursery.github.io/api-guidelines/predictability.html#functions-with-a-clear-receiver-are-methods-c-method for the rationale.
The default profile sections do not include LTO for benchmarks, but yeah, it doesn't always make a difference (that's why it's off by default even for release and benchmark mode).

Oh, yeah, the other points:

- On my machine hektor fell just behind nalgebra with all the storing and loading, but I only really checked mat4*mat4. For others it went faster. So the results were confused, and I just decided to use nalgebra-glm instead of bothering to check much more at the time.
- The Rust style guidelines are just a few people's opinions and do not particularly lead to anything other than "it followed the style guide". Feel free to ignore them when it makes an API better.
I've not investigated yet why `nalgebra` and `cgmath` perform worse than `glam` in `mathbench`, but both could likely be improved to match `glam`. This could mean adding the right SIMD routines when auto-vectorization does not do the trick, which is possible even in generic code by doing some kind of "pseudo-specialization" (like that; those are `if` statements that are extremely straightforward for the compiler to remove in release mode). The only case where both `nalgebra` and `cgmath` can't expect to beat `glam` is 3D vectors and matrices, because not using 4 components means an extra load must be performed by the processor to get the components into an XMM register. But the waste of space in `glam` has its own drawbacks, already discussed by other comments on this issue.
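The "pseudo-specialization" being described boils down to branching on `TypeId` inside generic code: after monomorphization the condition is a compile-time constant, so release builds keep only one arm. A hedged sketch (function name and shape are illustrative):

```rust
use core::any::TypeId;

// Generic sum over 4 lanes. The TypeId comparison is resolved at
// monomorphization time, so the dead branch is compiled out entirely.
fn sum4<T: Copy + Into<f64> + 'static>(v: [T; 4]) -> f64 {
    if TypeId::of::<T>() == TypeId::of::<f32>() {
        // In a real library, a hand-vectorized f32 routine would live here.
        // For this sketch, both paths share the scalar implementation below.
    }
    v.iter().map(|&x| Into::<f64>::into(x)).sum()
}

fn main() {
    assert_eq!(sum4([1.0f32, 2.0, 3.0, 4.0]), 10.0);
    assert_eq!(sum4([1i32, 2, 3, 4]), 10.0);
}
```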
In any case, while I agree performance is extremely important, I think it should not be so significant regarding the design of an ideal math lib for gamedev. If we were to design such a matrix/vector lib, we should first focus on the features and the API. Performance is only a matter of putting enough time into it so things get auto-vectorized better, or by adding manual vectorization in hotspots. It should only rarely affect the actual API.
Now, an ideal API is extremely difficult to come up with, since different people require different features. Perhaps we could start by listing the solutions in other languages to see where others have converged in terms of gamedev math. We should also take a look at matrix/vector modules from popular frameworks like Unity. An ideal lib for gamedev should probably cover most of the low-level linalg features from those frameworks. Assembling a state of the art of popular gamedev matrix/vector frameworks would be very valuable to get some ideas and directions.
Regarding the API of nalgebra, it is quite complicated because of generics. It is designed so it will get much better as the Rust language evolves with const generics and with specialization, but we will probably not have both features for at least a couple of years. Though one way to very significantly improve the quality of nalgebra's docs in the short term is by fixing that four-year-old cargo bug: https://github.com/rust-lang/rust/issues/32077
So, as the `nalgebra-glm` docs state: the GLM C lib uses a lot of overloaded function calls, which Rust doesn't have. `nalgebra-glm` uses little name changes to evade that problem and stay with a free-function-oriented code base, which is often preferred for some kinds of math. At the same time some folks want methods, and I agree that they read better for some things.

I think the easiest way to do this is just do both. It's not hard, it just takes up some time to set up.

Example with `abs`, since it's near the top of the alphabetical list of things in the `nalgebra-glm` docs:
```rust
impl Vec3 {
    pub fn abs(self) -> Self {
        Self::new(self.x().abs(), self.y().abs(), self.z().abs())
    }
}

pub trait CanAbs {
    fn abs(self) -> Self;
}

// internal use only, so ignore the funny capitalization
macro_rules! impl_CanAbs {
    ($t:ty) => {
        impl CanAbs for $t {
            fn abs(self) -> Self {
                Self::abs(self)
            }
        }
    };
}

impl_CanAbs!(Vec3);
impl_CanAbs!(f32);

pub fn abs<T: CanAbs>(t: T) -> T {
    t.abs()
}

// both of these "just work"
abs(-3.9_f32);
abs(my_vec3);
```
`Self` resolves to the trait in a default trait impl, so the call is just recursive if you write the default method; using a written-out impl makes `Self` be the type, so it works out to use the same expression every time, and then the macro just speeds it up. Not all functions are so easy to do like this, but many are. I'll give it a try with more examples using the Hektor repo tonight or tomorrow.
@Lokathor I personally like it when there is only one way to do things. I don't want to spend time thinking about whether I should use `abs(my_vec3)` or `my_vec3.abs()` when coding. Maybe one day I decide to go for the first one, the next day for the second one, and I end up with inconsistent code.
The problem is choice.
I'll make it a feature flag :P
I don't know whether splines have their place here, but they exist and can be very useful for people doing animation / video games.
Adding a redundant API behind a feature flag gives you two ways to do things, plus the further complication of one of them sometimes not working, for reasons that will not be obvious to all users.
On Fri, Jul 26, 2019, 02:05 Lokathor wrote:

> I'll make it a feature flag :P
The API design constraints so far:
1) All the float ops for `f32` and `f64` are methods. This is just a fact of how `core` is designed and we can't change it.
2) Many people will naturally assume that if you can do `my_float.abs()` then you can also do `my_vec.abs()` as well.
3) Many other people want to be able to write `abs(val)` so that their code looks properly like math, instead of seeing `val.abs()` all over the place.
4) One person said that they don't want to have possibly two ways to do things.
The only way that I see to cater to all of this is:
Might be worth noting that Rust has a limited form of UFCS: https://doc.rust-lang.org/1.5.0/book/ufcs.html. This means one can have this:
```rust
trait Abs {
    fn abs(&self) -> Self;
}

fn foo(x: &impl Abs) {
    x.abs(); // method syntax
    Abs::abs(x); // function syntax
}
```
The sad part is that the `Abs::` needs to be present; we can't import a trait method into scope otherwise. But it's technically still a function syntax.
Personally, having only one way of doing each operation (counting optional ones enabled by feature flags) is more important than exactly following how the standard library scalar types work. It is also worth thinking through how comprehensible the generated rustdoc documentation will be for the library.
Alright, I dusted off the `hektor` repo and rebooted it, and gave it a very very simple setup to try out and see how `abs` might feel to look at in the docs: https://docs.rs/hektor/0.0.3/hektor/index.html
Searching for 'abs' shows a whole bunch of different ways to compute the absolute value which is as expected but doesn't really give any hint as to the "recommended" way to do the operation.
It is also a bit unfortunate that the [src] links are kind of unhelpful. The free function has basically a pass-through body that just calls the trait and otherwise tells you nothing, the trait is implemented inside a macro (which makes it slightly harder to understand) and ultimately just calls Self::abs(), and then only once you check the method on Vec2 do you actually get a function body that actually does something.
Some of that could certainly be better explained in the top-level documentation. However, that's basically how it works out with core and libm as well: it's, to an extent, something we just kinda have to live with, I think.
> Searching for 'abs' shows a whole bunch of different ways to compute the absolute value, which is as expected, but doesn't really give any hint as to the "recommended" way to do the operation.
Math library documentation should not explain how to use Rust. If a user sees in a library's docs a trait for a functionality, a method on a type implementing that functionality, and a free function for using that trait's functionality freely, they should be able to understand this themselves and choose how to access the implementation for their type. There is no one "recommended" way to choose between these three routes of executing a single exposed functionality, some of them work better in some cases than others. Documentation can show how to write out each of these forms, but it can't possibly explain to the user what to do with their application code with any efficacy without completely drowning out actually pertinent information.
I think it's important to note that supporting all number types matters: it's not enough to only provide `f32`. Glam and a lot of other vector libraries fall short here.
I read through the thread. There is a lot of great stuff here already, so I made notes as I went:
Are any of the ideas here being prototyped yet?
hektor has 0.2.1 out today. Mostly just impls for getting data into and out of the types, as well as trait impls for operations.
Actual "graphics math" ops aren't implemented yet, though you can see in the issues I've got all sorts of roadmap notes.
I put it into `mathbench`, and it performed basically the same as glam on euler 2d and euler 3d; more benchmarks to come.

To add to @aclysma's comment on f32 versus other types, my personal experience over the last 15 years in gamedev has been that f32 is used almost exclusively. The only exception was one time porting to some early mobile hardware that didn't have an FPU. I did poll some of my workmates also; some used int for doing screen coords for simple 2d stuff at home. Another said on one title they used int for some stuff because it uses a different part of the CPU, which meant they could get better throughput using both float and int. That game was on PS3 and Xbox 360; not sure what that particular optimisation was for. The engine I've worked on the most at my current job was designed for building WoW-scale MMORPGs, and it uses f32, not f64. In any case, the use cases for non-f32 are kind of niche from what I've seen. The main thing I think is that the kind of operations you will perform on int will be a subset of float, so being generic on type needs to deal with those differences somehow. Making things generic in general increases complexity of implementation and interface, which is unnecessary for the most common use case.
Conversely, in my hobby project I'm making very heavy use of linear algebra on `f64`, due to my worlds being on the order of 1:1 scale earthlike planets. My project would likely not be possible if not for nphysics being generic over the float type. Of course, I'm also very happy with nalgebra and have no plans to change in the foreseeable future.
The original complaint sounds like the existing solutions have quirks, are hard to understand, or are missing functionality. That wouldn't be fixed at all by introducing additional libraries that are also missing functionality and have quirks of their own, like not supporting more than just `f32` and having opaque sizes. At best, this just produces a situation where you're using `cgmath`/`nalgebra` in addition to some specialized library like `glam`/`hektor`. What will probably happen is that most people stick with `cgmath`/`nalgebra`, because those already work for the float case.
I'm working on fast TrueType font rendering and parsing, which is entirely in fixed-point and integer types (including integer matrix math). Even the actual rasterization is significantly faster with integer approximations (non-SIMD integer math outperforming SIMD float math in this case). `cgmath` and `nalgebra` are the only viable libraries for this.
I'm specifically using `cgmath` right now for my crates, and will for future crates, so I'm probably on the wrong side of history here, but it's nice to get the free interoperability.
I'd argue that professional gamedev requires a specialist library, so it depends on what problem people are trying to solve here.
- Weird, idiosyncratic design that simultaneously does too much and too little, and tries to shoehorn in genericism with complex traits that are difficult to internalize.
- Documentation [that] is nigh impossible to understand without extreme patience and a deep understanding of generics.
@bitshifter those were the problems outlined. I'm concerned that the takeaway from that was that only supporting `f32` is fine. I'm also not saying that only supporting `f32` is wrong; there's certainly a place for those libraries. I just think that for an ecosystem to grow around something new, it should at least match what already exists to some degree.
To be clear, I'm not against supporting other types. I'm of the opinion that supporting other types via generics is going to introduce complexity which, for the majority of users who only need `f32`, is unnecessary cognitive overhead.
I'm interested in what people consider specialist. Glam uses SIMD storage and has alignment requirements, so I can understand that, but AFAIK hektor uses scalar types for storage and only uses SIMD internally for some operations. Is that really specialist?
Uh, so I started hektor a long time ago, got some bad benchmarks, and just did other stuff for a while until I saw you post about glam; then I was driven to pick it up again, and here we are.
hektor and glam are ultimately like 98% identical libraries in their approach to things. It is in some sense goofy that we don't just team up directly, but I work on hektor as a way to learn the math involved, so I'm gonna keep chipping away at my roadmap for as long as I need to get it all done.
The biggest difference between hektor and glam is that hektor is `no_std`.
The big math libraries are `cgmath` and `nalgebra`. Both of them have deal-breaking flaws for many, namely (imo):

- `cgmath` has a weird, idiosyncratic design that simultaneously does too much and too little, and tries to shoehorn in genericism with complex traits that are difficult to internalize.
- `nalgebra` stretches the type system so far that the documentation is nigh impossible to understand without extreme patience and a deep understanding of generics.

As a result, several people have gone and made their own math libraries that try to solve those problems, which don't get adoption because the ecosystem hasn't grown up around them. I'd like this issue to house discussion on how exactly we can go about designing a library that addresses those issues, while developing enough consensus that we can potentially replace both of the current libraries.