toji opened this issue 2 years ago
I agree that this is an important use case, and thanks for reminding me to investigate it! I've been looking into this today and it's unfortunately not as trivial as it was in the prior version of the library if I want to stick to the TypeScript inheritance patterns I've been using so far. I fiddled around with mixins for a bit to try to make it work, but it starts to really mess with the type checking and docs generation. I'm starting to think I might just do a really dumb, basic thing and have a build script that copies the file and does a search and replace of `Float32Array` -> `Float64Array`. 😄
Yeah, I really had expected something like this to work:

```ts
const Vec2Factory = <T extends Float32ArrayConstructor | Float64ArrayConstructor>(TypedArray: T) => {
  class Vec2 extends TypedArray {
    // ... snip ...
  }

  // Instance method alias assignments
  // ... snip ...

  // Static method alias assignments
  // ... snip ...

  return Vec2;
}

export const Vec2_F32 = Vec2Factory(Float32Array);
export const Vec2_F64 = Vec2Factory(Float64Array);

export const Vec2 = Vec2_F32;
export const vec2 = Vec2_F32;
```
The interesting bit is that TypeScript doesn't report an error with just the above snippet. It just doesn't extend the class or add any of the methods (i.e. `set`) to it.
The problem I see is that the `Float32Array` and `Float64Array` interfaces are incompatible, mainly because the type itself is used as a parameter or return type in various methods (i.e. `filter`). So I created a new `FloatArrayInterface` generic that accepted `Float32Array | Float64Array`.
Then I was able to get the following error:

```
A mixin class must have a constructor with a single rest parameter of type 'any[]'
```
So with even more fiddling around the constructor I got the following to work, though it may seem a bit odd:
```ts
import { EPSILON, FloatArray } from './common.js';
import { Mat2Like } from './mat2.js';
import { Mat2dLike } from './mat2d.js';
import { Mat3Like } from './mat3.js';
import { Mat4Like } from './mat4.js';

interface FloatArrayInterface<T extends Float32Array | Float64Array = Float32Array | Float64Array> {
  readonly BYTES_PER_ELEMENT: number;
  readonly buffer: ArrayBufferLike;
  readonly byteLength: number;
  readonly byteOffset: number;
  copyWithin(target: number, start: number, end?: number): this;
  every(predicate: (value: number, index: number, array: T) => unknown, thisArg?: any): boolean;
  fill(value: number, start?: number, end?: number): this;
  filter(predicate: (value: number, index: number, array: T) => any, thisArg?: any): T;
  find(predicate: (value: number, index: number, obj: T) => boolean, thisArg?: any): number | undefined;
  findIndex(predicate: (value: number, index: number, obj: T) => boolean, thisArg?: any): number;
  forEach(callbackfn: (value: number, index: number, array: T) => void, thisArg?: any): void;
  indexOf(searchElement: number, fromIndex?: number): number;
  join(separator?: string): string;
  lastIndexOf(searchElement: number, fromIndex?: number): number;
  readonly length: number;
  map(callbackfn: (value: number, index: number, array: T) => number, thisArg?: any): T;
  reduce(callbackfn: (previousValue: number, currentValue: number, currentIndex: number, array: T) => number): number;
  reduce(callbackfn: (previousValue: number, currentValue: number, currentIndex: number, array: T) => number, initialValue: number): number;
  reduce<U>(callbackfn: (previousValue: U, currentValue: number, currentIndex: number, array: T) => U, initialValue: U): U;
  reduceRight(callbackfn: (previousValue: number, currentValue: number, currentIndex: number, array: T) => number): number;
  reduceRight(callbackfn: (previousValue: number, currentValue: number, currentIndex: number, array: T) => number, initialValue: number): number;
  reduceRight<U>(callbackfn: (previousValue: U, currentValue: number, currentIndex: number, array: T) => U, initialValue: U): U;
  reverse(): T;
  set(array: ArrayLike<number>, offset?: number): void;
  slice(start?: number, end?: number): T;
  some(predicate: (value: number, index: number, array: T) => unknown, thisArg?: any): boolean;
  sort(compareFn?: (a: number, b: number) => number): this;
  subarray(begin?: number, end?: number): T;
  toLocaleString(): string;
  toString(): string;
  valueOf(): T;
  [index: number]: number;
}

type FloatArrayConstructor = { new (...value: any[]): FloatArrayInterface, BYTES_PER_ELEMENT: number };

/**
 * A 2 dimensional vector given as a {@link Vec2}, a 2-element floating point
 * TypedArray, or an array of 2 numbers.
 */
export type Vec2Like = [number, number] | FloatArrayInterface;

function Vec2Factory<T extends FloatArrayConstructor>(TypedArray: T) {
  /**
   * 2 Dimensional Vector
   */
  class Vec2 extends TypedArray {
    /**
     * The number of bytes in a {@link Vec2}.
     */
    static readonly BYTE_LENGTH = 2 * TypedArray.BYTES_PER_ELEMENT;

    /**
     * Create a {@link Vec2}.
     */
    constructor(...values: any[]) {
      switch (values.length) {
        case 2: {
          const v = values[0];
          if (typeof v === 'number') {
            super([v, values[1]]);
          } else {
            super(v as ArrayBufferLike, values[1], 2);
          }
          break;
        }
        case 1: {
          const v = values[0];
          if (typeof v === 'number') {
            super([v, v]);
          } else {
            super(v as ArrayBufferLike, 0, 2);
          }
          break;
        }
        default:
          super(2);
          break;
      }
    }

    // ... snip ...
  }

  // Instance method alias assignments
  // ... snip ...

  // Static method alias assignments
  // ... snip ...

  return Vec2;
}

export const Vec2_F32 = Vec2Factory(Float32Array);
export const Vec2_F64 = Vec2Factory(Float64Array);

export const Vec2 = Vec2_F32;
export const vec2 = Vec2_F32;
```
The args on the constructor are typed as `any`, but when you actually try to create a new `Vec2` it shows the appropriate types from the super class (`ArrayBufferLike`), and will error if you try something else.
I think you would probably just replace your `FloatArray` with the above interface, and then wrap each class in a factory as I have done here. I can submit a PR if this makes sense. I honestly don't know, as I am not as familiar with TypeScript as I am with JavaScript.
*Edit: I see now this does mess up the constructor parameters, because we had wanted to allow `Vec2(1, 2)` and this no longer works (it throws `Argument of type 'number' is not assignable to parameter of type 'ArrayBufferLike'`). I can try to poke around some more, but I don't think TypeScript is going to allow this mixin. https://github.com/microsoft/TypeScript/issues/37142
Alternatively, if you are OK with getting rid of the `new` keyword on the API side, you can do this:
```ts
// ... interface snippet from above ...

function Vec2Factory<T extends FloatArrayConstructor>(TypedArray: T) {
  /**
   * 2 Dimensional Vector
   */
  class Vec2 extends TypedArray {
    /**
     * The number of bytes in a {@link Vec2}.
     */
    static readonly BYTE_LENGTH = 2 * TypedArray.BYTES_PER_ELEMENT;

    // ... constructor removed ...
    // ... snip ...
  }

  // Instance method alias assignments
  // ... snip ...

  const factory = (...values: [Readonly<Vec2Like> | ArrayBufferLike, number?] | number[]) => {
    switch (values.length) {
      case 2: {
        const v = values[0];
        if (typeof v === 'number') {
          return new Vec2([v, values[1]]);
        } else {
          return new Vec2(v as ArrayBufferLike, values[1], 2);
        }
      }
      case 1: {
        const v = values[0];
        if (typeof v === 'number') {
          return new Vec2([v, v]);
        } else {
          return new Vec2(v as ArrayBufferLike, 0, 2);
        }
      }
      default:
        return new Vec2(2);
    }
  }

  factory.BYTE_LENGTH = Vec2.BYTE_LENGTH;

  // Static method alias assignments
  factory.sub = Vec2.subtract;
  // ... snip ...

  return factory;
}

export const Vec2_F32 = Vec2Factory(Float32Array);
export const Vec2_F64 = Vec2Factory(Float64Array);

export const Vec2 = Vec2_F32;
export const vec2 = Vec2_F32;
```
```ts
import { Vec2_F64 as Vec2, vec2 } from './gl-matrix/vec2.js';

const a = Vec2(1, 2);
const b = Vec2([1, 2]);

a.add(b);
vec2.sub(a, a, b);
```
I like using `new` for a couple of reasons. For one, it's using things the "right way", and I find that it's generally better to stick with design patterns as they're meant to be used rather than get too cute with JavaScript tricks; as browsers evolve, they are more likely to optimize the common patterns and more likely to break or deoptimize One Weird Trick-style code.
It also serves as a pretty clear indicator to devs about the type of work being done. `new` "feels" more expensive than a function call (despite the fact that that's often laughably untrue), and as such provides a mental nudge to use it accordingly.
That said, I'm still open to changing up the patterns here if there's a clear benefit to doing so, but in this case I'm not convinced that jumping through template hoops is actually going to yield a better, more usable result than just doing the dumb copy thing. (I tried out that route here; we'll see how I feel about it after a few more revisions.)
> The problem I see is that the `Float32Array` and `Float64Array` interfaces are incompatible mainly because the type itself is used in a parameter or returned in various methods (i.e. `filter`). So I created a new `FloatArrayInterface` generic that accepted `Float32Array | Float64Array`.
I came to the same realization, so I added a `type FloatArray = Float32Array | Float64Array` that I'm using for all the `Vec/Mat*Like` types now. It's simpler than what you proposed, though, and there are aspects of the interface direction you showed that I kind of like, so it's worth experimenting more. I'm also seeing some aspects of the code you posted that are variants on what I was trying that might make a difference, so I'm definitely going to keep toying with it!
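To illustrate why the plain union works for the `*Like` types: as long as only the shared read surface is touched, no `Float32Array`/`Float64Array` method incompatibility (from methods like `filter`) comes into play. This is a sketch with a made-up helper, not library code:

```typescript
// Sketch of the simpler union type mentioned above; `lengthOf` is an
// illustrative helper, not part of gl-matrix.
type FloatArray = Float32Array | Float64Array;
type Vec2Like = [number, number] | FloatArray;

function lengthOf(v: Readonly<Vec2Like>): number {
  // Only indexed element reads are used, so both backings (and plain
  // number tuples) type-check against the same signature.
  return Math.hypot(v[0], v[1]);
}

lengthOf(new Float32Array([3, 4])); // 5
lengthOf(new Float64Array([3, 4])); // 5
lengthOf([3, 4]);                   // 5
```

The incompatibility only bites once a method that mentions the array type itself (e.g. `filter(): Float32Array`) appears in the required surface, which is what pushed the interface approach above.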
Thanks for the feedback!
Just published a new beta version to npm for all you fine people to try out!
There are still things I'm investigating, but there was also enough new here that I felt it was worthwhile to push a new version and get feedback. I'm trying to both pull in feedback from this thread and address a variety of longstanding issues filed against the library.
- `Float64Array` backing of all types (see README for usage details)
- `Mat4.normalFromMat4()` method (calculates the transpose inverse and drops the translation component)
- Added the `order` arg back to `Quat.fromEuler()`
- `Mat4.frustumZO`; allowed `frustum` methods to accept an infinite far plane
- Added `| null` to return types of methods that may return `null` (such as matrix inversion)
- `toRadian()` and `toDegree()` added to common.ts
- `Vec*.abs()` static and instance method added to all vectors
- Added `main`, `module`, `exports`, and `types` to `package.json`
Oh no, if WebGPU uses left-handed, I blame you, @toji 😄
Left-handed matrix math should have been expunged from the world years ago... In fact it had been until some DirectX software engineer in the 90s who never took physics thought "I'll just do this my own way".
WebGPU doesn't make any assumptions about the handedness your app uses. Typically you just define whatever coordinate system you want, and then finally apply the projection matrix, which also bakes in the transformation from the coordinate system used by your app to NDC, which in WebGPU's case is indeed left-handed. But absolutely nothing is stopping you from writing your entire app in a right-handed coordinate system of your choice.
@toji As WebGPU has z defined from 0..1 in NDC, it also opens up the possibility of reversed z order, which I assume a lot of us will be doing.
I am a little torn here on how much I think gl-matrix should assist in enabling this, because it is simply a matter of baking another transform into the projection matrix produced by gl-matrix. I think the current solution is fine, and it is OK to be opinionated and not expand the API to include handedness when creating the perspective matrix, but maybe it should be documented? Also, if a lot of users struggle with this, then maybe we could add some convenience methods to Mat4 to convert to and from other coordinate systems if we can find an API that makes sense. I am sure you have considered this though; what do you think?
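As a concrete illustration of "baking in another transform": the usual trick is to left-multiply the projection by a fixed remap matrix. This sketch uses plain column-major arrays rather than the actual gl-matrix API, and only shows the [-1, 1] → [0, 1] depth-range half of the conversion (a reverse-z setup would be another such remap):

```typescript
// Illustrative only: a column-major 4x4 that remaps WebGL-style NDC depth
// ([-1, 1]) to WebGPU-style ([0, 1]) via z' = 0.5 * z + 0.5 * w.
const GL_TO_WGPU_DEPTH = new Float32Array([
  1, 0, 0,   0, // column 0
  0, 1, 0,   0, // column 1
  0, 0, 0.5, 0, // column 2
  0, 0, 0.5, 1, // column 3
]);

// Minimal column-major multiply (out = a * b), standing in for Mat4.multiply.
function mul4(a: Float32Array, b: Float32Array): Float32Array {
  const out = new Float32Array(16);
  for (let c = 0; c < 4; c++) {
    for (let r = 0; r < 4; r++) {
      for (let k = 0; k < 4; k++) {
        out[c * 4 + r] += a[k * 4 + r] * b[c * 4 + k];
      }
    }
  }
  return out;
}

// Usage sketch: wgpuProjection = remap * glStyleProjection
// const wgpuProjection = mul4(GL_TO_WGPU_DEPTH, glStyleProjection);
```

Since this is one constant multiply at projection-build time, it is cheap enough that documenting the recipe may indeed be sufficient.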
Hi @toji.
I'm finally able to get around to investigating the state of `beta.2`. I can verify that the tree-shaking changes are valid over `beta.1`; however, I'd like to once again discuss a better `package.json` configuration as per several of the points brought up in prior discussion. This involves an efficient separation of the classic and modern API (and f64 variants), type declarations, and `package.json` configuration.

I'll create a fork of `beta.2` and put together what I think is the best set of solutions, including a really slick TypeDoc / documentation example that accurately represents the contents of the package. I've put in a bit of time working on TypeDoc tooling in the past year for automatic API docs from a well-configured `package.json` w/ types. I expect this work to be done sometime next week, so I do hope you can review and consider this effort as a proposed way to distribute `gl-matrix`.

I've been shipping my fork of `beta.1` for the last year, with the modern API only, for my larger framework that re-exports `gl-matrix`.
I'm definitely interested in taking a look, @typhonrt! I'll admit that futzing around with `package.json` and friends is one of my least favorite aspects of library maintenance, simply because it's often opaque to me how some of the values in there affect other people's workflows. I'm happy to consider ways to make it more flexible and less fragile.
> I'm definitely interested in taking a look...
Almost done... I've taken a very thorough approach to this effort to deliver a rock-solid option to review that is 100% modern, continuing off what you have done so far vs. outright replacement of the build process. Hopefully tomorrow / maybe Monday. Just mentioning this small delay since, if what I present is approved, merging with the `glmatrix-next` branch will be much easier if no commits are made there at this time. I'm not expecting that per se, but it's just a consideration.
Edit (6/9/24): I've been putting together a video overview of the proposed v4 distribution I've created. I finished all of the coding / maintenance last week. Just dropping the code will not explain the problems of "beta2" and my proposed changes. The video & code drop will be available this week.
Alright @toji et al.
I have got all of my proposed distribution changes complete, along with a video overview due to the comprehensive nature of the work involved. This was a lot of work and I treated the process with full due diligence in the aim of speeding along the transition of `gl-matrix` to a fully released version 4. There is a lot here. I came up with a way to cleanly handle the new 64-bit variation of the API introduced in `beta.2`, fixed all TypeScript errors in the repo, did general code cleanup (formatting of tests), ensured full support of CommonJS / ESM Node distribution w/ all types available, added new TypeScript ambient module declarations for the swizzling API, generated API docs that completely represent the library, and more. It's a lot to take in really, but this effort fully modernizes `gl-matrix` and IMHO makes it production ready.
A video overview is available here that discusses the changes in reasonable detail: https://www.youtube.com/watch?v=dZbk5E9nTbw
I don't often make large potential PRs like this to projects without prior negotiation, so hopefully this is well received; it's also the first time I've made a video describing a potential PR. I found that this is likely an easier way to kickstart a discussion. I do hope you can take the time to review things and continue the discussion. My motivation as a downstream consumer of `gl-matrix` is to make all of the libraries I depend upon as rock solid as possible, in an effort to also make my framework rock solid.
The approaches taken show a very modern way to release a Node package using TypeScript and `NodeNext` module resolution. Hence my PR and single commit is titled "NodeNext Overhaul".
**Resources**
- Main fork repo: https://github.com/typhonjs-svelte-scratch/gl-matrix-beta2/tree/glmatrix-next
- New and complete package API docs: https://typhonjs-svelte-scratch.github.io/gl-matrix-beta3-dist/
There is a GitHub distribution for this release that can be used right now by assigning `gl-matrix` in `package.json` under `dependencies` as follows:

```json
{
  "dependencies": {
    "gl-matrix": "github:typhonjs-svelte-scratch/gl-matrix-beta3-dist#glmatrix-next"
  }
}
```
I am more than willing to take the time to fully discuss this potential PR and work with you on getting `gl-matrix` v4 to a full production release.
This is really fantastic, @typhonrt! I'm blown away by the amount of effort you've put into this, thank you!
I watched through your video, took a look at the generated docs, and tried to use your build in my own code and have a couple of initial bits of feedback:
First, I wasn't able to get your build to work in one of my more recent projects, and it seems like it's mostly due to the `dist/_lib/f32/*` files using non-relative imports. Specifically it got caught on `import { GLM_EPSILON } from 'gl-matrix/common';` in multiple files. I imagine this should be relatively easy to sort out, but it does highlight a priority for me: as much as possible I want the library to be usable in Node and the browser without requiring a build step on the part of the app. (It's fine if the library itself needs a build step; that's pretty much a given.) To be honest this has been one of the most difficult parts of not just developing this library but web dev in general lately. Everyone uses different environments with different rules about things like module path resolution, and it's really easy to pull in a library that simply doesn't work because it was built for a different environment than yours.

It may not be possible to satisfy every possible build environment, but I think this particular issue will be easy to fix, and I mostly want to ensure that the library stays as environment-neutral as possible.
The other issue that came up was I noticed that in the (much improved!) vector docs there's now lots of entries for every possible swizzle, which makes it difficult to browse for the more commonly used methods and attributes. I'm curious if you have any thoughts on ways that we could document the swizzles but have them separated out from the rest of the vector docs. This is tricky because I think the work that you've done to make the swizzles work better with the type system overall is amazing, but I suspect it also makes it harder to handle them in a unique way. In any case, I just don't want them harming readability of the docs.
But putting some of those concerns aside, I really can't compliment you enough on the effort you've put in here! I nodded along with most of your video explanation, having run into some of the problems you pointed out before and being surprised by others simply by virtue of not having much experience with building packages for environment XYZ. I'm very happy to work with you to get over some of the remaining hurdles and roll your updates into the repo, as it feels like it will offer a more robust developer experience all around. Thank you again, and I look forward to working with you to make v4 production ready!
I'd be glad to address the concerns you brought up @toji
> The other issue that came up was I noticed that in the (much improved!) vector docs there's now lots of entries for every possible swizzle, which makes it difficult to browse for the more commonly used methods and attributes.
I have made a 2nd commit that groups the swizzle API additions in the vector classes under the `Swizzle API` category using `@category`. I also corrected `BYTE_LENGTH`, with it now being placed in the `Static` category, and created a `Constructor` category. I added `typedoc.json` for clearer configuration of TypeDoc, plus `defaultCategory` and `categoryOrder` for a consistent category order in the classes that use `@category`. Check the refreshed example of the API docs to confirm that all looks good on the docs front.

Edit (post docs / 2nd commit): One thing I did notice, though, is that using a default category of `Member` is not necessarily ideal with the inherited symbols from the backing array class when `inherited` is selected in the docs settings. With TypeDoc, it seems once you start using `@category` it groups all symbols from the inherited class into the default category. It might be best to add `@category Methods` to all methods; then anything inherited from `Float32Array`, etc. gets put into a default `Other` category.
> First, I wasn't able to get your build to work in one of my more recent projects, and it seems like it's mostly due to the `dist/_lib/f32/*` files using non-relative imports. Specifically it got caught on `import { GLM_EPSILON } from 'gl-matrix/common';` in multiple files. I imagine this should be relatively easy to sort out, but it does highlight a priority for me: As much as possible I want the library to be usable in node and the browser without requiring a build step on the part of the app. (It's fine if the library itself needs a build step, that's pretty much a given.) To be honest this has been one of the most difficult parts of not just developing this library but web dev in general lately.
I agree that there are challenges here and they should be addressed. The Node package in the first commit that I created does require a bundler for web distribution. I should note, though, that using `NodeNext` module resolution and sub-path exports does solve the clean implementation of the 64-bit API, so it does have value. I'm mainly in the Node / Browser / Bundler camp. The demo I showed of my framework running in the browser does go through the bundling process, without Babel, as ES2022 (supported by 94% of installed browsers) is required for my framework.

The solution is to create all-inclusive ESM CDN bundles of both variations of the library. I was going to do this, but didn't want so many moving parts in this potential PR; I can definitely address it. It certainly is possible to involve Babel if you really want to cast a wide net with the CDN bundles. IMHO these CDN bundles should be ESM and loaded from a standard import route pointing to the bundle, or via import maps. The world as of this moment is firmly tilting towards ESM.
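To make the import-map route concrete, a page could map the bare specifier to a CDN bundle along these lines (the URL and path are placeholders, not a published location):

```json
{
  "imports": {
    "gl-matrix": "https://example-cdn.test/gl-matrix/dist-cdn/gl-matrix.js"
  }
}
```

This JSON goes inside a `<script type="importmap">` tag before any module scripts, after which `import { Vec2 } from 'gl-matrix';` resolves directly in the browser with no build step.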
Edit: A UMD bundle should be provided; previously I mentioned not providing one.
My general idea here is to create a `dist-cdn` folder that contains separate, all-inclusive bundles for the 32/64-bit variations of the library. This also brings up general concerns around consumption in alternate runtimes like Deno and Bun, neither of which I have personally used yet. There are several ESM CDNs that automatically deploy Node packages, like esm.sh, jsDelivr, and unpkg. Deno, to my understanding, uses esm.sh. Just like TypeScript lagging behind on modern Node features like sub-path exports (for ~4 years!), my understanding is that not all ESM CDNs are created equal at this point and may not correctly support sub-path export based Node packages.
However, it's my general understanding that one can handle this by setting the `main` and `types` fields at the top level of `package.json` to the ESM CDN bundle, and most of those ESM CDNs will refer to that instead of the `exports` field. Any consumption via Node / bundler still refers to `exports` over `main` / `types`, so that is fine. There are also specific fields one can add for the various ESM CDNs to point them at an entry point. A minor complication with only having one entry point for some of these ESM CDNs is that it will refer to the 32-bit API. There will be a specific 64-bit bundle, but that will have to be resolved with direct path loading on the CDN in question.
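In other words, the top-level fields could look something like this sketch (the file paths are placeholders, not the actual layout):

```json
{
  "main": "./dist-cdn/gl-matrix.js",
  "types": "./dist-cdn/gl-matrix.d.ts",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js"
    }
  }
}
```

ESM CDNs that don't understand `exports` fall back to `main` / `types` and serve the pre-built bundle, while Node and bundlers keep resolving through `exports` as before.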
One of the big pains, to my understanding, with Deno and such is that there is no easy way to test a Node package before it is released and consumed by esm.sh. For complete testing, this may require me to release a non-announced version of the lib under my own control so that it hits the ESM CDNs; then I can test on Deno and such, and remove that package from NPM afterward. Taking this approach will at least reduce the churn of pumping out extra beta releases of `gl-matrix` itself.
Now that we are discussing CDN bundles and such, I'd like to bring up the possibility of switching to Rollup for the library bundling. I specifically didn't want to change much of the build process that you put in place with `beta.1` / `beta.2`. With Rollup, though, it will be possible to have one configuration file that handles everything from building the library to producing the CJS builds and extra CDN bundles, without the need for a separate script. Likewise, `esm-d-ts` (my tool that creates the bundled type declarations) can be used from Rollup, so it really would be a single build file. Run Rollup and it just goes.
So, potential courses of action that I'd like your input on before I start any work:

- Create ESM CDN bundles; the 32-bit version will have the `classic` remapped API / the 64-bit version will not.
- Update `package.json` for `main`, `types`, and the various ESM CDN fields that are known.
- Create UMD bundles.
- Test in Deno & Bun. The large part of the JS world does not use either, FYI, but for completeness' sake.
- Possibly switch to Rollup for building the library. See the section above.
- Update README to give CDN instructions and a Deno example.
- Add ESLint for code formatting errors.
- Add `.editorconfig` for automatic IDE configuration.
- Possibly add Playwright and headless browser tests for CDN bundles.

That is a fair amount of action items. I'd of course like to find a happy medium where I can make a PR that will be accepted. A fair amount of the tasks above are additional hardening beyond the current `NodeNext` refactor. I'd be glad to be involved in everything it takes to get a final production release, but finding that merge point and not having it go on too long is ideal as well, i.e. I'd like to do more follow-up through additional PRs rather than one mega one.
I suppose the above is a bit TL;DR... The initial work that I posted about was the Node package overhaul itself. The remaining work of coming up with some tidy, all-inclusive pre-built distributions of the library is completed. So @toji, pick your poison, so to speak, from `dist-cdn` for pre-built direct-in-browser use cases. There are ESM 2016 / 2022 options, with the 2016 version transpiled. There is also a fully functional ES5 / UMD version. I've fully tested them manually, including testing the UMD version in RequireJS (whoa, haven't used that in about ~10 years). No real need to add Playwright for testing the CDN bundles. I did have to get Rollup involved in making the UMD CDN bundles, as esbuild doesn't support UMD. You can see how that is accomplished here.
Tomorrow I'm finishing up some really neat additional documentation of the CDN bundles that is co-hosted with the main Node distribution API docs. There are full types for the CDN bundles, and `package.json` should be configured correctly for ESM CDNs to pick up the right bundle, which means Deno should work, but that remains to be tested as it requires releasing the package. After the CDN docs are done, the only remaining nice-to-have maintenance task is to add ESLint / `.editorconfig` to the party. So probably tomorrow or Tuesday for 100% completion.
Just dropping this here for the UMD / RequireJS proof of life (the long tail is supported):
Sorry if I'm having a bit of trouble keeping up. Been busy with other things. Also I'm not going to be available for reviews for most of the upcoming week, so I promise that lack of feedback isn't due to anything on your end. 😅
Updated docs are much easier to parse, thanks! Swizzles still take up a ton of page space, but at least they're far easier to navigate around now. I do think adding `@category` to all the methods like you mentioned is probably the right thing to do long term, but that can easily be handled in a follow-up!
I'll say up front that if UMD can be supported with minimal effort then that's great, but I'm actually not too concerned about supporting it if it proves to be a maintenance burden or imposes design decisions or dependencies we'd rather not have. I don't think it's used much in modern JS development, and I'd rather focus on making the more common, recommended paths work well.
I do view Deno and Bun support as higher priority since they're becoming more popular, but my understanding is that it should take minimal to no changes vs. Node support? Fingers crossed that that's the case. 🤞
In respect to the bundling, I think it's a good idea to have the various bundles available, but I'm also not sure if I understand why the separate files can't be made compatible with both browsers and node? Specifically, I don't think I understand the benefit you mentioned with the absolute paths when it comes to using the 64 bit vs 32 bit version of the lib? From looking at the folder structure it seems like both of them could access the common files by using the same relative path, since they reside in the same relative depth in the directory tree.
Finally, a couple of quick notes:
- Yes, ESLint is a good idea.
- Wasn't aware of .editorconfig, but sounds nice!
Sorry I don't have time for more in-depth feedback, but again I'm really impressed with this effort and think it should be ready to merge in as beta.3 after a couple of relatively minor details are sorted out! Thank you!
> Sorry if I'm having a bit of trouble keeping up. Been busy with other things. Also I'm not going to be available for reviews for most of the upcoming week, so I promise that lack of feedback isn't due to anything on your end. 😅
No worries. I spent the last week finishing the "2nd half" / CDN Bundle configuration, optimized the build process, improved the API doc support, and got ESLint v9 and latest TS ESLint support all hooked up and source in line with it.
Here is the latest video covering the work in the last week that provides clarity / answers questions: https://www.youtube.com/watch?v=y0J_RkTfrag
API docs are updated. There is placeholder information in the CDN README that I'd be glad to update after `beta.3` drops, when it can be fleshed out with full examples of ESM CDN usage / Deno setup, etc.
Main fork repo: https://github.com/typhonjs-svelte-scratch/gl-matrix-beta2/tree/glmatrix-next
> Updated docs are much easier to parse, thanks! Swizzles still take up a ton of page space, but at least they're far easier to navigate around now. I do think adding `@category` to all the methods like you mentioned is probably the right thing to do long term, but that can easily be handled in a follow-up!
I already got the `@category` aspect sorted. So in the next week or so I'll be updating `typedoc-pkg` to use TypeDoc `0.26.x`. There are two new features of `0.26` that will make things nicer. When using categories, they are now foldable / collapsible in the main body of a particular page. There is also a new feature for hosting additional markdown files that appear in the left-hand side navigation, allowing more in-depth API usage / tutorials, etc. to be broken out from the main README. I should have this done around the time you get to the code review.
> I'll say up front that if UMD can be supported with minimal effort then that's great, but I'm actually not too concerned about supporting it if it proves to be a maintenance burden or imposes design decisions or dependencies we'd rather not have.
It's all good and handled including type declaration support even for the IIFE / global use case.
> I don't think it's used much in modern JS development and would rather focus on making the more common, recommended paths work well.
Re: UMD, indeed it's probably not used much at all these days. I did make a point of highlighting import maps and making sure everything is good to go for the most modern way of consuming ESM via ESM CDNs. The video highlights this use case.
> I do view Deno and Bun support as higher priority since they're becoming more popular, but my understanding is that it should take minimal to no changes vs. Node support? Fingers crossed that that's the case. 🤞
I think Bun will be more like Node in terms of consuming NPM packages. I did review the Deno documentation more, and somewhat recently they added the capability to load packages from NPM, but ESM CDNs remain the recommended way to work with Deno. Things should be covered for this use case, and I'll look into testing everything after a `beta.3` release of `gl-matrix`.
> In respect to the bundling, I think it's a good idea to have the various bundles available, but I'm also not sure if I understand why the separate files can't be made compatible with both browsers and node? Specifically, I don't think I understand the benefit you mentioned with the absolute paths when it comes to using the 64 bit vs 32 bit version of the lib? From looking at the folder structure it seems like both of them could access the common files by using the same relative path, since they reside in the same relative depth in the directory tree.
In the build optimization work, which is covered in the video, the Node / NPM distribution is now fully bundled via ESBuild. This provides clarity that you can't copy individual files and run them raw on your local web server. If you use import maps it is possible, but if you are doing that, using an ESM CDN is best. I also show a very easy way for the scant few folks who want to create a custom `gl-matrix` build to run on a web server; this can be added to the API docs / consumption use cases. Check the video here: Custom Bundles.
> Finally, a couple of quick notes:
> - Yes, ESLint is a good idea.
> - Wasn't aware of .editorconfig, but sounds nice!
I got the latest ESLint v9 + latest typescript-eslint support set up with the source & tests fully covered. `typescript-eslint` was nice for highlighting further improvements to the TS code.
> Sorry I don't have time for more in-depth feedback, but again I'm really impressed with this effort and think it should be ready to merge in as beta.3 after a couple of relatively minor details are sorted out! Thank you!
No worries again. In many ways the work I did rewinds accrued technical debt and prepares `gl-matrix` for modern distribution on Node and well beyond, so things should be good for the next 5+ years regarding distribution. I'd be glad to give the Deno / Bun verification aspects a solid run-through once `beta.3` drops.
The next maintenance task that could be tackled is getting the source code coverage in tests up to 100%. When I loop around to update TypeDoc support to `0.26` next week, I'll take a couple of hours to knock out the low-hanging fruit there. I think it'd take no more than a day to get this sorted, especially with the Vitest UI involved.
But otherwise I do need to wrap things up, as from my perspective things are in a very sound, production-grade, releasable state.
Maybe we can use this approach to remove those empty alias functions in the class definition:
```typescript
export class Vec2 extends Float32Array {
  // ...

  // Static method alias assignments
  static sub = this.subtract
  static mul = this.multiply
  static div = this.divide
  static dist = this.distance
  static sqrDist = this.squaredDistance
  static mag = this.magnitude
  static length = this.magnitude
  static len = this.magnitude
  static sqrLen = this.squaredLength
}

// Declaration merging gives the instance aliases their types
export interface Vec2 {
  sub: typeof Vec2.prototype.subtract
  mul: typeof Vec2.prototype.multiply
  div: typeof Vec2.prototype.divide
  dist: typeof Vec2.prototype.distance
  sqrDist: typeof Vec2.prototype.squaredDistance
}

// Wire up the instance aliases at runtime
;(
  [
    ['sub', 'subtract'],
    ['mul', 'multiply'],
    ['div', 'divide'],
    ['dist', 'distance'],
    ['sqrDist', 'squaredDistance']
  ] as const
).forEach(v => (Vec2.prototype[v[0]] = Vec2.prototype[v[1]] as any))
```
I also think it would be better if the return type of a static method like `copy` could be exactly `Vec2` rather than `Vec2Like` when the `out` parameter is a `Vec2`:
```typescript
static copy<T extends Vec2 | Vec2Like>(out: T, a: Readonly<Vec2Like>) {
  out[0] = a[0]
  out[1] = a[1]
  return out
}

const x = Vec2.copy([0, 0], [1, 2]) // x: [number, number]
const y = Vec2.copy(Vec2.create(), [1, 2]) // y: Vec2
y.normalize()
```
> I also think it would be better if the return type of a static method like `copy` could be exactly `Vec2` rather than `Vec2Like` when the `out` parameter is a `Vec2`:
This seems like a reasonable change for the methods it applies to. Before I make such a change for `beta.3` it would be great to get input from @toji.
@toji et al,
Alright.. I didn't disappear. After a short summer OSS break I set out to thoroughly update my TypeDoc theme, i.e. the Default Modern Theme (DMT). It's a theme augmentation that takes the generated output of the default TypeDoc theme and significantly improves features, UX, and accessibility. Just like my hoped-for / proposed `beta.3` for `gl-matrix`, I am in discussion with the TypeDoc maintainer to mainline the DMT, so everyone who uses TypeDoc can benefit.
TypeDoc `0.26.x` added some new features that are useful, but IMHO they aren't particularly complete or fully baked from a UX perspective. It took me the last 5 weeks full time to get the DMT updated, but I significantly improved these features and more. The two features that are most useful for `gl-matrix` are the ability to add additional Markdown files to the API docs and the folding of doc content sections. The DMT improves both of these features. The content folding is particularly relevant to `gl-matrix` and something that Toji was concerned about, especially with the swizzle accessors being visible in the docs.
Please see this video to get a side by side comparison between the DMT (on the left) and the default TypeDoc experience (on the right): https://www.youtube.com/watch?v=MFJW6FxfvCg
I strongly believe that a package like `gl-matrix` needs A+ API docs. With the work I did over the summer on package re-organization / maintenance and the API docs progress, I'd really like to think things can be greenlighted soon and a `beta.3` release made. I'd be more than happy to follow up on any of the CDN details and flesh out that side of the docs too.
You can view a live version of the API docs here: https://typhonjs-svelte-scratch.github.io/gl-matrix-beta3-dist/index.html
Do check out the previous post above, as you can try out the proposed `beta.3` release now.
For those interested in producing similar API docs like I have for `gl-matrix`, all you really need is `typedoc-pkg`, as it combines everything into one zero / low config CLI; all of my TypeDoc related packages are here:
I just updated `ts-lib-docs` for complete coverage of all built-in TS `5.6.2` lib declarations as well.
glMatrix was a project I started 12 years ago(!), because at the time there weren't too many good options for doing the type of matrix math realtime 3D apps required in JavaScript. I never thought that anyone outside of myself and a few early WebGL developers would use it, and certainly didn't anticipate it becoming as popular as it did.
Fortunately for everyone, the landscape for 3D on the web looks a lot different today than it did over a decade ago! There are a lot of great libraries that offer comprehensive tools for creating realtime 3D web apps, usually with their own very competent vector and matrix capabilities built in. Many of these offer features or syntax niceties that glMatrix hasn't been able to match due to its history and design ethos.
The web itself has also evolved in that time. When I published the first version of glMatrix Chrome was on version 5, Firefox was at version 3, and Internet Explorer was still a thing developers cared about. Node.js and TypeScript hadn't even been released! We've made astronomical strides in terms of the capabilities of the Web platform since then. For example, none of the following existed (or at the very least were widespread) at the time glMatrix was first developed:
- `let` and `const`
- Arrow functions (`(x) => { return x; }`)
- Spread syntax (`someFunction(...args);`)
- `Float32Array` wasn't around until shortly after!

Although glMatrix has been updated in small ways over the years to take advantage of some of these things, it hasn't strayed too far from its original design. But these days we can do so much better, and despite the excellent competition I strongly believe there's still a need for a solid, standalone vector and matrix math library.
I've had a bunch of ideas floating around in my head for how glMatrix could be updated to take advantage of the modern web platform, but haven't had an opportunity to work on it until recently. (Let's be honest, the library has needed some maintenance for a while now.) Now that I've had a chance to try some of it out, though, I feel pretty confident that it's a viable direction for the future of the API.
So let me walk you through my plans for a glMatrix 4.0! Feedback highly appreciated!
Backwards compatibility first and foremost
glMatrix has a lot of users, and they have a lot of carefully written algorithms using the library. It would be unrealistic to expect them to do complete re-writes of their code base just to take advantage of a nicer code pattern. So the first principle of any update is that backwards compatibility is always priority number one.
This doesn't mean that EVERYTHING is carried forward, mind you. I think it's appropriate for some functionality to be labeled as deprecated and for some lesser-used, fairly awkward bits of the library to be dropped. (I'm looking at you, weird `forEach` experiment that never quite worked the way I wanted.)
But the majority of the library should be able to be used with minimal or no changes to existing code, and new features should cleanly layer on top of that existing code rather than requiring developers to make a wholesale switch from the "old way" to the "new way".
Lots of ease-of-use improvements
glMatrix was designed for efficiency, but that left a lot to be desired in terms of ease-of-use. There's only so much that JavaScript allows in terms of cleaning up the syntax, but with some slightly unorthodox tricks and taking advantage of modern language features we can improve things quite a bit.
Constructors
The current syntax for creating vector and matrix objects isn't ideal (I'll be using `vec3` for examples, but everything here applies to each type in the library).
We'd much rather use the familiar JavaScript `new` operator. Turns out we can, without losing any backwards compatibility, simply by declaring our class to extend `Float32Array`!
This allows us to use a few variants of a typical constructor.
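As a sketch of how those constructor variants might be implemented (the minimal `Vec3` and exact overloads here are illustrative, not the final API; the single rest parameter also keeps TypeScript's mixin rules happy):

```typescript
// Minimal stand-in for the real class; only the constructor is shown.
class Vec3 extends Float32Array {
  constructor(...values: any[]) {
    switch (values.length) {
      case 3:
        // new Vec3(x, y, z): initialize from individual components
        super(values);
        break;
      case 1:
        // new Vec3(other): copy from another vector or array-like
        super(values[0]);
        break;
      default:
        // new Vec3(): zero-initialized, length 3
        super(3);
    }
  }
}

const a = new Vec3();          // [0, 0, 0]
const b = new Vec3(1, 2, 3);   // [1, 2, 3]
const c = new Vec3(b);         // independent copy of b
```

Because the subclass still passes through to the `Float32Array` constructor, instances remain genuine typed arrays (`b instanceof Float32Array` is `true`).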
It's pretty flexible, and not that complex to implement! And of course, because `Vec3` is still a `Float32Array` under the hood, you can pass it into WebGL/WebGPU (or any other API that expects typed arrays or array-like objects) with no conversion.
Static methods
For backwards compatibility we'll keep around `vec3.create()` and friends, but have them return instances of the new `Vec3` class instead of raw typed arrays. In order to keep everything together, they'll become static methods on the `Vec3` class. The same goes for every other existing method for a given type.
Used as:
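A hedged sketch of what that could look like; the method names mirror glMatrix 3.x, but the minimal `Vec3` here is illustrative rather than the library's actual code:

```typescript
class Vec3 extends Float32Array {
  constructor(x = 0, y = 0, z = 0) {
    super([x, y, z]);
  }

  // Backwards-compatible factory, now returning a Vec3 instead of a raw Float32Array
  static create(): Vec3 {
    return new Vec3();
  }

  // The familiar out-parameter style, kept as a static method on the class
  static add(out: Vec3, a: Vec3, b: Vec3): Vec3 {
    out[0] = a[0] + b[0];
    out[1] = a[1] + b[1];
    out[2] = a[2] + b[2];
    return out;
  }
}

const result = Vec3.create();
Vec3.add(result, new Vec3(1, 2, 3), new Vec3(4, 5, 6)); // result is now [5, 7, 9]
```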
As a minor design aside, I felt pretty strongly that, as a class, the type names should begin with an uppercase letter, but that does break backwards compat since in the original library all the "namespaces" were lowercase. This can be resolved by having the library define a simple alias:
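The alias can be as small as a single assignment, sketched here with a local declaration rather than the library's actual module layout:

```typescript
class Vec3 extends Float32Array {
  constructor(x = 0, y = 0, z = 0) {
    super([x, y, z]);
  }
}

// Lowercase alias preserving the glMatrix 3.x "namespace" casing;
// in the library this would be an export, e.g. `export const vec3 = Vec3;`
const vec3 = Vec3;

const v = new vec3(4, 5, 6); // identical to new Vec3(4, 5, 6)
```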
Which then allows you to import whichever casing you need for your app, and even mix and match.
I would probably encourage migration to the uppercase variant over time, though.
Instance methods
Once we have a proper class backing our vectors and matrices, we can make many of the methods for those types instance methods, which operate explicitly on the `this` object.
Turns out that this doesn't conflict with the static methods of the same name on the same class! And it makes the syntax for common operations much easier to type and read:
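Sketched with a minimal, hypothetical `Vec3`, the static and instance methods can share a name because one lives on the constructor and the other on the prototype:

```typescript
class Vec3 extends Float32Array {
  constructor(x = 0, y = 0, z = 0) {
    super([x, y, z]);
  }

  // Static out-parameter form (glMatrix 3.x style)
  static add(out: Vec3, a: Vec3, b: Vec3): Vec3 {
    out[0] = a[0] + b[0];
    out[1] = a[1] + b[1];
    out[2] = a[2] + b[2];
    return out;
  }

  // Instance form with the same name, mutating `this` in place
  add(b: Vec3): this {
    this[0] += b[0];
    this[1] += b[1];
    this[2] += b[2];
    return this;
  }
}

const v = new Vec3(1, 2, 3);
v.add(new Vec3(4, 5, 6)); // v is now [5, 7, 9]
```

Returning `this` also lets operations chain: `v.add(a).add(b)`.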
Actually, there are two ways of going about this. One is that you implicitly make every operation on a vector apply to the vector itself, as shown above. The other is that you have each operation return a new instance of the vector with the result, leaving the operands unchanged. I feel pretty strongly that the former fits the ethos of glMatrix better by not constantly creating new objects unless it's necessary.
If you don't want to alter the values of the original object, there are still reasonably easy options that make it more explicit what you're doing and where new memory is being allocated.
And, of course you can mix and match with the older function style too, which is still handy for applying the result of two different operands to a third value or simply for migrating code piecemeal over time.
Attributes
Vectors being a real class means we can also offer a better way to access the components, because let's face it: typing `v[0]` instead of `v.x` is really annoying. Getters and setters to the rescue!
Now we can choose to reference components by either index or name:
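A sketch using a minimal `Vec2` for brevity; each named accessor simply forwards to the corresponding index of the backing storage:

```typescript
class Vec2 extends Float32Array {
  constructor(x = 0, y = 0) {
    super([x, y]);
  }

  // Named accessors forwarding to the indexed elements
  get x(): number { return this[0]; }
  set x(value: number) { this[0] = value; }
  get y(): number { return this[1]; }
  set y(value: number) { this[1] = value; }
}

const v = new Vec2(3, 4);
v.x = 5;                // writes through to v[0]
const sum = v[0] + v.y; // index and name are views of the same storage
```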
All of the method implementations will continue to look up components by index internally, both because it's a bit faster and because it allows raw arrays to be passed in as temporary vectors and matrices, which is convenient.
Swizzles!
And hey, while we're adding accessors, why not borrow one of my favorite bits of shader syntax and add swizzle operators too!
Swizzles can operate between vector sizes as well. In practice it looks like this:
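A couple of hand-written swizzles sketch the idea; the real implementation would generate the full combinatorial set programmatically, and the types here are simplified:

```typescript
class Vec3 extends Float32Array {
  constructor(x = 0, y = 0, z = 0) {
    super([x, y, z]);
  }

  // Same-size swizzle: returns a new, reordered Vec3
  get zyx(): Vec3 {
    return new Vec3(this[2], this[1], this[0]);
  }

  // Size-changing swizzle: drops down to two components
  get xy(): Float32Array {
    return new Float32Array([this[0], this[1]]);
  }
}

const v = new Vec3(1, 2, 3);
const flipped = v.zyx; // [3, 2, 1], a freshly allocated Vec3
const planar = v.xy;   // [1, 2]
```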
(These do break the "don't allocate lots of new objects" rule a bit, but as a convenience I think it's worth it.)
Operator overloading
... is what I WISH I could implement here. Got your hopes up for a second, didn't I?
But no, JavaScript still stubbornly refuses to give us access to this particular tool because some people shot themselves in the foot with C++ once I guess? Check back in another decade, maybe?
TypeScript
I'm sure this will be mildly controversial, but I'm also leaning very strongly towards implementing the next version of glMatrix in TypeScript. There's a few reasons for this, not the least of which is that I've been using it much more myself lately as part of my job, and my understanding is that it's becoming far more common across the industry. It also helps spot implementation bugs, offers better autocomplete functionality in various IDEs, and I feel like the tooling that I've seen around things like documentation generation is a bit better.
As a result, having the library implemented natively in TypeScript feels like a natural step, especially considering that it doesn't prevent use in vanilla JavaScript. We'll be building a few different variants of the distributable files regardless.
Older Browser/Runtime compatibility
While I do feel very strongly about backwards compatibility of the library, that doesn't extend to supporting outdated browsers or runtimes. As a result, while I'm not planning on doing anything to explicitly break them, I'm also not going to put any effort into supporting older browsers like IE 11 or fairly old versions of Node. I'm not sure exactly where the cutoff line will land; it'll just depend on which versions support the features that we're utilizing.
ES Modules-first approach
glMatrix has used ES Modules as its method of tying together multiple source files for a while, and I don't intend to change that. What I am curious about is how much demand there is out there for continuing to distribute other module types, such as CommonJS or AMD modules.
One thing I am fairly reluctant to continue supporting is defining all the library symbols on the global object (like `window`), since the library itself is already reasonably large in its current form and the above changes will only make it larger.
Which brings me to a less exciting topic:
Caveats
All of the above features are great, and I'm sure that they'll be a net win for pretty much everybody, but they come with a few caveats that are worth being aware of.
File sizes will be larger
The addition of all the instance methods on top of the existing static methods, not to mention the large number of swizzle operations, would result in the library growing in size by a fair amount. I don't have numbers just yet, but I'd guess that the total size of glMatrix's distributable files growing by about 1/3 to 1/2 is in the right ballpark.
Obviously one aspect of building a well performing web app is keeping the download size down, and I don't want to adversely affect that just for the sake of adding some conveniences to the library.
I also, however, expect that most developers who are truly size-conscious are already using a tree-shaking, minifying build tool that will strip away any code that they're not actively accessing.
To that end, glMatrix's priority would be to avoid doing anything that would interfere with effective tree shaking rather than to try and reduce the base library size by only implementing a bare bones feature set.
`length` is complicated
One natural attribute that you'd want on a vector alongside the `x`, `y`, `z`, and `w` attributes is a `length` that gives the length of the vector itself, something that's already computed by `vec{2|3|4}.length(v);`
Unfortunately, `length` is already an attribute of `Float32Array`, and gives (as one would expect) the number of elements in the array.
We don't want to override `length` in our vector classes, since that would give nonsensical results in contexts where the object is being used as a `Float32Array`. That means that while we can retain the static `length()` function for backwards compat, we'll need an alternative for the vector instances. I landed on `magnitude` (with a shorter alias of `mag`) as the replacement term, though I personally know I'm going to goof that one up at least once, and spend more time than I should wondering why the "length" of my vector is always 3. 😮‍💨
Vector/Matrix creation time will be slightly worse
This is a big one, as one of the most frequent criticisms leveled at glMatrix is that the creation of vectors and matrices is expensive compared to other similar libraries. This is because (for reasons that honestly escape me) creating `TypedArray` buffers or views is a slower operation in most JavaScript environments than creating a JavaScript object with the same number of elements/members.
My overall response to this has been that for many apps you'll eventually want to convert the results to a `Float32Array` anyway for use with one of the graphics APIs, and that avoiding creating a lot of temporary objects is a good practice for performance anyway, regardless of the relative cost of creating those objects. Both of those principles are still true, but that doesn't change the fact that this aspect of glMatrix is simply slower than some competing libraries.
The above proposals will not improve that situation, and in fact are likely to make it a bit worse. Having some extra logic in the extended class's constructor before passing through to the `super()` constructor will unavoidably add some overhead for the sake of providing a much nicer, more flexible syntax for developers.
If I were starting fresh I may very well take a different approach, but as I said at the top of this post, backwards compat is very important to me for a library like this, so this is an area where I'm willing to accept a relative weakness of the library, to be weighed against what I consider to be its many strengths. Your mileage may vary.
Accessors/instance methods will have overhead
As nice as it is to be able to access the object components by name, using the getter `v.x` is likely to always be a bit slower than accessing `v[0]` directly. Similarly, some of the instance methods are likely to just pass through to the equivalent static method, especially in cases that would otherwise involve a significant amount of code duplication.
While I'm not necessarily a fan of adding functionality that's known to be less efficient than it could be, in this case I think that the aesthetic/usability benefits are worthwhile. And it's worth considering that there are plenty of times where the clarity of a piece of code will be more valuable than ensuring it uses every clock cycle to its maximum potential. (I mean, let's be realistic: we're talking about JavaScript here. "Perfectly optimal" was never in the cards to begin with.)
I am happy knowing that in cases where the difference in overhead has a material impact on an application's performance, the conversion from accessors to indices, or to calling the static version of functions directly can be made painlessly and in isolation.
Preview
The code snippets in this post are all pretty simplistic, but I've put a fair amount of effort into validating this approach already, and currently have a WIP version of a potential glMatrix 4.0 available to look through in the `glmatrix-next` branch of this repo. It is definitely not in a generally usable state at this point, but looking at the `vec2.ts`, `vec3.ts`, and `mat4.ts` files should give you a good idea of how things are likely to look when everything is done.
files should give you a good idea of how things are likely to look when everything is done.Most of my efforts so far have gone into ensuring that things can work the way that I wanted, toying with file and directly structure, ensuring that the generated docs are clear and useful, and learning far more than I anticipated about TypeScript's quirks. But now that I'm satisfied that it's possible I wanted to gather some feedback from users of the library before pushing forward with the remainder of the implementation, which will probably be largely mechanical, boring, and time consuming. I'll likely take a week off work at some point to finish it up.
Thank you!
Thank you to everyone who has made use of glMatrix over the last 12 years! It's been incredible and humbling to see all the amazing work that it's been part of. And an even bigger thank you to everyone who has contributed to or helped maintain the library, even when I personally haven't had the time to do so!