oframe / ogl

Minimal WebGL Library
https://oframe.github.io/ogl/examples

glTF integration? #39

Closed · arifd closed this 4 years ago

arifd commented 4 years ago

Hello, is anyone working on this? Or can anyone point me to something I can use that works well with ogl? (great library btw! I stopped writing my own because yours is so well written!)

I have found this: https://github.com/KhronosGroup/glTF-Tutorials/blob/master/gltfTutorial/README.md

And this video at first glance, seems to go through the concepts pretty well: https://www.youtube.com/watch?v=cWo-sghCp8Y

I found this: https://github.com/wnayes/glTF-js-utils, ~~but honestly, not really sure what it's trying to do~~ okay, it's for exporting glTF.

This may really speed up the process: https://www.patreon.com/posts/28395927

gordonnl commented 4 years ago

Hey Arif,

Thanks a lot for those resources - especially that slimmed down version! Every time I sit down to go through the spec I get overwhelmed - it's a tad more involved than my own geometry format haha.

There is not currently any GLTF > OGL implementation I'm aware of. I generally use a custom JSON format, and have written a custom OBJ importer (geometry only, not yet added to the framework), but adding GLTF support is top of my todo list.

I think supporting geometry, scene hierarchy and then skinning/animation would be a great start.

In order to support materials/lighting, there would need to be an enormous amount of work (and code) added to the framework, and I'm not sure how best to tackle it...

arifd commented 4 years ago

I DID IT! :D (well, thank @sketchpunk, really!)

I have a working barebones prototype here: demo: https://arifd.github.io/gltf-ogl/ code: https://github.com/Arifd/gltf-ogl

Nathan, this may sound stupid but I have loafed around the realm of 'terrible programmer' for over a decade now. The last couple of years however, something clicked, and it started making sense to me. Now I actually want to become 'ok' at it!

I have a bit of free time at the moment. If you want, I have the drive to develop this into something highly usable for you, but I'm gonna need a lot of guidance because as amateur as I am at programming, I'm even more so in the world of 3D.

gordonnl commented 4 years ago

Nice one man! This is a great start!

That's great that things are making sense to you, the web is definitely a confusing platform to develop for. On that note, I'm always trying to simplify the code I write to try and maximise its readability - I sincerely hope that one can get the grasp of how the classes operate without too much confusion. WebGL/OpenGL gets pretty complex though, so I think it's unavoidable at some point.

Moving forward with this GLTFLoader module, I think a great start would be to support just the meshes array (what I'd call 'geometry'). This avoids us dealing with materials/cameras/animation/etc for the moment.

I'm not 100% on the API yet without testing, but I think it should work so that a user can use it like: const gltf = await GLTFLoader.load('path/to/src.gltf'); And that would return an object holding the different content. At the moment that would just be a gltf.geometries array, made up of objects with a name and an OGL Geometry attached.
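Just to sketch the intent (none of this exists yet, and the final shape may well change):

```js
// Hypothetical usage of the proposed loader - assumes gl, program and scene
// already exist, and Mesh is the regular OGL Mesh class.
const gltf = await GLTFLoader.load('path/to/src.gltf');

gltf.geometries.forEach(({ name, geometry }) => {
    // geometry would be a ready-to-use OGL Geometry
    const mesh = new Mesh(gl, { geometry, program });
    mesh.setParent(scene);
});
```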

We should also look into supporting the .glb format (a binary wrapper for the .gltf and .bin files, along with textures).

I also think the Threejs GLTF loader will be a great reference moving forward too.

gordonnl commented 4 years ago

Here is a great resource for testing the implementation https://github.com/KhronosGroup/glTF-Sample-Models. I don't think we should support GLTF v1.0, just v2.0.

arifd commented 4 years ago

Thanks Nathan, I'm on it. I think I'm close-ish to having something pretty basic already :)

Right now, I'm just building it to load one mesh in one scene. Scratching my head a bit on the parsing because the spec is designed in a way that jumps around the json file a ton, but once I've got my head around it, I'll post it here.

arifd commented 4 years ago

Btw, what's the best way to share code between us?

gordonnl commented 4 years ago

I haven't tested, but reading here, it looks like when you make a Pull Request, if you can make sure to check the Allow edits from maintainers checkbox, I should be able to commit to your branch.

arifd commented 4 years ago

Okay, I need to read up on Typed Arrays, Blobs (I think), DataView and the JS binary APIs so I can actually pull in the binary data.

(links for myself...) https://www.youtube.com/watch?v=4ba0G8FQt5M https://www.youtube.com/watch?v=UYkJaW3pmj0

Will begin that quest tomorrow.

gordonnl commented 4 years ago

For the binary buffer data stuff, it looks like the example you made handles it already in the parseAccessor() method.

It handles non-interleaved and interleaved buffers (by converting them to non-interleaved), however it doesn't handle sparse accessors.

I'm wondering whether it would be worthwhile supporting interleaved buffers as well... At the moment, OGL expects a separate buffer for each attribute. Then we wouldn't need to use DataView, just convert the binary buffer to a TypedArray.
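For a tightly packed (non-interleaved) accessor that could look something like this (sketch only; the helper name and lookup tables are just for illustration, and sparse accessors are ignored):

```js
// Map a glTF accessor onto a TypedArray view over the downloaded bin ArrayBuffer.
// Nothing is copied - the view shares memory with `bin`.
const TYPE_SIZE = { SCALAR: 1, VEC2: 2, VEC3: 3, VEC4: 4, MAT2: 4, MAT3: 9, MAT4: 16 };
const COMPONENT_ARRAY = {
    5120: Int8Array,
    5121: Uint8Array,
    5122: Int16Array,
    5123: Uint16Array,
    5125: Uint32Array,
    5126: Float32Array,
};

function accessorToTypedArray(bin, accessor, bufferView) {
    const ArrayType = COMPONENT_ARRAY[accessor.componentType];
    const byteOffset = (bufferView.byteOffset || 0) + (accessor.byteOffset || 0);
    const length = accessor.count * TYPE_SIZE[accessor.type];
    return new ArrayType(bin, byteOffset, length);
}
```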

sketchpunk commented 4 years ago

Hello !! Been seeing this convo coming into my email all weekend but I haven't had time to sit down and join the fun. So I can probably just do a big data dump.

The lib you're using seems to be an older version; if you want to use the latest and greatest, here are the links below. Currently I treat gltf in 3 parts. First is just Gltf.js, which contains the bare bones code needed to pull out meshes, skeletons and animations in some basic application-independent format. Second, I create a util class that uses that data to convert it into something that can be used by the engine. The first link is the Fungi util; I create raw buffers and VAOs personally. Then I have a ThreeJs one that uses their buffer, geometry and mesh classes to generate something that renders in that system. So that's a possible way for you guys to implement GLTF import for ogl. The 3rd part is just a GLTF exporter. That one has the least amount of polish since it's kind of new and I haven't rewritten it for fungi v5 yet. But it's a good example of how to build a binary + json file in js then download it.

Main Lib: https://github.com/sketchpunk/temp/blob/master/Fungi_v5/fungi/lib/Gltf.js
Fungi Use of Lib: https://github.com/sketchpunk/temp/blob/master/Fungi_v5/fungi/lib/GltfUtil.js
ThreeJs Use of Lib: https://github.com/sketchpunk/temp/blob/master/Fungi_v5/fungi.3js/lib/GltfUtil.js
Exporter: https://github.com/sketchpunk/Fungi/blob/master/fungi.misc/GltfExport.js

You guys mentioned that in my original lib I create TypedArrays from the bin file. If you're using WebGL 2.0, you can actually skip that step. In my gltf.get_mesh function I have an argument called spec_only. This returns a struct of information about the buffers: for vertices, the starting byte index in the bin file, the byte length, component length, what type of data, etc. With that info plus just one DataView wrapper around the bin ArrayBuffer, you can load the data right into the GPU straight from the bin file.

https://github.com/sketchpunk/temp/blob/master/Fungi_v5/fungi/core/Mesh.js#L111
https://github.com/sketchpunk/temp/blob/master/Fungi_v5/fungi/core/Buf.js#L60
https://github.com/sketchpunk/temp/blob/master/Fungi_v5/fungi/core/Buf.js#L86
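Rough idea in code (names made up, untested, but this is basically what those Buf.js lines do):

```js
// WebGL2's bufferData overload takes a srcOffset + length, so one view over the
// whole bin ArrayBuffer is enough to upload any attribute straight to the GPU.
// `spec` stands in for the spec_only info: byte offset into the bin, byte length, etc.
function uploadFromBin(gl, bin, spec) {
    const view = new DataView(bin);  // elements of a DataView are single bytes,
                                     // so the offset/length below are byte counts
    const buf = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buf);
    gl.bufferData(gl.ARRAY_BUFFER, view, gl.STATIC_DRAW, spec.byteOffset, spec.byteLength);
    return buf;
}
```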

On a side note, when it comes to typed arrays: all you need to know is that TypedArrays are just specialized versions of DataView. TypedArrays and DataViews are wrappers around ArrayBuffers. You can look at an ArrayBuffer as a fixed-length byte array. An ArrayBuffer can't really be read or written to directly. You can wrap it with a DataView that lets you read/write it as any type, be it float, uint, etc. To make life super easy, you can use a TypedArray to wrap the ArrayBuffer so using it is like any other array. Things to keep in mind: when I create a typed array from non-interleaved data in the bin file, it wraps around the bin file's buffer. For example, if I were to create a Float32Array and a DataView over the same buffer, and I use the DataView to modify the data where the Float32Array points to, the float array will also reflect the changed data. TypedArrays are "slices" or "pointers" to an ArrayBuffer. Heck, if you do a test for Float32Array.buffer === DataView.buffer, JS will say true; both objects point to the same byte array. That's important to know if you want to work on an exporter :)
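Quick standalone demo of that shared-buffer point:

```js
const ab     = new ArrayBuffer(8);
const floats = new Float32Array(ab); // view: 2 floats
const dv     = new DataView(ab);     // view: raw byte access

dv.setFloat32(0, 3.14, true);        // write through the DataView (little-endian)
console.log(floats[0]);              // ~3.14 on little-endian platforms (i.e. browsers)
console.log(floats.buffer === dv.buffer); // true - same underlying byte array
```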

There was mention of glb imports. So far I never bothered because there are 2 important benefits to using json+bin. First, by having the json file, you can manually change things (I often modify the names of bones), plus you can extend the format. GLTF is very... um... not perfect and designed for just one use case in my opinion. So recently I started putting extra bits into the file. One such thing is to store poses. https://github.com/sketchpunk/temp/blob/master/Fungi_v5/files/models/vegeta.gltf#L1181

You can throw extra root nodes into the file without any worries about breaking it for other importers. I'm doing a lot of IK stuff, so I need a T-pose, so I extend the json file to include that data manually. The only other solution is to create a single-frame animation to hold it, but then you're stuck loading animation data and parsing it out into something usable for a pose, which isn't ideal. Exporting meshes with animations from Blender also comes with some headaches; hopefully they've fixed some of the bug reports I submitted :/

The other good thing about json+bin is that you can throw away the json file and just keep the bin file. Why? Well, I mentioned before I have an option for spec_only. I can create a new text file, store that sort of information in it, and use that to read my meshes out of the bin file. You might ask, why would anyone do that? Well, GLTF is meant as an open source replacement for FBX, a way to transfer 3D data from one system to another. That has a lot of benefits EXCEPT security. If I wanted to protect my meshes from being stolen, or worse yet, I buy meshes from people and use GLTF to import them into my web game, it would be super easy for people to steal that model and use it any way they want; it's an open format. The owner of that model would not be very happy with you if their stuff got pirated all over the web. So by creating your own way to store the definitions of the buffers, but using the bin file as the raw data to pull from, you make it much harder to pirate. It's not foolproof by any means, but now someone has to put in the extra effort to create their own importer or converter to turn your custom data into something that can at least be used in Blender so they can export it back out as a gltf file.

So bla bla bla. Enough dump.

I've spent a great deal of time parsing data out of gltf, so if you need any help or have questions I'll gladly lend a hand.

gordonnl commented 4 years ago

Hey mate, thanks so much for your input! Those resources are going to be super handy.

Loading the bin straight to the GPU sounds like the ultimate goal to me - I'll definitely aim for that.

You make some solid cases for gltf+bin vs glb, thanks. Personally I love finer-grain control over my meshes, but I know there usually comes a point where there is a hand-off to 3D folk that just want to drag/drop to test and iterate, and glb seems better suited there (especially for textures). For now I'll be more than happy with just the core gltf support though!

Thanks again!

sketchpunk commented 4 years ago

Np. If you need help with glb, I don't mind helping. Would be nice extra bits to add to Gltf.js.

sketchpunk commented 4 years ago

https://github.com/KhronosGroup/glTF/tree/master/specification/2.0#glb-file-format-specification Looking at the spec, it looks pretty straightforward. The first 12 bytes are the header: magic, version and total length.

Next comes the first chunk's header (length + type), which describes how to parse out the json. Use the length value from that header to get the number of bytes, then create a Uint8Array over that part of the array buffer and use TextDecoder in JavaScript to convert the UTF-8 byte array into a string. So from there, you've got the gltf file portion of things.

You can use the json byte length to get the start of the bin chunk. At this point, not sure what to do. It sounds like the json does not use any extra byte offsets for the buffers, so to make sure things line up correctly, the idea is to create a new ArrayBuffer based on a slice of the glb binary containing just the chunk related to the "BIN" data. If we do that, we'll have what you would normally have with json+bin.

From there, just use Gltf.js like glb never existed.
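Something like this, maybe (untested sketch straight from reading the spec; ignores chunk padding and any extra chunks):

```js
function parseGLB(arrayBuffer) {
    const header = new DataView(arrayBuffer, 0, 12);
    if (header.getUint32(0, true) !== 0x46546c67) throw new Error('Not a GLB file'); // "glTF"
    const version = header.getUint32(4, true); // expect 2

    // Chunk 0: JSON. Every chunk starts with an 8 byte header (length + type).
    const jsonLength = new DataView(arrayBuffer, 12, 8).getUint32(0, true);
    const jsonText = new TextDecoder().decode(new Uint8Array(arrayBuffer, 20, jsonLength));
    const json = JSON.parse(jsonText);

    // Chunk 1: BIN. Slice it out so downstream code can treat it exactly
    // like a separate .bin file.
    const binHeaderStart = 20 + jsonLength;
    const binLength = new DataView(arrayBuffer, binHeaderStart, 8).getUint32(0, true);
    const bin = arrayBuffer.slice(binHeaderStart + 8, binHeaderStart + 8 + binLength);

    return { version, json, bin };
}
```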

arifd commented 4 years ago

Hey Guys,

I examined and refactored Vor's (sketchpunk) code to fit in with OGL, as per Nathan's suggestions.

demo: https://arifd.github.io/gltf-ogl/ code: https://github.com/Arifd/gltf-ogl/blob/master/GLTFLoader.js [edit: ~~aww, it messed up all my nice whitespace formatting! :(~~ fixed by converting tabs to spaces]

Honestly the naming might be a little bit off, since I'm not super clear on OpenGL terminology, i.e. what exactly is a primitive, what are nodes and children, is getMesh really the correct name for the function, etc. And as Nathan pointed out, he's not exactly clear on the API just yet. For now, GLTFLoader.load is just returning a single OGL Geometry object.

Please have a look and tell me where it needs to go, and I'll gladly continue to work on it.

sketchpunk commented 4 years ago

The section around "Locate the bin file" should be rewritten. You don't want to create a blank ArrayBuffer; when fetch gets the file, it creates one for the results of the download, so you end up allocating memory that won't be used. Plus, you are not handling some of the errors, just outputting a console message. Plus you want to check first whether something is undefined before using it.

```js
let buf = gltf.buffers[0];

if (!buf.uri || buf.uri.includes("data:")) {
    console.error("GLTFLoader: Currently only supports separate gltf and bin files");
    return null;
}

let bin = await this.loadArrayBuffer(buf.uri);
```

Nodes are usually just chunks of data that have some sort of relationship. In the context of GLTF, they can be meshes, bones, lights, etc., and the relationship is parent-child.

In GLTF, a Primitive is a collection of buffers (verts, normals, UVs, joints, etc.). Ideally you can think of it as just a single mesh if you were using Blender. A Mesh in GLTF is a collection of primitives that normally share the same model space. There are many reasons why a model is broken down into sub-models; for example, a car. A car mesh will contain 2 primitives, Body and Windows. The main reason for the separation is that you need to render each one with a different shader, but the two parts share the same origin point. Depending on the system you're dealing with, you need to take these two geometries and keep them together in some way. One way is to say Car is the parent mesh, and you set the Windows as a child mesh of it. In THREE.JS you have Groups, so in that case you can create a group then set car and windows as children of the group. So when you move the group, you move both meshes as if they were one model.

So that's why getMesh returns an array: a Mesh might be subdivided into mini meshes. Each one might also have a name; if not, it's best to give it some default name with an incrementing value.
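In the json, that car would look roughly like this (accessor/material indices made up):

```js
const desc = {
    meshes: [{
        name: 'Car',
        primitives: [
            { attributes: { POSITION: 0, NORMAL: 1 }, indices: 2, material: 0 }, // Body
            { attributes: { POSITION: 3, NORMAL: 4 }, indices: 5, material: 1 }, // Windows
        ],
    }],
    // ...plus accessors, bufferViews, buffers, materials, nodes, scenes, etc.
};
```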

Oh, last thing. I see you're checking for embedded buffers; you can decode one into an ArrayBuffer if you look at the more recent Gltf.js file I have. Here's a link directly to the function. https://github.com/sketchpunk/temp/blob/master/Fungi_v5/fungi/lib/Gltf.js#L476

gordonnl commented 4 years ago

Making progress! Thanks for your work @Arifd, and thanks for the super fast feedback @sketchpunk ;).

Meanwhile, I've got something working for interleaved buffers, gonna test it with this sample (the only interleaved sample I could find in the list, funnily enough).

I think loading one geometry was a great step 1, I think the next step is to load all of the meshes in the JSON, including each of their primitives. So basically calling GLTFLoader.load(...) could return something like:

```
{
  meshes: [
    [
      {
        geometry: [Geometry instance],
        mode: 4, // gl_triangles
        ... material etc eventually
      },
      ... more primitives per mesh
    ],
    ... more meshes
  ]
}
```

I'll be honest, I don't really get the point of multiple primitives in a single mesh (why not just create more sibling meshes?), and in this whole list I've only so far been able to find one example with multiple primitives (morph primitives), which doesn't really help to explain the benefit to me. Basically, what the GLTF spec calls a primitive (one draw call with geometry and a material) is what OGL refers to as a Mesh. However, I still think the output of the loader should reflect the JSON format for clarity's sake, which results in that nested array output for meshes.

Personally, I'd probably start by parsing the buffers array in the JSON description, and loading all of the binaries (in both embedded and separate .bin file formats), keeping them in the same order, so that once they're all loaded we can just parse the meshes referencing the loaded buffers, knowing there are no more async hurdles to cross.
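Roughly along these lines (just a sketch, the helper name is illustrative):

```js
// Load every entry of the JSON's buffers array up front, preserving order so that
// buffers[i] always lines up with desc.buffers[i] for later bufferView lookups.
async function loadBuffers(desc, dir = '') {
    return Promise.all(
        desc.buffers.map(async ({ uri }) => {
            // Handles both embedded base64 data URIs and separate .bin files.
            const res = await fetch(uri.startsWith('data:') ? uri : dir + uri);
            return res.arrayBuffer();
        })
    );
}
```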

sketchpunk commented 4 years ago

I know the whole primitives/mesh thing is a bit wonky, but it kinda makes sense depending on the system you're using and how far you've optimized your rendering.

In Fungi, I use ECS to manage the data. An entity has a Draw component that can hold an array of "meshes" that can be rendered. Then it has a Node component that handles the transform. So if I were to use a car mesh (the body, windows, wheels, etc., however the mesh is broken down for shader reasons), in my system I have a single transform managing like 3+ meshes. I just have a single ModelMatrix I need to pass to the GPU, but I do it via a UBO and not a shader uniform, so I can have just one ModelMatrix GPU upload then run like dozens of shaders to render that one collection of meshes. Things are optimized to have no extra transform math and as few GPU calls as possible.

If I were to do it in Three.JS, I'd have to treat each piece like sibling meshes, each with their own transform. Now I have 3 transforms, plus a group transform, plus I now have to handle parent-child transform hierarchy math to compute the world-space matrix for each of the meshes. Now I have to do at least 3 GPU calls to push 3 ModelViews for rendering each geometry.

Once you start working with a lot of meshes that are broken down into pieces, you'd start to notice how wasteful a normal approach is. There really isn't a good reason to treat each primitive as a separate transform when you start looking to trim the fat.

I started that approach with Fungi V4, so it took me a while to get there :)

gordonnl commented 4 years ago

Right, I get you thanks, so it's for optimising uniform sharing and transform calculations. For UBOs, I currently still feel the need to support WebGL1 (for Safari) and I haven't been able to come up with a smooth fallback. Everything else falls back pretty nicely (apart from standard derivatives, which need both GLSL1 and 3 version shaders).

Looking into the spec some more, I'm thinking that moving forward with the approach I described, after loading the binaries, we could iterate over the BufferViews and generate a GL Buffer for each. Then when we add our OGL geometry attribs, we could reference the GL Buffer, rather than passing in new data arrays for each, along with the stride and offset values for shared buffers (I get the feeling that's what the spec intended).
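In code, roughly (untested; binaries stands for the array of ArrayBuffers loaded in the previous step, and the lookup table is just for illustration):

```js
// One GL buffer per bufferView, uploaded once...
const glBuffers = desc.bufferViews.map(({ buffer, byteOffset = 0, byteLength, target = 34962 }) => {
    const glBuffer = gl.createBuffer();
    gl.bindBuffer(target, glBuffer); // 34962 = ARRAY_BUFFER, 34963 = ELEMENT_ARRAY_BUFFER
    gl.bufferData(target, new Uint8Array(binaries[buffer], byteOffset, byteLength), gl.STATIC_DRAW);
    return glBuffer;
});

// ...then each attribute just points into its shared buffer with a stride/offset,
// instead of being handed its own data array.
const numComponents = { SCALAR: 1, VEC2: 2, VEC3: 3, VEC4: 4 };
function pointAttribute(location, accessor) {
    const bufferView = desc.bufferViews[accessor.bufferView];
    gl.bindBuffer(gl.ARRAY_BUFFER, glBuffers[accessor.bufferView]);
    gl.vertexAttribPointer(
        location,
        numComponents[accessor.type],
        accessor.componentType,        // e.g. 5126 = gl.FLOAT
        accessor.normalized || false,
        bufferView.byteStride || 0,    // shared/interleaved buffers carry a stride
        accessor.byteOffset || 0
    );
    gl.enableVertexAttribArray(location);
}
```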

arifd commented 4 years ago

@sketchpunk,

> The section around "Locate the bin file" should be rewritten. You don't want to create a blank ArrayBuffer; when fetch gets the file, it creates one for the results of the download, so you end up allocating memory that won't be used. Plus, you are not handling some of the errors, just outputting a console message. Plus you want to check first whether something is undefined before using it.
>
> ```js
> let buf = gltf.buffers[0];
>
> if (!buf.uri || buf.uri.includes("data:")) {
>     console.error("GLTFLoader: Currently only supports separate gltf and bin files");
>     return null;
> }
>
> let bin = await this.loadArrayBuffer(buf.uri);
> ```

Agreed and changed.

> Nodes are usually just chunks of data that have some sort of relationship. In the context of GLTF, they can be meshes, bones, lights, etc., and the relationship is parent-child.
>
> In GLTF, a Primitive is a collection of buffers (verts, normals, UVs, joints, etc.). Ideally you can think of it as just a single mesh if you were using Blender. A Mesh in GLTF is a collection of primitives that normally share the same model space. There are many reasons why a model is broken down into sub-models; for example, a car. A car mesh will contain 2 primitives, Body and Windows. The main reason for the separation is that you need to render each one with a different shader, but the two parts share the same origin point. Depending on the system you're dealing with, you need to take these two geometries and keep them together in some way. One way is to say Car is the parent mesh, and you set the Windows as a child mesh of it. In THREE.JS you have Groups, so in that case you can create a group then set car and windows as children of the group. So when you move the group, you move both meshes as if they were one model.
>
> So that's why getMesh returns an array: a Mesh might be subdivided into mini meshes. Each one might also have a name; if not, it's best to give it some default name with an incrementing value.

I think I got it. So Mesh: Car has Primitives: Body and Windows. But then later you say a Mesh might be subdivided into mini meshes. Here you mean 1 mesh may contain n primitives (just like car > body, windows), right?

> Oh, last thing. I see you're checking for embedded buffers; you can decode one into an ArrayBuffer if you look at the more recent Gltf.js file I have. Here's a link directly to the function. https://github.com/sketchpunk/temp/blob/master/Fungi_v5/fungi/lib/Gltf.js#L476

Thank you! I will add that feature later.

@gordonnl,

> I think loading one geometry was a great step 1, I think the next step is to load all of the meshes in the JSON, including each of their primitives. So basically calling GLTFLoader.load(...) could return something like:

Done! :) And cleaned up some bits, and fixed the whitespace issue (converting tabs to spaces)

demo: https://arifd.github.io/gltf-ogl/ code: https://github.com/Arifd/gltf-ogl/blob/master/GLTFLoader.js

I've restructured the code to do this. But it now seems awfully verbose, if I understand correctly. So you want GLTFLoader.load to return an object with meshes, which is an array of arrays of primitives, each one containing an OGL Geometry object. As such, to load a single object into a mesh, here is the code:

```js
const whatever = await GLTFLoader.load(gl, 'whatever.gltf');
let something = new Mesh(gl, { geometry: whatever.meshes[0][0].geometry, program });
```

Additionally, there's also a lot of looping going on now between .load and .getMesh, but I guess that will be optimized away once we know better how we want to interact with the gltf format, which I'm guessing the following two quotes are dealing with:

> Personally, I'd probably start by parsing the buffers array in the JSON description, and loading all of the binaries (in both embedded and separate .bin file formats), keeping them in the same order, so that once they're all loaded we can just parse the meshes referencing the loaded buffers, knowing there are no more async hurdles to cross.

> Looking into the spec some more, I'm thinking that moving forward with the approach I described, after loading the binaries, we could iterate over the BufferViews and generate a GL Buffer for each. Then when we add our OGL geometry attribs, we could reference the GL Buffer, rather than passing in new data arrays for each, along with the stride and offset values for shared buffers (I get the feeling that's what the spec intended).

These two I don't really understand, but that's okay. I'm gonna be of no use designing the API as I'm just not well versed enough in OpenGL to contribute meaningfully in that sense, but I'll do the hard boring work of hammering down some code, if you let me know in caveman language what you need.

sketchpunk commented 4 years ago

I would advise against loading up all the BufferViews right into GL buffers off the bat. There is other data stored in the binary that isn't suited for GL buffers, like animations and bind poses, that I know of so far.

The spec expects you to treat the file as a scene loader. So it's expected that you traverse the nodes; whichever ones have a mesh index, you load that mesh, then use the node's transform values to place it where it needs to be in the world. Nodes assigned to bones need to be treated differently, likewise light nodes, camera nodes, etc. This is kind of how most loaders I've looked at work.

There is a problem with that: if your use case is asset loading, loading everything can be an issue. I have a GLTF file in which I store 20+ parts to be used by a dungeon generator. In my test page, I load in only 1 wall and 1 floor to create a basic room. If I was loading everything, I would have wasted time, processing and memory on 18+ objects I didn't need.

Ideally you want to support Scene or Asset Loading, so a developer can do what they need and not be pinned into just one way of using gltf. This is the reason why my library exists.

I suggest looking at the root nodes. Start with Mesh. Make sure you have a good function that lets you pull out all the vertex attributes and create whatever object you use to hold the geometry. Keep in mind, meshes are NOT transform objects. When you load a "mesh", it can be reused in multiple instances with different transforms; for example there might be 1 tree mesh, but in nodes it's referenced 10 times, each with a different position, rotation or scale.

Once you have a good "get mesh", then make one called "get all meshes", so a developer can just load all the meshes and use them as they want. Like my dungeon crawler: maybe I want to load all the parts BUT I don't want them to render yet; I will choose when to make instances and transforms for them.

Then you can start building a function called load_scene. From there, you can traverse the nodes, and when you hit a mesh you check if you loaded it already; if so, create a transform instance of it, else load it up then make the instance.
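Rough shape of it (getMesh and createInstance are stand-ins for whatever the framework ends up calling them):

```js
function loadScene(gl, desc, bin) {
    const meshCache = new Map(); // mesh index -> parsed primitives, shared between nodes
    const instances = [];

    (desc.nodes || []).forEach((node) => {
        if (node.mesh === undefined) return; // bones / cameras / lights handled elsewhere

        // Only parse each mesh once...
        if (!meshCache.has(node.mesh)) meshCache.set(node.mesh, getMesh(gl, desc, bin, node.mesh));

        // ...but create one transform instance per node that references it.
        instances.push(createInstance(meshCache.get(node.mesh), {
            position: node.translation || [0, 0, 0],
            rotation: node.rotation || [0, 0, 0, 1], // quaternion
            scale: node.scale || [1, 1, 1],
        }));
    });

    return instances;
}
```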

Hopefully you get what I'm getting at. Build an API that has a lot of fine-grain control, then build functions on top of it to abstract away the boilerplate stuff. It's like class inheritance, but in a functional way.

@Arifd Yes, I mean one mesh with n primitives. Here's the thing: Meshes, Geometry, Primitives, Renderables, etc. all have the same basic idea, something 3D to render. They are just used in various ways. In GLTF, you have a Mesh that is made up of n Primitives. In Three.js, a Mesh is a transform instance of a Geometry. For my Fungi, a Mesh is just a wrapper for a VAO, but I have a Draw object that can contain n Meshes. So yeah... it all gets very confusing the more stuff you use. As long as you remember they are just the same idea: sometimes it just means one thing to draw, or more than one word is used to describe the relationship between an item to draw and a collection of those items.

Also keep in mind that in the raw APIs, be it WebGL, OpenGL, Vulkan, etc., there is no concept of meshes, geometry, etc. All that exists are byte arrays. You can then group these byte arrays to define a way to render triangles. Engine developers take these building blocks and try to group them while giving them names, like meshes, particles, textures, etc.

gordonnl commented 4 years ago

Ok! Thanks again for your work and feedback. I've had some time to grab what you've done and merge it with my thoughts and some nice bin loading code from the Threejs loader. I've just pushed a commit here e4902b7 if you want to check it out.

My approach is (hopefully) parallel to the goal of GLTF, in making the parsing as seamless as possible - pushing the data straight to the GPU where possible. What's also great is the process allowed me to make the Geometry class more solid.

You're absolutely right for the non-attribute data bufferViews, thanks for the catch - there needs to be a check to avoid creating gl buffers of these. For the attribute bufferViews though, I still feel pushing them to the GPU and sharing the reference is the approach intended by the GLTF folk.

Thanks a lot for your insight into asset loading, I definitely understand where you're coming from, and it makes a lot of sense. For simplicity's sake, I think that by default it would be expected for the entire file to be loaded though. Loading only specific parts would be really cool, but it would be a small minority of use-cases.

arifd commented 4 years ago

Whoa, did you build that from scratch since I opened this issue? Or is this code you have had in your back pocket for a while?? It works so differently from mine and Vor's versions and honestly, I don't really understand it, but congratulations!

Is there anything else in OGL you would like me to put some attention into?

gordonnl commented 4 years ago

Yeah, yeah I just started it the other night. I think it looks like more code than it is because of how I've destructured the different elements to mimic the GLTF spec. Hm that's not good if you can't understand it, which parts do you get lost at? I can definitely add some more comments.

At the moment it's just geometry and nodes, so this is just the beginning :S. Have a look at the Threejs GLTFLoader to see a fully-fledged example haha.

Thanks for your offer for more help! Obviously GLTF still needs a lot of work, but other than that the other things I can think of are fairly complex too: point-light shadow maps, transform feedback, Basis compressed textures, barycentric hit tests, an example of multiple post effects (bloom, dof, etc). I'm sure there's other stuff, I just can't think of them right now.

arifd commented 4 years ago

Well, it wasn't so much that your code is impossible to understand, just that it took quite some work, but that's because the gltf json file is just so big, and there's so much jumping around. Your coding style is so clean, I love it. That's actually why I was attracted to this library in the first place!

But you're also using a cool little trick a lot that has me scratching my head... What you seem to be doing is wrapping the argument that gets passed through in a .forEach in an object (i.e. `desc.meshes.forEach(({primitives}) => {});`). I see what it does, it sort of unpacks the object within so you don't have to write this: `desc.meshes.forEach(e => console.log(e.primitives))`. But what's going on here: `desc.accessors.forEach(({bufferView: i, componentType}) => {});`? It looks like you are creating a custom object to be passed as the argument that pulls out the variables from the objects in the array, i.e. a shorthand for doing this: `desc.accessors.forEach(e => console.log(e.bufferView, e.componentType));`

Then it just gets way silly right after that, with every function incorporating that in either .map or .forEach method with a ton of variables/arguments. It's why yesterday when I started looking at it, I was just like, 'ugh I can't understand it'

But now I studied it, it's great! So clean! It's like fine art, only when you study it do you really get to appreciate it.

gordonnl commented 4 years ago

haha right I get you. Those are some handy es6 features that reduce the amount of code needed by quite a bit. I totally understand how they're confusing the first time you see them though.

Super quick explanation:
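Destructuring in the parameter position pulls properties straight out of the argument, and `bufferView: i` additionally renames that property to `i` inside the callback:

```js
const accessors = [{ bufferView: 2, componentType: 5126 }];

// Plain parameter: read properties off the object.
accessors.forEach(e => console.log(e.bufferView, e.componentType));                           // 2 5126

// Destructured parameter: same thing, the properties are pulled out for you.
accessors.forEach(({ bufferView, componentType }) => console.log(bufferView, componentType)); // 2 5126

// Destructure + rename: the bufferView property is available as `i`.
accessors.forEach(({ bufferView: i, componentType }) => console.log(i, componentType));       // 2 5126
```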

arifd commented 4 years ago

> Thanks for your offer for more help! Obviously GLTF still needs a lot of work, but other than that the other things I can think of are fairly complex too: point-light shadow maps, transform feedback, Basis compressed textures, barycentric hit tests, an example of multiple post effects (bloom, dof, etc). I'm sure there's other stuff, I just can't think of them right now.

I'm not afraid of the more complex stuff, it will just take longer, but I understand if you just want to (or maybe it's just easier if you) figure it out yourself, code it and 'bam'! done. I wanna go pro at coding and I literally have zero buddies in the real world who are into this, so I'm trying to make myself valuable to people online, especially when I think I can learn a lot from the process or from the people I'm helping. (Vor, I'm not @-including you, so as to hopefully keep your inbox at peace, but if you're still reading this, that includes you!!)

Maybe you know some people that are looking for help, and you can send me to them? (I can do a bit of C++ too)

In any case, thanks for the object destructuring tip. I think this issue is overdue closure, right? :)

gordonnl commented 4 years ago

Haha yeah I can close it. Good luck mate!