Open rmn20 opened 3 years ago
Not to sound cynical, but ...
It too consists of nodes and stores materials, textures, skeletal animation, and meshes as a scene graph.
On this level, this description applies to "nearly all" file formats, or at least to many of them. I won't claim to have deep knowledge about each of the 3D file formats listed at Wikipedia, but I know that it could be tremendously hard to compare them in any way, or even "objectively": the application cases for each file format are vastly different.
Important dimensions along which a classification could occur:
glTF has been designed with certain relatively clear goals, and these goals had been derived from the requirements of many current applications. On the highest level, one could phrase the overall goal as "bring 3D into the web".
Subjective, anecdotal interlude:
Sure, there have been other attempts to bring 3D to the web. And I cannot write this response here without referring to xkcd: Standards .... And when you refer to J2ME and Java 3D Graphics, I also have to point to Java 3D: Yes, it was once possible to let a complete Java application with efficient, OpenGL-accelerated graphics run in an Applet in the web browser. It's a pity that this never gained traction. Instead of recognizing the role of the Java Virtual Machine as the basis for "Write once, run everywhere", people went on and used the web browser as a "virtual machine" (where the 'instruction set' is JavaScript - ouch...).
Given the requirements today, there are many ways in which one could compare M3G and glTF (see the bullet points above). I'd have to take a closer look at the M3G specification to do a more elaborate comparison. So with the strong disclaimer that these are only a few points that may be relevant, based on a quick glance:
M3G stores its geometry in a `VertexBuffer` and a `TriangleStripArray`, but it's not entirely clear to me how positions, texture coordinates, normals, tangents, or bitangents are actually encoded. In any case, they appear not to be encoded in a form that is directly renderable: it seems the data still has to be transformed before it can be sent to OpenGL. In contrast to that, glTF stores the data exactly as it will be sent to the graphics API.

Many of glTF's strengths are related to its extensibility (i.e. the extensions mechanism). New rendering features like light sources or material models can be added, clearly specified, and implemented, and when they gain traction, they can become part of the core specification.
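To illustrate the "directly renderable" point, here is a minimal sketch of how a glTF loader pulls vertex data out of a buffer. The accessor/bufferView field names follow the glTF 2.0 schema, but the asset itself is a made-up toy example:

```python
import json
import struct

# Toy glTF-style asset: one accessor describing three float32 VEC3
# positions inside a bufferView (componentType 5126 is FLOAT in glTF).
gltf = json.loads("""
{
  "accessors":   [ { "bufferView": 0, "componentType": 5126,
                     "count": 3, "type": "VEC3" } ],
  "bufferViews": [ { "buffer": 0, "byteOffset": 0, "byteLength": 36 } ]
}
""")

# The binary buffer: three float32 positions, already laid out the way
# the GPU expects them.
buffer = struct.pack("<9f", 0, 0, 0, 1, 0, 0, 0, 1, 0)

# A loader only has to slice the view out of the buffer; the resulting
# bytes can be handed to the graphics API (e.g. glBufferData) without
# any re-encoding.
view = gltf["bufferViews"][gltf["accessors"][0]["bufferView"]]
vertex_bytes = buffer[view["byteOffset"]:view["byteOffset"] + view["byteLength"]]
print(len(vertex_bytes))  # 36
```

The point of the design: the JSON only *describes* the binary payload, so the expensive part of loading is a slice, not a parse.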
But... enough of that advertising for glTF :-)
You mentioned one point that I personally find particularly important, and where I fully agree:
you can store any node in a standalone file and reference it. This can be useful for storing materials as standalone files. I've seen games that stored light presets in a standalone file
This is a feature that has already been discussed in the early days of glTF, and I really think that this should be addressed and added as soon as possible. Having a clean, well-engineered concept of composability for glTF assets may be the key for extending its use-cases and at the same time keeping its core nature of being a compact and efficient delivery format for 3D assets.
About geometry data: `VertexArray` and `VertexBuffer` are just bindings to OpenGL VAOs and VBOs. While tangents and bitangents weren't present yet in OpenGL ES 1.1, positions, UVs, normals, and colors can be stored in two different ways, and in the first case they can be loaded directly into OpenGL. Materials and light sources are quite limited, but they still provide almost full access to OpenGL ES 1.1 functionality. Remember that shaders weren't present yet in this OpenGL version; that's why the features are so limited.
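For context on the "two different ways": if I read the JSR-184 spec correctly, M3G positions can be stored as small integers together with a per-buffer scale and bias (`VertexBuffer.setPositions(array, scale, bias)`). A rough sketch of the dequantization, with made-up values:

```python
# Sketch of M3G-style position dequantization: components stored as
# small integers plus a per-buffer scale and bias (values are made up).
# Under fixed-function OpenGL ES 1.1 the scale/bias can also simply be
# folded into the modelview matrix instead of touching the vertex data.
scale = 0.01
bias = (1.0, 2.0, 3.0)
quantized = [(100, -200, 300)]  # e.g. 16-bit component triples

positions = [tuple(scale * c + b for c, b in zip(v, bias))
             for v in quantized]
print(positions[0])  # approximately (2.0, 0.0, 6.0)
```

This is the kind of transformation that glTF deliberately avoids for its core vertex attributes (quantization exists there too, but as an extension).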
Anyway, it is obvious that M3G is outdated. I wanted to point at the structure of this format because it is arguably more polished and simpler than glTF's. I just wish glTF would become more usable.
It certainly makes sense to look for inspiration in other formats. And when there's a good, useful idea, then one can think about how this could be translated or applied to glTF. (Carefully - we want nice features, not niche features).
The extensions are one mechanism for that. The process, very roughly summarized: Everybody can propose an extension. Initially, it is a "Vendor" extension. People may use it or not use it. When there are multiple implementations for the extension, then it can be "promoted" to become an "EXT" extension, indicating that it gained some traction and has more than one real-world use case. If it turns out to be useful and widely adopted, then it can become ratified, and become an official "KHR" (Khronos) extension.
(If you think that you have a good idea of how links between glTF assets could be implemented, you could propose an extension as well: A PR with a JSON Schema and a specification text would probably receive some attention. But you'd have to expect feedback that points out limitations, caveats and implementation difficulties that you didn't have on the radar ;-) ).
I'm curious about this statement, though:
I just wish GLTF would become more usable.
Which aspects of glTF did limit the usability for you? Does this refer to content creation, tooling, documentation, ... ? (Maybe someone can even point you to the tool that will make it "more usable" for you...)
My main issue with glTF is the lack of a standardized way to use resources from external glTF files. Of course, extensions can solve this problem, but I can't be sure that third-party software will work with them.
I also find the way of storing links between assets quite strange, because it doesn't take advantage of JSON. I mean, in JSON you can store objects right where they are used, but instead glTF uses arrays of objects with references. If I didn't miss anything, first you need to load all the JSON objects, and only after that you will be able to resolve the links. M3G uses reference-based ordering and doesn't split nodes into different arrays, so you don't have to go through objects twice: you can deserialize nodes immediately as you read them from the file. It may be bad for compatibility to store first-mentioned glTF nodes inside the parent node's JSON object, but the M3G way of storing nodes should still be faster and simpler for a delivery format. (Maybe it just doesn't work with the glTF extensions system?)
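A toy sketch of the two loading styles being contrasted here; the structures are simplified stand-ins, not the real glTF or M3G schemas:

```python
# glTF style: flat arrays plus integer indices. Everything must be
# parsed first; references are resolved in a second pass.
doc = {
    "meshes": [{"name": "cube"}],
    "nodes": [
        {"mesh": 0, "children": [1]},
        {"mesh": 0, "children": []},
    ],
}

def resolve(doc):
    nodes = [dict(n) for n in doc["nodes"]]
    for n in nodes:  # second pass: turn indices into object references
        n["mesh"] = doc["meshes"][n["mesh"]]
        n["children"] = [nodes[i] for i in n["children"]]
    return nodes

nodes = resolve(doc)

# M3G style: reference-based ordering -- every object appears in the
# stream before anything that refers to it, so a single pass suffices.
stream = [
    ("mesh", {"name": "cube"}),
    ("node", {"mesh": 0, "children": []}),  # object 0 is already loaded
]
loaded = []
for kind, obj in stream:
    if kind == "node":
        obj["mesh"] = loaded[obj["mesh"]]  # resolvable immediately
    loaded.append(obj)
```

Note that the glTF layout also buys something: the second pass is what lets two different nodes share mesh index `0` without any ordering constraints on the file.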
As I said, links between different glTF assets had been considered, but engineering and specifying this in a way that is sustainable and can sensibly be implemented by clients is not as easy as it may look at first glance.
If I didn't miss anything, first you need to load all the JSON objects, and only after that you will be able to resolve the links.
That's right. I'm pretty sure that this has been discussed in the early development of glTF, so maybe somebody wants to chime in here. But a quick take from my side: One could probably make a case for storing the `nodes` directly (hierarchically) in the JSON:
```
"root": {
  ...,
  "children": [
    { ... },
    { "camera": 2 },
    { "mesh": 1 },
    { ... },
    {
      "children": [
        { ... },
        { "mesh": 1 }
      ]
    }
  ]
}
```
One reason why this is not applicable for all objects is already shown in this snippet: the `mesh: 1` is referred to twice, from different nodes (which here means that the same mesh is rendered twice). So there has to be a "referencing system" that operates using indices or IDs, at least for the objects that can be re-used at multiple places in the asset.
One reason why it may not make sense for nodes is that the hierarchy might become pretty deep. I'm not sure how well usual JSON parsers handle that.
A stronger technical reason why it also does not make sense for nodes is that there is no sensible way to refer to individual nodes. Of course, one could solve this "pragmatically" by assigning some sort of `id` to each node. The client could then create a dictionary (`map<string, node>`) to look up a node for a given ID, but this raises a lot of questions (about requiredness, uniqueness, valid IDs, error handling, and much more). And... if, at some point, every node had an ID, this would essentially be the same as an array of nodes again...
Not sure if I should have created an issue for this, but I really wanted to share this.
Back when J2ME cell phones were popular, there was a special J2ME API for 3D graphics, JSR-184 or "Mobile 3D Graphics API", which basically consisted of OpenGL ES 1.1 bindings with a scene graph and some optimizations for J2ME. Here's some info about it: https://en.m.wikipedia.org/wiki/Mobile_3D_Graphics_API
The most interesting part is that this API has its own graphics format, WHICH REALLY REMINDS ME OF GLTF. It too consists of nodes and stores materials, textures, skeletal animation, and meshes as a scene graph. Although this format is pretty old and was created for OpenGL ES 1.1, I believe that some inspiration and ideas can still be gained from it. There are even some cool features that glTF doesn't have: for example, you can store any node in a standalone file and reference it. This can be useful for storing materials as standalone files. I've seen games that stored light presets in a standalone file. 😳 Also it has a simpler node referencing system, which makes loading this format way easier than glTF. Here's the M3G format spec: https://nikita36078.github.io/J2ME_Docs/docs/jsr184/file-format.html