soadzoor closed this issue 3 years ago
I think @elalish is interested in this too.
Yes, that would be awesome!
I've implemented a web-compatible glTF -> glTF+Draco encoder here:
Usage:

```javascript
import { WebIO } from '@gltf-transform/core';
import { KHRONOS_EXTENSIONS, DracoMeshCompression } from '@gltf-transform/extensions';
import draco3d from 'draco3dgltf';

const io = new WebIO()
  .registerExtensions(KHRONOS_EXTENSIONS)
  .registerDependencies({
    'draco3d.encoder': await draco3d.createEncoderModule(),
  });

const document = io.readBinary(arrayBuffer); // read GLB from ArrayBuffer

document.createExtension(DracoMeshCompression)
  .setRequired(true)
  .setEncoderOptions({
    method: DracoMeshCompression.EncoderMethod.EDGEBREAKER,
    encodeSpeed: 5,
  });

const compressedArrayBuffer = io.writeBinary(document); // write GLB to ArrayBuffer
```
Perhaps it's useful for reference, or for post-processing GLTFExporter output. I'm not sure that the `draco3dgltf` module is web-compatible, though; if not, you may need to find another recent encoder build from the Draco repository. So far I've just been using it in Node.js myself, replacing the `WebIO` with `NodeIO` in the example above.
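For completeness, the Node.js variant mentioned above could look like the sketch below. This is an illustration under the assumption that `@gltf-transform/core`, `@gltf-transform/extensions`, and `draco3dgltf` are installed; the function name `dracoCompressFile` is hypothetical, and read/write may be async in newer library versions.

```javascript
// Sketch: Node.js variant of the example above, with NodeIO replacing WebIO.
// Assumes the packages above are installed; dracoCompressFile is a made-up name.
async function dracoCompressFile(inPath, outPath) {
  const { NodeIO } = await import('@gltf-transform/core');
  const { KHRONOS_EXTENSIONS, DracoMeshCompression } = await import('@gltf-transform/extensions');
  const draco3d = (await import('draco3dgltf')).default;

  const io = new NodeIO()
    .registerExtensions(KHRONOS_EXTENSIONS)
    .registerDependencies({
      'draco3d.encoder': await draco3d.createEncoderModule(), // WASM encoder
    });

  const document = io.read(inPath); // read .glb or .gltf from disk

  document.createExtension(DracoMeshCompression)
    .setRequired(true)
    .setEncoderOptions({
      method: DracoMeshCompression.EncoderMethod.EDGEBREAKER,
      encodeSpeed: 5,
    });

  io.write(outPath, document); // write Draco-compressed .glb
}
```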
@donmccurdy just a question, do you think this Draco encoder in the three.js repo would work for the web?
https://github.com/mrdoob/three.js/blob/dev/examples/js/libs/draco/draco_encoder.js
Btw do you happen to know why it's included twice in the repo?
https://github.com/mrdoob/three.js/blob/dev/examples/js/libs/draco/gltf/draco_encoder.js
See docs in https://github.com/mrdoob/three.js/tree/dev/examples/js/libs/draco

> ... do you think this Draco encoder in the three.js repo would work for the web?

Yes, I'm using the same encoder in glTF-Transform when running in a browser.

> Btw do you happen to know why it's included twice in the repo?

The Draco library provides two builds... I can't recall the difference at this point, and have filed https://github.com/google/draco/issues/717 to ask.
> See docs in https://github.com/mrdoob/three.js/tree/dev/examples/js/libs/draco

No mention of `draco_encoder.js` in the docs 😅. Is it different than `draco3dgltf`? I guess it can be used in the GLTFExporter, right?
The npm package `draco3dgltf` only works in Node.js; the Draco files in the three.js repository all work on the web. I don't think there's any Draco package that works in both environments.
After discussion in https://github.com/mrdoob/three.js/pull/22227, it does not look like we are going to support this in GLTFExporter directly. The changes are complex enough to make GLTFExporter harder to maintain, and I think it is better to spend our effort ensuring its output is simply correct. Lossy steps like Draco are probably better left to dedicated optimization tools.
If you'd like to export a three.js scene to a glTF file in a web browser, with Draco compression, an example using GLTFExporter + glTF-Transform is described in https://github.com/mrdoob/three.js/pull/22227#issuecomment-897310955. The size of the additional library is quite small compared to the Draco encoder itself. Meshopt compression (`EXT_meshopt_compression`) or quantization (`KHR_mesh_quantization`) can be exported in pretty much the same way.
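As an illustration of the quantization-only path, a minimal sketch using the `quantize()` function from `@gltf-transform/functions` might look like this. The function name `quantizeGLB` is hypothetical, and the exact options accepted by `quantize()` should be checked against the library version in use:

```javascript
// Sketch: quantization-only post-processing (KHR_mesh_quantization) of a GLB,
// via the quantize() function from @gltf-transform/functions.
// quantizeGLB is a made-up wrapper name for illustration.
async function quantizeGLB(arrayBuffer) {
  const { WebIO } = await import('@gltf-transform/core');
  const { KHRONOS_EXTENSIONS } = await import('@gltf-transform/extensions');
  const { quantize } = await import('@gltf-transform/functions');

  const io = new WebIO().registerExtensions(KHRONOS_EXTENSIONS);
  const document = await io.readBinary(new Uint8Array(arrayBuffer));

  // quantize() rewrites vertex attributes with reduced bit depth and adds the
  // KHR_mesh_quantization extension as needed.
  await document.transform(quantize());

  return io.writeBinary(document); // quantized GLB as an ArrayBuffer
}
```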
Even though this issue is closed, for those who might be interested, here is example web browser code that I am currently using on my end for applying either DRACO or MESHOPT compression to GLB exports.
All seems to be functional on my end, but feel free to correct or improve upon it.
These would be the required imports:
```html
<script type="importmap">
{
  "imports": {
    "three": "https://cdn.jsdelivr.net/npm/three@0.160.0/build/three.module.js",
    "three/addons/": "https://cdn.jsdelivr.net/npm/three@0.160.0/examples/jsm/",
    "ktx-parse": "https://cdn.jsdelivr.net/npm/ktx-parse/dist/ktx-parse.modern.js",
    "property-graph": "https://cdn.jsdelivr.net/npm/property-graph/dist/property-graph.modern.js",
    "@gltf-transform/core": "https://cdn.jsdelivr.net/npm/@gltf-transform/core/dist/core.modern.js",
    "@gltf-transform/extensions": "https://cdn.jsdelivr.net/npm/@gltf-transform/extensions/dist/extensions.modern.js"
  }
}
</script>

<!-- For encoding GLB with draco compression -->
<script src="https://cdn.jsdelivr.net/npm/three@0.160.0/examples/jsm/libs/draco/draco_encoder.js" defer></script>

<!-- For encoding GLB with meshopt compression -->
<script src="https://cdn.jsdelivr.net/npm/meshoptimizer/meshopt_encoder.js" defer></script>
```
This is how the `draco_compress()` / `meshopt_compress()` functions are called in my code; this is applied to all GLB exports:
```javascript
async function export_gltf( binary = false, alternative = false, draco = false, meshopt = false ) {
  if (gltf_obj) {
    start_export();
    await new Promise( resolve => setTimeout( resolve, 20 ) );

    const { GLTFExporter } = await import( "../static/jsm/exporters/GLTFExporter.js" );
    let gltf_exporter = new GLTFExporter( manager );

    let options;
    if (animations.length > 0) {
      options = { binary: binary, maxTextureSize: tex_res || Infinity, animations: animations };
    } else {
      options = { binary: binary, maxTextureSize: tex_res || Infinity };
    }

    let mesh_clone = alternative === true ? await create_meshes( alternative ) : gltf_obj;

    gltf_exporter.parse( mesh_clone, async json => {
      let blob;

      if (binary === true) {
        blob = new Blob( [ draco === true ? await draco_compress( new Uint8Array( json ) ) : ( meshopt === true ? await meshopt_compress( new Uint8Array( json ) ) : json ) ], { type: 'application/octet-stream' } );
        zip.file( filename + '.glb', blob );
        await process_zip( '_GLB' + ( alternative === true ? 'x' : '' ) + ( draco === true ? '_DRACO' : ( meshopt === true ? '_MESHOPT' : '' ) ) );
      } else {
        let string = JSON.stringify( json, null, 2 );
        blob = new Blob( [ string ], { type: 'text/plain' } );
        zip.file( filename + '.gltf', blob );
        await process_zip( '_GLTF' );
      }
    }, function( error ) { console.log( error ); }, options );
  }
}
```
These are the actual functions:
```javascript
/* Encode with draco compression - geometry */
async function draco_compress( arrayBuffer ) {
  const { WebIO } = await import( "@gltf-transform/core" );
  const { KHRONOS_EXTENSIONS, KHRDracoMeshCompression, EXTMeshGPUInstancing } = await import( "@gltf-transform/extensions" );

  const io = new WebIO();
  io.registerExtensions( KHRONOS_EXTENSIONS );
  io.registerExtensions( [ EXTMeshGPUInstancing ] ); // read instanced meshes
  io.registerDependencies({
    'draco3d.encoder': await DracoEncoderModule(), // DracoEncoderModule() returns a promise - await the resolved module
  });

  const doc = await io.readBinary( arrayBuffer ); // read GLB from ArrayBuffer

  doc.createExtension( KHRDracoMeshCompression )
    .setRequired( true )
    .setEncoderOptions({
      method: KHRDracoMeshCompression.EncoderMethod.EDGEBREAKER,
      encodeSpeed: 5,
    });

  return await io.writeBinary( doc ); // write compressed GLB to ArrayBuffer
}
```
```javascript
/* Encode with meshopt compression - geometry, morphs, animations */
async function meshopt_compress( arrayBuffer ) {
  const { WebIO } = await import( "@gltf-transform/core" );
  const { KHRONOS_EXTENSIONS, EXTMeshGPUInstancing, EXTMeshoptCompression } = await import( "@gltf-transform/extensions" );

  await MeshoptEncoder.ready; // global from meshopt_encoder.js

  const io = new WebIO();
  io.registerExtensions( KHRONOS_EXTENSIONS );
  io.registerExtensions( [ EXTMeshGPUInstancing ] ); // read instanced meshes
  io.registerExtensions( [ EXTMeshoptCompression ] ); // register before reading/writing
  io.registerDependencies({
    'meshopt.encoder': MeshoptEncoder,
  });

  const doc = await io.readBinary( arrayBuffer ); // read GLB from ArrayBuffer

  doc.createExtension( EXTMeshoptCompression )
    .setRequired( true )
    .setEncoderOptions({
      method: EXTMeshoptCompression.EncoderMethod.QUANTIZE, // or EXTMeshoptCompression.EncoderMethod.FILTER
    });

  return await io.writeBinary( doc ); // write compressed GLB to ArrayBuffer
}
```
Alternative GLB exports, as I call them, are marked as `GLBx`, `GLBx_d`, and `GLBx_m`; they seem to be functional for meshes and morph animations but not really other animations. These should be used if needed.
The whole code can be seen in my GLTF Viewer; just don't get confused by any surrounding code.
@donmccurdy since I updated my initial post to also include the `meshopt_compress()` function, there is a quick question I would like to ask you related to your gltf-transform functions.
If you could take a look at the code of the `meshopt_compress()` function itself, would your `quantize()` function make any change if it was to be included somehow?
I just couldn't figure out if it does anything different than just specifying `method: EXTMeshoptCompression.EncoderMethod.QUANTIZE`.
@GitHubDragonFly Yes, the meshopt compression method more or less requires[^1] quantization. Sorting vertices is also likely to be important for compression. See the documentation of EXTMeshoptCompression for the full API, but there's a shorter `meshopt()` function that will do most of this for you:
```javascript
import { MeshoptEncoder } from 'meshoptimizer';
import { meshopt } from '@gltf-transform/functions';

await MeshoptEncoder.ready;

// ...

const document = await io.readBinary( arrayBuffer );
await document.transform(
  meshopt({encoder: MeshoptEncoder, level: 'medium'})
);
const compressedArrayBuffer = await io.writeBinary( document );
```
[^1]: Technically it is possible to apply meshopt without quantization but this is rarely used, does not compress as well, and glTF Transform can't do it for you currently.
@donmccurdy thank you for the detailed explanation.
I could not find any way of importing those functions into the browser and making them work. So, for now, I will have to stick with what I currently have and hope to possibly improve it.
@GitHubDragonFly The `@gltf-transform/functions` module supports browser environments (it's used in https://gltf.report/), but I suspect that resolving its dependencies with an import map could be the problem. three.js supports import map usage, but import maps don't tend to scale well once you have more dependencies, and some of those dependencies have more dependencies, etc.
So I think the code you have above should work (with my addition about `meshopt()`!), but it would require the use of a bundler like Vite, Rollup, or ESBuild.
It's also possible that the import map method could be fixed by using a CDN that can resolve transitive dependencies automatically, such as https://esm.sh or https://www.skypack.dev/.
@donmccurdy thanks again, your responses are neat and I will try these new suggestions on my end.
@donmccurdy just so you are aware:
I could be doing something wrong but wanted to let you know.
Could you share a resulting GLB file, without compression?
@donmccurdy there is a zip file at the end of this post with the exports of this original GLB file which is approximately 3.59MB in size: Damaged Helmet
EDIT: All exports were done with textures set to 1k
You can see both Firefox and Chrome exports for the Current Code and the New Code, which show that the current meshopt-compressed GLB export is smaller than the uncompressed GLB export, while it's vice versa with this new code:
```javascript
async function meshopt_compress( arrayBuffer ) {
  const { WebIO } = await import( "@gltf-transform/core" );
  const { reorder, quantize } = await import( "@gltf-transform/functions" );
  const { KHRONOS_EXTENSIONS, EXTMeshGPUInstancing, EXTMeshoptCompression } = await import( "@gltf-transform/extensions" );

  await MeshoptEncoder.ready;

  const io = new WebIO();
  io.registerExtensions( KHRONOS_EXTENSIONS );
  io.registerExtensions( [ EXTMeshGPUInstancing ] ); // read instanced meshes
  io.registerExtensions( [ EXTMeshoptCompression ] );
  io.registerDependencies({
    'meshopt.encoder': MeshoptEncoder,
  });

  const doc = await io.readBinary( arrayBuffer ); // read GLB from ArrayBuffer

  await doc.transform(
    reorder( { encoder: MeshoptEncoder } ),
    quantize(),
  );

  doc.createExtension( EXTMeshoptCompression )
    .setRequired( true )
    .setEncoderOptions({
      method: EXTMeshoptCompression.EncoderMethod.QUANTIZE, // or EXTMeshoptCompression.EncoderMethod.FILTER
    });

  return await io.writeBinary( doc ); // write compressed GLB to ArrayBuffer
}
```
@donmccurdy just so you know, I have no real concern over any of this; I am happy with the current functionality, and if things can be improved, then even better.
Just take your time with this.
@donmccurdy just as an FYI, on my end I have also included as an option your WEBP `compressTexture` function for both DRACO and MESHOPT.
It seems to do nice things, especially with models that can have both DRACO + WEBP compression applied.
@GitHubDragonFly do you mind sharing the exact imports and import maps you're using to make that work with skypack? I think there's a bug, similar to the dual package hazard, where skypack brings in two versions of the `@gltf-transform/core` dependency; any `instanceof` checks fail when used on an object from the wrong copy of the package, and the compression code is failing to clean up data correctly as a result. Your Meshopt 'compressed' models have some unused data in the file which glTF Transform, working normally, would not have left there. A quick way to test this would be to recompress the file on the CLI:
```shell
npm install --global @gltf-transform/cli
gltf-transform prune DamagedHelmet.glb DamagedHelmet.prune.glb
gltf-transform meshopt DamagedHelmet.glb DamagedHelmet.prune+meshopt.glb
```
There's probably some CDN / import map arrangement that avoids double imports but I haven't figured it out quite yet.
> ...on my end I have also included as an option your WEBP
That's great! For future readers I'll just note that the web version of glTF Transform uses your web browser for image compression. It's generally OK but the Node.js version uses Sharp (https://sharp.pixelplumbing.com/) and can compress images much better. There's been some effort to get Sharp running in WebAssembly but I don't have a working version of that yet.
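For readers wanting to combine the texture step with the scripts above, a minimal sketch could look like this. This assumes the `textureCompress()` function from `@gltf-transform/functions` and its `targetFormat` option; in the browser no Sharp encoder is passed, so the library falls back to browser-based encoding as described above. The wrapper name `webp_compress_textures` is made up, and the API should be verified against the imported version:

```javascript
// Sketch: re-encode a document's textures as WebP in the browser, assuming the
// textureCompress() function from @gltf-transform/functions.
// webp_compress_textures is a made-up wrapper name for illustration.
async function webp_compress_textures( doc ) {
  const { textureCompress } = await import( "@gltf-transform/functions" );

  await doc.transform(
    textureCompress({ targetFormat: 'webp' }) // no encoder given; browser encoding is used
  );

  return doc; // same Document, textures now EXT_texture_webp
}
```

This step can run on the same `doc` before `io.writeBinary( doc )` in either the `draco_compress()` or `meshopt_compress()` function above.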
@donmccurdy here is what seems to be working properly for the `meshopt` function, at least with the tests that I just did:
```html
<script type="importmap">
{
  "imports": {
    "three": "https://cdn.jsdelivr.net/npm/three@0.160.0/build/three.module.js",
    "three/addons/": "https://cdn.jsdelivr.net/npm/three@0.160.0/examples/jsm/",
    "@gltf-transform/core": "https://esm.sh/@gltf-transform/core",
    "@gltf-transform/extensions": "https://esm.sh/@gltf-transform/extensions",
    "@gltf-transform/functions": "https://esm.sh/@gltf-transform/functions"
  }
}
</script>
```
I tried skypack initially but was getting some weird results, as you explained about the dual package hazard, so I switched to esm.sh.
Now we can all relax.
Awesome, and thanks for sharing your GLTFExporter + Meshopt setup steps here!
@donmccurdy Hello, there is a difference between using the WebIO library and the gltfpack CLI, and this is the difference.
@RyugaRyuzaki meshopt compresses the data in place; it does not reduce vertex count. gltfpack does much more than just apply meshopt compression. A good start for reducing vertex count in glTF Transform would be to add `weld()` and (optionally) `simplify()` steps to the existing scripts in this thread. For help with other optimizations or particular models, I'd recommend starting a thread in https://github.com/donmccurdy/glTF-Transform/discussions.
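The suggested weld/simplify steps can be sketched roughly as follows. This assumes `weld()` and `simplify()` from `@gltf-transform/functions` and `MeshoptSimplifier` from the `meshoptimizer` package; the `ratio` and `error` values are illustrative only, and the wrapper name `reduce_vertex_count` is made up:

```javascript
// Sketch: reduce vertex count before compression, per the suggestion above.
// weld() merges duplicate vertices; simplify() is a lossy decimation step
// driven by MeshoptSimplifier. reduce_vertex_count is a made-up wrapper name.
async function reduce_vertex_count( doc ) {
  const { weld, simplify } = await import( "@gltf-transform/functions" );
  const { MeshoptSimplifier } = await import( "meshoptimizer" );

  await MeshoptSimplifier.ready;

  await doc.transform(
    weld(), // merge duplicate vertices
    simplify({ simplifier: MeshoptSimplifier, ratio: 0.5, error: 0.001 }) // illustrative values
  );

  return doc;
}
```

Run this on the `doc` before creating the compression extension in the `meshopt_compress()` function above.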
@donmccurdy thanks for pointing this out.
I have just added a `weld()` option to the GLTF-based viewers on my end, and would suggest that anyone looking up scripts in this topic check the current scripts in my repository (for whatever changes might have been implemented since the original posts).
Hi,
We can already decode glTF files with Draco compression on the client side with GLTFLoader, which is awesome, and I think it would be even more awesome if we had the ability to Draco-compress the glTF file in GLTFExporter.