realXtend / tundra

realXtend Tundra SDK, a 3D virtual world application platform.
www.realxtend.org
Apache License 2.0

Add support for a custom network-efficient mesh file format. #498

Open juj opened 12 years ago

juj commented 12 years ago

I recently implemented a custom mesh file format to gfxapi. There's a partially functional converter from Ogre .meshes to gfxapi .msh files. Size-wise, they compare as follows in the Oulu3D scene:

Oulu3D meshes in Ogre .mesh format (243 files): 64.4 MB
Ogre .mesh files packed into one .7z file: 11.1 MB

Oulu3D meshes in a custom binary-packed .msh format: 19.0 MB
.msh files packed into one .7z file: 2.3 MB

I know there's still room to compress the .msh files further; with that done, I estimate the uncompressed size would end up somewhere around 4 MB.

There's huge potential to improve scene loading times for Tundra scenes if a better file format than Ogre .mesh is used, one that uses bits efficiently to represent the geometry data.

Stinkfist0 commented 10 years ago

@cadaver BTW how does Urho's binary mesh compare to Ogre ones?

jonnenauha commented 10 years ago

I'm all for using another binary format as long as it's sensible and a reader can be implemented in C++ and JavaScript.

To be honest, though, zipping up binary mesh files is not that bad, but you still need to request them one by one in the web clients. If we can find a binary format that packs a lot of meshes together, that would be great. I guess the XML3D blast format is exactly for this (http://xml3d.org/xml3d/slides/web3d-blast/#/).
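For illustration, here is a purely hypothetical sketch of what such a packed multi-mesh container could look like; the struct names, fields and magic value are assumptions made up for this example, not an existing Tundra or XML3D format:

    // Hypothetical sketch of a multi-mesh container: one file packing many
    // meshes so a web client can fetch them in a single request.
    // All names and field choices here are illustrative assumptions only.
    #include <cstdint>
    #include <string>
    #include <vector>

    struct PackedMeshEntry
    {
        std::string name;     // mesh name, e.g. the original asset reference
        uint64_t dataOffset;  // byte offset of this mesh's data blob in the file
        uint64_t dataSize;    // size of the data blob in bytes
        uint32_t vertexCount;
        uint32_t indexCount;
    };

    struct PackedMeshContainer
    {
        char magic[4];                    // e.g. "TMSH"
        uint32_t version;
        std::vector<PackedMeshEntry> toc; // table of contents, read first
        // Mesh data blobs follow the TOC; each entry's offset points into them,
        // so a client can fetch one byte range or the whole file in one request.
    };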

antont commented 10 years ago

glTF and blast / pop buffers are a couple of candidates, I figure.

cadaver commented 10 years ago

The bulk of the data in Ogre3D meshes, Urho3D meshes and glTF alike should be vertex data that is ready to be uploaded to a vertex buffer, so in that sense they're equal.

With the Ogre3D format we may unwittingly be paying a penalty for information we don't need, e.g. stencil shadow edge lists. Try the -e flag in OgreXMLConverter (when converting from mesh.xml to mesh) and see the difference :)
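For reference, that would be roughly `OgreXMLConverter -e model.mesh.xml model.mesh` (the filenames here are just placeholders; check the tool's own help output for the exact option syntax), where -e skips generating the edge lists used for stencil shadows.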

A tool or pipeline step to weed out that unnecessary extra data would be the easiest first step to smaller mesh data.

I believe clb was also after compression achieved with vertex format manipulation, for example using smaller integers for the coordinates where applicable.
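As a minimal sketch of that kind of vertex format manipulation (an assumed quantization scheme for illustration, not a format Tundra actually ships), positions can be stored as 16-bit integers relative to the mesh bounding box and expanded back to floats at load time, dropping position storage from 12 to 6 bytes per vertex:

    // Minimal sketch: quantize float positions into 16-bit integers relative to
    // the mesh bounding box (assumed scheme, for illustration only).
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    struct QuantizedPositions
    {
        float minBounds[3];           // bounding box stored once per mesh
        float maxBounds[3];
        std::vector<uint16_t> coords; // 3 x uint16 per vertex
    };

    QuantizedPositions Quantize(const std::vector<float>& xyz,
                                const float mins[3], const float maxs[3])
    {
        QuantizedPositions q;
        for (int i = 0; i < 3; ++i) { q.minBounds[i] = mins[i]; q.maxBounds[i] = maxs[i]; }
        q.coords.reserve(xyz.size());
        for (std::size_t i = 0; i < xyz.size(); ++i)
        {
            const int axis = static_cast<int>(i % 3);
            const float range = maxs[axis] - mins[axis];
            const float t = range > 0.f ? (xyz[i] - mins[axis]) / range : 0.f; // 0..1
            q.coords.push_back(static_cast<uint16_t>(t * 65535.f + 0.5f));
        }
        return q;
    }

    // Expand a stored coordinate back to a float at load time.
    inline float Dequantize(uint16_t v, float minVal, float maxVal)
    {
        return minVal + (maxVal - minVal) * (v / 65535.f);
    }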

erno commented 10 years ago

Some references for compression:

https://code.google.com/p/webgl-loader/wiki/IndexCompression

http://openctm.sourceforge.net/media/DevelopersManual.pdf + http://openctm.sourceforge.net/media/FormatSpecification.pdf

CTM might be a reasonable basis for a common native + browser client format since there are C++ and JS implementations. Bare .ctm doesn't have good material support so it would need something on top of it.
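For what it's worth, reading a .ctm on the native side is pretty simple with the OpenCTM C API; a minimal sketch (error handling kept short, the filename is just a placeholder):

    // Minimal sketch of loading geometry via the OpenCTM C API on the native side.
    // Bare .ctm carries geometry (plus optional UVs/attributes), so material data
    // would need to travel through some separate channel on top of it.
    #include <openctm.h>
    #include <cstdio>

    bool LoadCtmMesh(const char* filename)
    {
        CTMcontext ctm = ctmNewContext(CTM_IMPORT);
        ctmLoad(ctm, filename);
        if (ctmGetError(ctm) != CTM_NONE)
        {
            ctmFreeContext(ctm);
            return false;
        }

        const CTMuint vertexCount = ctmGetInteger(ctm, CTM_VERTEX_COUNT);
        const CTMuint triangleCount = ctmGetInteger(ctm, CTM_TRIANGLE_COUNT);
        const CTMfloat* vertices = ctmGetFloatArray(ctm, CTM_VERTICES); // 3 floats per vertex
        const CTMuint* indices = ctmGetIntegerArray(ctm, CTM_INDICES);  // 3 indices per triangle

        // ...copy vertices/indices into the renderer's vertex/index buffers here...
        std::printf("Loaded %u vertices, %u triangles\n", vertexCount, triangleCount);
        (void)vertices; (void)indices;

        ctmFreeContext(ctm);
        return true;
    }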

glTF guys have compression on their agenda but have said it's not a feature targeted for glTF 1.0. They have talked about MPEG 3DGC - https://github.com/KhronosGroup/glTF/wiki/Open-3D-Graphics-Compression

There's some talk and benchmark graphs about glTF compression in a recent glTF slideset (Aug 2014): http://presentations.web3d.org/2014/Web3D2014/Workshops/REST%20and%203D%20Workshop/Neil%20Trevett%20-%20glTF%20Overview%20Aug14%20nt1.pdf

antont commented 10 years ago

One of our motives for testing glTF was the geometry compression support, and it does work in our old demo(s), so it wasn't just talk on their end.

I believe it was indeed with that 3DGC thing.

IIRC the original nine-blocks glTF WebTundra demo which is online uses that compression; the binary size for all geometry went down from 23M to 3.6M, apparently.

But yeah, it may be that this is currently somewhat open; AFAIK it's just that you can optionally say in the JSON that some part uses some codec, and those are extensions.

Update with demo info:

3.6M Nov 26  2013 MastersceneBlendercompression.bin
 23M Nov 26  2013 uncompressed.bin

The glTF JSON for that is http://playsign.tklapp.com:8000/glTF-webgl-viewer/model/oulu/mastersceneBlender.json

It seems to be like this in there:

    "meshes": {
        "AdministrationParkBlock-mesh": {
            "extensions": {
                "Open3DGC-compression": {
                    "compressedData": {
                        "bufferView": "bufferView_2669",
                        "byteOffset": 2481628,
                        "count": 8869,

erno commented 10 years ago

I seem to remember that we did the compression tests with CTM? Tapani did some hacks to get around the lack of material support.

CTM has a lot of tunables and supports optional lossy compression, so any single number reported for CTM leaves out part of the story. E.g. in those glTF slides it says MPEG is better than CTM, and they give a single number (compression ratio) for CTM...

Edit:

here's the CTM test: https://dl.dropboxusercontent.com/u/60485425/Playsign/GitHub/OuluThreeJS/index.html

I guess we had tests for both. The binary file there is 216 kB.

antont commented 10 years ago

We tested compression earlier with CTM (and also used that to get worker-based loading in three.js, as the OpenCTM loader there used workers nicely).

Later, when we tested glTF, we did that nine-blocks compression test with it too.

Ah, and the data in erno's CTM test link is one block from the high-poly version, I think, so it's not comparable with the "from 23M to 3.6M" figures for the raw and compressed glTF/Open3DGC bins of the optimized nine blocks, but it's a good file for benchmarking too.

jonnenauha commented 10 years ago

The thing that concerns me in the glTF slides is the JS decoding time table.

100k triangles: 130 msec desktop and 1045 msec on mobile (S4, not a shitty phone either)

    Decoding speed will become even more critical with dense 3D meshes generated by 3D digitization technologies (e.g. 3D scanners)
    3D Codec can be accelerated by WebCL Kernels or (eventually) hardware

It would be interesting to know whether it's any faster than just downloading the uncompressed thing and pushing it to the GPU directly. I'm fairly sure that on mobile it will be faster if you are on 3G/4G or WiFi. The decode can probably be moved to a web worker thread, but it might still be slower to get the whole scene on screen.
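As a rough back-of-envelope (all numbers below are assumptions for illustration, not measurements from the slides): a 100k-triangle mesh with ~60k unique vertices at 32 bytes per vertex plus 16-bit indices is on the order of 2.5 MB uncompressed, which is roughly one second of transfer on a ~20 Mbit/s link. So a ~1 second mobile decode only pays for itself if the compression cuts the download by a comparable amount; on a fast WiFi link it quite possibly doesn't.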

If it's close or faster, then of course downloading less stuff is the first priority.

Edit: We currently send gzipped Ogre meshes. Big meshes take around 4-5 msec to end up in the three.js mesh. I parse the file and create the WebGL buffers using typed arrays; it's very fast. I'll have to try out @cadaver's trick to remove the excess stuff; I might have already tried it, but it may be that our current meshes don't have that data to begin with (Blender/Unity export).

antont commented 10 years ago

Yep, it can make sense at least sometimes to use uncompressed data, as there's native zip support in the browsers anyway; that's what clb said on IRC today. Also, for uncompressed glTF, so far the JSON that points to the array ranges seems sane enough to me.