Reon90 opened this issue 3 years ago
@Reon90 I think it's a good idea. Currently, the gltf-test results are updated manually, so I think we would need to devise some mechanism for aggregating them (a results database, automation, etc.).
I tentatively calculated the coverage in Excel. The current results are as follows.
# | Library | Coverage (%) |
---|---|---|
1 | Babylon.js 5.0.0alpha | 100% |
2 | Filament v1.9.10rc | 92% |
3 | RedCube.js v2.5.4 | 92% |
4 | Three.js r125 | 89% |
5 | Hilo3d v1.15.15 | 86% |
6 | PlayCanvas v1.38.4 | 85% |
7 | Cesium.js 1.77 | 81% |
8 | RedGL 2020.03.18 | 81% |
9 | pex-renderer 3.0.0-34 | 81% |
10 | CZPG.js 2018.05.17 | 76% |
11 | Ashes v0.3.2 | 72% |
12 | X3DOM 1.8.2dev | 69% |
13 | Grimoire.js 2017.12.04 | 68% |
14 | ClayGL v1.3.0 | 64% |
15 | Khronos glTF Viewer 1.0.0 | 63% |
16 | GLBoost v0.0.4 | 60% |
17 | ArcGIS JS API 4.13 | 58% |
18 | minimal-gltf-loader 2017.11.09 | 57% |
19 | xeogl 2019.02.09 | 43% |
20 | Unity 2017.3.1 | 43% |
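
For reference, a minimal sketch of how such a coverage figure could be derived from per-model pass/fail results. The model names and the number of entries below are placeholders, not the actual gltf-test data:

```ts
// Hypothetical per-model pass/fail results for one library; the model
// names and the number of entries are placeholders, not the real
// gltf-test matrix.
const results: Record<string, boolean> = {
  Box: true,
  BoxTextured: true,
  AnimatedMorphCube: false,
  // ...one entry per sample model in the suite
};

const total = Object.keys(results).length;
const passed = Object.values(results).filter(Boolean).length;

// coverage(%) = passed models / total models, rounded as in the table above
console.log(`coverage: ${Math.round((passed / total) * 100)}%`);
```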
Rather than a full database, the pass/fail result matrix could be stored in a JSON file in this repo. Then there could be automated tooling for aggregating and filtering results, and the "field of checkmarks" could be generated from this rather than updated by hand. There would also need to be a (web-based) tool for viewing a particular test and then updating its result, saving out a new JSON file.
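
As a rough sketch of what that could look like — the file name `results.json`, the status values, and the layout below are all assumptions for illustration, not an agreed-upon format:

```ts
import { readFileSync } from "fs";

// Hypothetical layout for a results.json checked into the repo:
// library name -> model name -> status. All names, and the file name
// itself, are assumptions for illustration.
type Status = "pass" | "fail" | "untested";
type ResultMatrix = Record<string, Record<string, Status>>;

const matrix: ResultMatrix = JSON.parse(readFileSync("results.json", "utf8"));

// Emit one markdown table row per library so the "field of checkmarks"
// is generated from the data instead of edited by hand.
const models = Object.keys(Object.values(matrix)[0] ?? {});
const mark = (s: Status) => (s === "pass" ? "✅" : s === "fail" ? "❌" : "–");

for (const [library, results] of Object.entries(matrix)) {
  const row = models.map((m) => mark(results[m] ?? "untested")).join(" | ");
  console.log(`| ${library} | ${row} |`);
}
```

Keeping the matrix flat like this would also keep JSON diffs easy to review when a single result changes.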
@emackey Thanks! The idea of using a JSON file instead of a database sounds simple and good. I'll give it some thought.
Add the test coverage of each implementation, based on all tests.
Example: