Now that the ModelExperimental implementation is pretty far along, we wanted to start measuring its performance compared to the old Model. This issue is to record the results.
Testing Loading Time
To start simple, I performed a test of loading times for a few tilesets to cover some common cases.
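As a rough illustration, here is a minimal sketch of how a loading-time measurement like this could be set up in a Sandcastle. The tileset URL is a placeholder, and the actual timing methodology used for the spreadsheet may have differed:

```js
const viewer = new Cesium.Viewer("cesiumContainer");

// Start the clock when the tileset is constructed.
const start = performance.now();
const tileset = new Cesium.Cesium3DTileset({
  url: "../../SampleData/Cesium3DTiles/Tilesets/Tileset/tileset.json", // placeholder URL
});
viewer.scene.primitives.add(tileset);

// allTilesLoaded fires when every tile needed for the current view has loaded;
// it can fire again if the camera moves, so only the first firing is recorded here.
let recorded = false;
tileset.allTilesLoaded.addEventListener(function () {
  if (!recorded) {
    recorded = true;
    const seconds = (performance.now() - start) / 1000.0;
    console.log("Loading time: " + seconds.toFixed(2) + " s");
  }
});
```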
Data Used:
- CDB San Diego model (EULA, tiled with cdb-to-3d-tiles) - this is a collection of 4 tilesets, a mix of b3dm and i3dm models. This tileset features a lot of textures shared between tiles for the buildings.
- b3dm photogrammetry
- pnts point clouds
There were some differences, as some of the local tilesets were gzipped. In the spreadsheet (see Results below) I marked cases where the tilesets were compressed.
Sandcastle links
I used several Sandcastle links with slight modifications (e.g. changing the enableModelExperimental flag). The spreadsheet includes links for every variation used. The links here are just a representative set.
Note that these links require a locally built version of Sandcastle.
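For context, the only intended difference between paired runs was the enableModelExperimental flag. Here is a minimal sketch of that variation, assuming the flag is passed as a Cesium3DTileset constructor option; the exact way each Sandcastle set it may differ:

```js
// Same tileset loaded two ways; only the flag changes between runs.
const tilesetUrl = "<tileset URL>"; // placeholder

// Old Model path (the default at the time).
const tilesetOld = new Cesium.Cesium3DTileset({
  url: tilesetUrl,
  enableModelExperimental: false,
});

// ModelExperimental path.
const tilesetNew = new Cesium.Cesium3DTileset({
  url: tilesetUrl,
  enableModelExperimental: true,
});
```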
Results
All the results can be found in this spreadsheet. For the color-coded cells, green means ModelExperimental was faster at loading the model, and red means it was slower.
Performance Stats.xlsx
Preview:
Takeaways
Overall the performance felt comparable; loading times differed by a couple of seconds, not tens of seconds or more. That said, in most cases ModelExperimental was slower, so it's worth investigating further.
The CDB elevation and buildings were two cases where ModelExperimental performed better. This may be because these models share textures. GltfLoader has better texture caching than the old Model; no duplicate textures should be uploaded to the GPU. However, profiling memory would be a better way to check whether this is true.
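To illustrate the texture-sharing point, here is a rough, simplified sketch of the caching idea. This is not CesiumJS's actual GltfLoader or ResourceCache code, and the names are hypothetical:

```js
// Hypothetical cache keyed by image URI: a texture shared by many tiles is
// decoded and uploaded to the GPU only once, then reference-counted.
const textureCache = new Map();

function getOrLoadTexture(imageUri, uploadToGpu) {
  // uploadToGpu is a placeholder for whatever decodes the image and creates the GPU texture.
  let entry = textureCache.get(imageUri);
  if (entry === undefined) {
    entry = { texture: uploadToGpu(imageUri), referenceCount: 0 };
    textureCache.set(imageUri, entry);
  }
  entry.referenceCount++;
  return entry.texture;
}

function releaseTexture(imageUri) {
  const entry = textureCache.get(imageUri);
  if (entry !== undefined && --entry.referenceCount === 0) {
    entry.texture.destroy(); // free GPU memory once no tile references it
    textureCache.delete(imageUri);
  }
}
```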
Point clouds were the area where ModelExperimental performed the worst by percent difference. The new PntsLoader works differently from the old PointCloud class, so it would be good to profile this further.
Likewise, other cases, like the photogrammetry one, should be profiled further to see why they are slower.
It would be good to test memory too, though we'd need to implement #9886 first.

CC @lilleyse @j9liu @IanLilleyT
Hello! Any progress? We now have some huge 3D Tiles data, and in a fixed view the loading time is longer in version 1.97 than in previous versions. Have you investigated the cause of the performance drop? @ptrgags