@lasselaakkonen your solution makes sense, since `maxVerts` is indeed the number of vertices, not the length of the storage for vertices.
Please correct me if I am wrong, but I don't believe 6c0ae8d would fix the issue entirely.

Have I understood correctly that, in theory, there can be an unlimited number of indices for even a small number of positions? If so, the limiting factor for the buffers is the number of indices, and the indices must be used for checking whether the buffer is full.
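For example (illustrative numbers, not taken from any of the models above): a cube needs only 8 unique positions but 36 triangle indices, and every extra triangle that reuses existing vertices adds indices without adding positions.

```js
// A cube: 8 unique vertex positions, 12 triangles, 36 indices.
// Reusing vertices adds indices without adding positions, so
// indices.length can grow without bound relative to positions.length.
const positions = new Float32Array(8 * 3); // 24 floats for 8 vertices
const indices = new Uint32Array(36);       // 12 triangles * 3 indices each
console.log(indices.length > positions.length); // true
```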
In 5f02ce3 I switched from checking `lenPositions` to checking `lenIndices` to determine whether the buffer is full.
6c0ae8d happens to fix the original issue for me with at least one model, but it also creates an unnecessary extra `BatchingLayer`, I guess because of the particular ratio of positions to indices in that model.
We have a similar issue with http://openifcmodel.cs.auckland.ac.nz/_models/2019030617043%20-%20APHS%205-3-19.ifc. A lot of objects are missing.
It seems to be fixed for this model with the latest commit.
I just committed a fix in https://github.com/xeokit/xeokit-sdk/commit/1b424a8ababff35fe5202993eea33a3cc3667847
@Amoki do all the missing objects appear in this test model? https://xeokit.github.io/xeokit-sdk/examples/#loading_XKT_APHS
They all seem to be there!
When models have lots of indices, not all indices are rendered.
I have 0 experience with the xeokit codebase, so take everything below with a grain of salt.
Reproducing

v1.1.0
e4b350af

Investigation
`PerformanceModel.constructor()` creates a new `BatchingLayer` when `!this._currentBatchingLayer.canCreatePortion(positions.length)`.

`canCreatePortion()` checks for space based on `(!this._finalized && this._buffer.lenPositions + lenPositions) < (this._buffer.maxVerts * 3)`.
That does not seem to be checking the right thing. `lenPositions` is not guaranteed to equal `lenIndices * 3` (= `maxVerts * 3`); there can be fewer positions than that (or maybe there always are?). The number of indices seems to be the limiting factor when filling up the `BatchingBuffer`s? So a new `BatchingLayer` is not created even when the current one is already full.

This does not cause any errors in code, at least because when `BatchingLayer` adds indices to the indices buffer in `buffer.indices[buffer.lenIndices + i] = indices[i] + vertsIndex;`, the `Uint32Array` seems to simply no-op when assigning values to out-of-range array indices (at least on Chrome). So the layer is eventually rendered, but model indices after 5 000 000 are ignored.
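A quick way to see that typed-array behaviour (standalone snippet, not from the SDK):

```js
// Out-of-range writes to a typed array are silently ignored:
// no exception, no resize, the value is simply dropped.
const buf = new Uint32Array(2);
buf[5] = 123;
console.log(buf.length); // 2
console.log(buf[5]);     // undefined - nothing was stored
```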
The 5 000 000 limit comes from `BatchingBuffer`, which defines `MAX_VERTS` like this:
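(The snippet below is a sketch of the definition in BatchingBuffer.js; the import path and the fallback value used when `OES_element_index_uint` is unavailable are assumptions.)

```js
import {WEBGL_INFO} from "../../../webglInfo.js"; // import path is an assumption

// With 32-bit indices available, a BatchingBuffer holds up to 5 000 000 vertices.
const bigIndicesSupported = WEBGL_INFO.SUPPORTED_EXTENSIONS["OES_element_index_uint"];
const MAX_VERTS = bigIndicesSupported ? 5000000 : 65530; // fallback value is an assumption
```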
Possible solution

This is where it gets very dicey. I don't know all the consequences this has, but it works for me with at least 2 different models.
In `BatchingLayer`, changing this:
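(The first snippet quotes the check from the investigation above inside a sketched method body; the second only illustrates the idea, and the exact capacity compared against `lenIndices` is an assumption.)

```js
canCreatePortion(lenPositions) {
    // current check: compares incoming positions against the positions storage size
    return (!this._finalized && this._buffer.lenPositions + lenPositions) < (this._buffer.maxVerts * 3);
}
```

To, for example, this:

```js
canCreatePortion(lenIndices) {
    // sketch: gate on the number of indices instead, since indices seem to be
    // the limiting factor; the exact bound to compare against is an assumption
    return (!this._finalized && this._buffer.lenIndices + lenIndices) < this._buffer.maxVerts;
}
```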
And in `PerformanceModel`, in the only usage of `canCreatePortion()`, change `canCreatePortion(positions.length)` to `canCreatePortion(indices.length)`.

When using a model with 6 500 000 indices, this ends up creating two layers, which are both rendered, instead of a single layer with 5 000 000 indices.
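A sketch of that call-site change (the surrounding code is illustrative, not the exact SDK source):

```js
// was: !this._currentBatchingLayer.canCreatePortion(positions.length)
if (!this._currentBatchingLayer.canCreatePortion(indices.length)) {
    // the buffer would overflow its indices storage: finalize the current
    // BatchingLayer and start a new one, as described in the investigation above
}
```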