Open xeolabs opened 10 months ago
I have a question: does the BCF viewpoint DTO hold the set of all visible objects in the viewpoint? Or it holds just the set of objects of interest?
That is the base data you might have at load time, and it is not the same, UX-wise, to:
quickly load three doors and a window (the objects of interest), then the rest of the model after a delay
instead of quickly loading all objects visible by the BCF viewpoint, then the rest of the model after a delay
(asking, as the implementation will depend on what data you have at load time)
The BCF contains the IDs of all the visible objects
The BCF also contains the camera eye, look and up, so could also provide the initial view volume
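To make that concrete, here's a sketch of pulling the objects of interest and the camera out of a BCF viewpoint DTO. The field names follow the BCF-API viewpoint schema (`components.visibility`, `perspective_camera`); the function itself is illustrative, not xeokit API.

```javascript
// Sketch: extract the priority-load set and camera from a BCF viewpoint DTO.
// Field names follow the BCF-API viewpoint schema; the rest is illustrative.
function parseViewpoint(viewpoint) {
    const visibility = viewpoint.components.visibility;
    // When default_visibility is false, the exceptions list the visible objects.
    const visibleIds = visibility.default_visibility
        ? [] // everything is visible; no priority subset to extract
        : visibility.exceptions.map((component) => component.ifc_guid);
    const cam = viewpoint.perspective_camera;
    return {
        visibleIds,
        eye: cam.camera_view_point,   // {x, y, z}
        look: cam.camera_direction,   // direction vector, not a target point
        up: cam.camera_up_vector
    };
}
```

With this, a loader could request geometry for `visibleIds` first and position the camera immediately, before the rest of the model arrives.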
Just starting off this ticket to track thoughts.
As I see it, there are two types of streaming:
(1) open-ended streaming across a large scene, where objects are continually loaded and evicted as the camera moves
(2) partial loading of a single model, where we load the most important objects first and the rest later
There are different implementation requirements between these two.
(1) would require an eviction strategy for stale objects, and an efficient way to repack batched scene representations. Pretty complex. I think this is more a "landscape traversal" thing, not quite xeokit's style.
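For the record, the eviction side of (1) could start as simple as an LRU cache keyed on most recent use. This is a standalone sketch of that idea, not xeokit API; a real implementation would also have to repack the batched GPU representations on eviction, which is the complex part.

```javascript
// Sketch of an LRU eviction strategy for streamed objects (idea (1)).
// Standalone illustration; not part of the xeokit API.
class ObjectCache {
    constructor(maxObjects) {
        this.maxObjects = maxObjects;
        this.objects = new Map(); // Map iterates in insertion order => LRU order
    }
    touch(id, createFn) {
        if (this.objects.has(id)) {            // hit: move to most-recently-used position
            const obj = this.objects.get(id);
            this.objects.delete(id);
            this.objects.set(id, obj);
            return obj;
        }
        const obj = createFn(id);              // miss: load/create the object
        this.objects.set(id, obj);
        while (this.objects.size > this.maxObjects) {
            const staleId = this.objects.keys().next().value; // least recently used
            this.objects.delete(staleId);      // evict; real impl would also repack GPU data
        }
        return obj;
    }
}
```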
(2) is much simpler for xeokit. This would require modifications to `SceneModel` that allow us to keep adding objects to it. The `SceneModel` could have the ability to call `finalize()` each time we add new objects, which would reallocate data textures and VBOs.

(2) is basically where we have a model, but we only load some of it, the best bits, and are able to cancel at any time. Once we have all the objects in memory, we keep them there. The issue we address is shortening the time between loading a model and seeing what we want to see (e.g. objects from the viewpoint given in a BCF ticket), then being able to abort the load and move on to the next model or BCF ticket.
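A minimal sketch of the multi-finalize idea, using a stand-in class rather than the real `SceneModel`; in the real thing, `finalize()` would reallocate the data textures and VBOs each time.

```javascript
// Sketch of the multi-finalize idea (2): keep adding objects, then
// re-finalize to rebuild GPU-side storage. Stand-in class, not the real SceneModel.
class StreamingSceneModel {
    constructor() {
        this.objects = [];
        this.pending = [];
        this.finalizedCount = 0;
    }
    createObject(cfg) {
        this.pending.push(cfg);        // staged until the next finalize()
    }
    finalize() {
        this.objects.push(...this.pending);
        this.pending.length = 0;
        // Real implementation: reallocate data textures / VBOs to fit this.objects
        this.finalizedCount++;
    }
}

// Usage: load the objects of interest first, then the rest after a delay.
const model = new StreamingSceneModel();
["door1", "door2", "door3", "window1"].forEach((id) => model.createObject({ id }));
model.finalize();                       // viewer can show these immediately
["wall1", "slab1"].forEach((id) => model.createObject({ id }));
model.finalize();                       // rest of the model arrives later
```

Aborting between finalizes would then just mean never staging the remaining objects.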
Ideas
Proxy objects/metaobjects
Create `Entity` objects up front, but only load geometry for them once we set their `Entity.visible` flag true, or maybe set an `Entity.loaded` flag true. We would need to be able to build the whole TreeView up front, I think.
Would we then still load the XKT data into memory, but create the actual geometry from it on demand? Or would the XKT be split, and only chunks of it loaded (via HTTP etc.) as we need to create geometry?
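Either way, the proxy-Entity idea could look something like this sketch: the object exists up front (so the TreeView can be built), and setting `loaded` true the first time triggers a geometry fetch. All names here are illustrative, not the actual xeokit API; the fetch callback could read from XKT already in memory, or request a chunk of a split XKT over HTTP.

```javascript
// Sketch of a proxy Entity: exists up front, geometry fetched lazily
// the first time 'loaded' is set true. Names are illustrative.
class ProxyEntity {
    constructor(id, fetchGeometry) {
        this.id = id;
        this._fetchGeometry = fetchGeometry; // e.g. reads a chunk of a split XKT
        this._loaded = false;
        this.geometry = null;
    }
    set loaded(value) {
        if (value && !this._loaded) {
            this.geometry = this._fetchGeometry(this.id); // fetch only once, on demand
        }
        this._loaded = value;
    }
    get loaded() {
        return this._loaded;
    }
}
```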
Streaming based on split models
Experiment with data textures for (1)
I started an experimental branch for (1), where I was going to have a dynamic create/destroy thing happening for data textures. That would involve pre-allocating a bunch of big data textures, then creating/destroying objects in those on demand. However, we need a separate data texture for each RTC origin, because the JavaScript execution space needs to rebuild the view matrix using the RTC origin, in double-precision JavaScript math; hence we need to chunk things on the RTC origins. The GPU has the single-precision limit, so it can't build its own per-origin view matrix. That leads to a zillion data textures.
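To make the double-precision point concrete, here's a standalone sketch of the RTC subtraction (no xeokit API involved). Subtracting the origin in JavaScript doubles first leaves a small remainder that survives the cast to float32; casting the absolute coordinate to float32 first destroys the fractional part, which is exactly why the per-origin work has to happen CPU-side.

```javascript
// Why per-RTC-origin chunking is needed: coordinates must be made
// origin-relative in double-precision JS before being handed to the
// GPU as single-precision floats.
const origin = [10000000, 10000000, 10000000];          // RTC origin (tile center)
const eye = [10000000.5, 10000000.25, 10000000.125];    // absolute camera eye

// Wrong: cast the absolute coordinate to float32 first - at this magnitude
// float32 spacing is 1.0, so the fractional offsets are rounded away.
const eye32 = Float32Array.from(eye);
const badRtcEye = eye32.map((v, i) => v - origin[i]);

// Right: subtract the origin in double precision, then cast the small
// remainder to float32 - it survives intact.
const goodRtcEye = Float32Array.from(eye.map((v, i) => v - origin[i]));
```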
Also, it means starting a new type of model representation.
Seems easier to just retrofit the existing `SceneModel` and `Entity` with a lazy-object-create and multi-finalize capability, as described for (2). Here's the branch, for the record:
https://github.com/xeokit/xeokit-sdk/tree/streaming-scenemodel