Closed: petrbroz closed this issue 10 months ago
We had a similar requirement where app rules for the visibility of specific rooms/floors/areas in a scanned space needed to also be applied to the loading/order/visibility of the 3d-tiles.
We opted to do it in a couple of ways:
There may be other places to inject app logic (e.g., the frustum check), but pushing each tile's error up or down has worked pretty well so far.
Thank you @dbuck! I'm happy to see that we're not the only ones trying to tackle this kind of use case :+1:
Happy to try to help but I think I need a bit more context for some of this:
- only show elements that are part of a specific logical group (for example, "all the plumbing elements")
- only show elements with "Material" property set to "Steel", contained in a bounding box corresponding to "Floor 2"
In this case you're effectively filtering (toggling visibility of) individual meshes within a single GLB that's been loaded and displayed, correct?
what would be the best way to tell 3DTilesRendererJS during runtime that there are certain tiles that we definitely need to see, and that there are other tiles that we don't care about?
How do you know a priori which tiles need to be displayed without having downloaded the GLBs? Or which tiles are the parents of those tiles? The identifying features you want to filter on would live within the GLB files via the EXT_mesh_features extension, right?
I think some pictures and an explanation of the tileset hierarchy would help here (i.e., are these important visual features always at leaf tiles? Or is there a minimum depth at which they show up?). Though at some point you're undermining the purpose of 3d tiles by forcing it to render higher detail where the apparent error is already low enough.
Is there a reason you're not just loading a separate piece of geometry to display the toggled critical elements, since you're already going to incur all the costs of rendering the full detail anyway?
Thanks so much @gkjohnson!
In this case you're effectively filtering (toggling visibility of) individual meshes within a single GLB that's been loaded and displayed, correct?
Sorry for the insufficient context. Yes, we currently implement this slice-and-dice functionality by letting 3DTilesRendererJS load the tileset as usual, and using a simple shader discard based on feature IDs to hide elements we're not interested in. This approach has some obvious flaws: we waste resources by loading and displaying tiles that don't contain any elements of interest, and we may miss some elements of interest because they live in tiles that are not loaded/displayed due to their low priority.
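A rough sketch of the discard-based filtering idea follows. This is illustrative, not our actual code: the `buildVisibilityMask` helper and the GLSL names (`featureMask`, `vFeatureId`, `MASK_WIDTH`) are all assumptions, and it presumes the EXT_mesh_features feature IDs have been made available to the fragment shader as a varying.

```javascript
// Build a per-feature visibility mask from a filter query result. In a real
// renderer this Uint8Array would be uploaded as a 1-pixel-tall lookup texture.
function buildVisibilityMask(maxFeatureId, visibleFeatureIds) {
  const mask = new Uint8Array(maxFeatureId + 1); // 0 = hidden
  for (const id of visibleFeatureIds) mask[id] = 255; // 255 = visible
  return mask;
}

// Fragment shader chunk that discards hidden features. The uniform and
// varying names here are hypothetical.
const discardChunk = /* glsl */ `
  uniform sampler2D featureMask;
  varying float vFeatureId;

  void applyFeatureFilter() {
    float visible = texture2D( featureMask, vec2( ( vFeatureId + 0.5 ) / MASK_WIDTH, 0.5 ) ).r;
    if ( visible < 0.5 ) discard;
  }
`;
```

Every fragment still runs the lookup even when its tile contains nothing of interest, which is exactly the waste described above.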
How do you know a priori which tiles need to be displayed without having downloaded the GLBs? Or which tiles are the parents of those tiles? The identifying features you want to filter on would live within the GLB files via the EXT_mesh_features extension, right?
We have a system in place that can answer queries like "which elements of this design have the 'Material' property set to 'Steel'", with a JSON response looking like this:
{
    "<tile-id>": [<feature-id>, <feature-id>, <feature-id>, ...],
    "<tile-id>": [<feature-id>, <feature-id>, <feature-id>, ...],
    "<tile-id>": [<feature-id>, <feature-id>, <feature-id>, ...]
}
So we know which feature IDs we need, and which tiles these features are included in.
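As a small illustration of how a client might consume a result in this shape (the helper name is ours, not an existing API), the mapping converts directly into fast lookups:

```javascript
// queryResult maps tile IDs to the feature IDs of interest inside that tile,
// mirroring the JSON shape above.
function buildLookups(queryResult) {
  const importantTiles = new Set(Object.keys(queryResult));
  const importantFeatures = new Map(); // tile id -> Set of feature ids
  for (const [tileId, featureIds] of Object.entries(queryResult)) {
    importantFeatures.set(tileId, new Set(featureIds));
  }
  return { importantTiles, importantFeatures };
}
```

The `importantTiles` set can then drive loading decisions, while `importantFeatures` drives per-feature visibility within a tile.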
I think some pictures and explanation of tileset hierarchy would help here.
Here's an example 3D Tile dataset of an architectural model generated by our pipeline: sample.zip. We use a simple octree to build the tileset hierarchy. Each tile's content has been optimized using gltfpack and has feature IDs embedded in it. If the user wanted to apply an arbitrary filter to this design (e.g., "only show plumbing elements on the 1st floor"), we can find the corresponding feature IDs and tile IDs. What we're wondering is: what would be the best way to tell 3DTilesRendererJS during runtime that these tiles have "higher priority" (and should be loaded sooner if possible), and that other tiles can be skipped or even unloaded?
Here's an example 3D Tile dataset of an architectural model generated by our pipeline: sample.zip.
The tileset you've shared has a fairly unique structure - it's using the "ADDITIVE" refinement for every tile, which isn't how I've seen tilesets used all that much previously. What you're suggesting makes more sense in this context, but I don't think it makes sense in cases where the "REPLACE" refinement strategy is used. So to that end, I don't think it's practical to tell the renderer to load specific tiles, because I don't see how that can work in the general case.
This question I asked previously is still relevant:
Is there a reason you're not just loading a separate piece of geometry to display the toggled critical elements, since you're already going to incur all the costs of rendering the full detail anyway?
If the user wants to just display or highlight the pipes in the house, for example, is there a reason you can't just load and display all those models independent of the tileset renderer? You're already trying to bypass all the logic the renderer provides and performing things like fragment discard to hide and show model elements can be a fairly expensive operation.
If that's a suitable approach, then maybe there's a way to expose more of the tile loader internals so the user can add to the model cache and queue system, ensuring the same models aren't loaded multiple times, or something in that vein. This will probably be a bigger change than it sounds, though, since currently the TilesRenderer is not designed to share the same loaded model with other systems.
Yep, we're using the ADDITIVE refinement only. I agree that the use case we're discussing here doesn't make much sense for REPLACE refinement.
Is there a reason you're not just loading a separate piece of geometry to display the toggled critical elements, since you're already going to incur all the costs of rendering the full detail anyway?
The sample dataset I provided is just a small house, for simplicity. The scale of designs we're typically working with is much larger (e.g., a block of apartment buildings, or a piece of highway), so there could easily be 100s or 1000s of tiles that contain the critical elements. That's why we're hoping to still be able to rely on the TilesRenderer's data management (to unload tiles that are either outside of the frustum, or don't contain critical elements, and load tiles that are inside the frustum and do contain the critical elements).
If the user wants to just display or highlight the pipes in the house, for example, is there a reason you can't just load and display all those models independent of the tileset renderer?
Yes, we could try that.
But what if the query result is "show me everything except the pipes"?
Then the resulting scene is large and complex. A simple GLB renderer isn't great at managing that load, but the tileset renderer is designed for exactly this situation.
You're already trying to bypass all the logic the renderer provides and performing things like fragment discard to hide and show model elements can be a fairly expensive operation.
You're right, we use pixel discard and GPU picking techniques to keep the implementation simple to adopt.
The embedded feature ID is optional for the implementer - modular. If your 3D engine can take advantage of it, it can add picking (via GPU) and visual filtering (via a discard pixel shader). But it's optional.
Following up on your idea...
For the 'picking' use case:
Yes, we could switch out a single consolidated-glb with an unconsolidated glb and then use standard ray-cast picking and standard mesh rendering effects.
That would be friendlier for a 3D engine, and the majority of the scene would still be 99% consolidated GLBs plus 1 non-consolidated GLB, so the total draw-call count would be similar, but I think it would double our GLB storage cost.
For the visual filtering ("show me just the pipes"): a consolidated scene gives us draw-call performance when the query result is large (1M pipes). If I switch to a completely non-consolidated scene of GLBs, then draw-call performance will limit us to small query results (<500 pipes).
Override.json file? So we would like to keep trying to use the tileset renderer for filtering, perhaps adding some kind of "ignore these tiles" input - i.e., maybe an override.json accompanying the tileset.json.
Another idea was to simply recreate a new tileset.json that is structurally identical to the original tileset.json, but only containing the visible nodes.
@gkjohnson - thoughts?
@petrbroz
That's why we're hoping to still be able to rely on the TilesRenderer's data management (to unload tiles that are either outside of the frustum, or don't contain critical elements, and load tiles that are inside the frustum and do contain the critical elements).
Got it - so basically here's what you need:
And a couple questions:
I'll have to think through different options for how to handle this. This feels like something that could be supported with a stripped-down version of the renderer that shares a geometry cache.
@wallabyway
A lot of this seems new to this topic - we haven't been talking about GPU picking, and it's not clear to me how that impacts the feature. Are you working with @petrbroz? You're right that per-pixel fragment discard is ultimately an application concern and unrelated to this project - I just wanted to point out that it can have a big performance impact, and that loading only the sub-geometry you actually want to render is inevitably better, but I know that's a function of storage, asset generation, etc.
Override.json file?
I want to avoid creating new bespoke file formats and application-specific files making their way into the project. I'll have to take some time to think about other approaches that seem a bit more general.
Another idea was to simply recreate a new tileset.json that is structurally identical to the original tileset.json, but only containing the visible nodes.
This is something you can do now with a second renderer and error target set to 0. The only issue is that you'll load some of the same geometry twice into the scene.
Got it - so basically here's what you need:
- Use frustum culling and loading/unloading logic on a specific set of tiles/geometry to limit the memory usage.
- Don't load or display geometry that is already being displayed by the existing error-based render logic.
Pretty much, yes. Basically: if there's any tiles that are marked as "important" or "locked", use frustum culling and load/unload them to control the memory usage, and feel free to unload other tiles if needed.
Is it important that these "locked" tiles be added into the same group as the regular 3d tiles? I.e., can this be a separate renderer that lives next to your primary one?
I haven't thought of that but I don't see any problem with using a separate tile renderer if you think that would make more sense.
Do you still need / want these locked tiles to use the hierarchical frustum culling? I assume yes - but at the moment with a simple approach the full tree would have to be searched for these locked tiles down to the very bottom of the hierarchy even if just a couple are "locked".
That's a good question, something I've been thinking about as well. I'd say yes - even the important tiles should still be frustum culled to keep the memory and performance under control. That's why I prefer to call these tiles "important" (as in, more important than others) instead of "locked" (as in, must be displayed at all times).
Are you working with @petrbroz?
Yes, @wallabyway and I are working on the same research. I guess what he meant was that the reason we're currently using potentially expensive shader operations like object hiding using fragment discard (or object picking using a feature ID offscreen buffer) is to keep the client implementation simple.
How about an API to mutate the values of the tileset nodes' geometric error? At run time, I would set some nodes to zero geometric error, thus giving them the highest load priority.
Basically what dbuck said, but a formal api used at runtime.
How about an API to mutate the values of the tileset nodes' geometric error? At run time, I would set some nodes to zero geometric error, thus giving them the highest load priority. Basically what dbuck said, but a formal API used at runtime.
I've been thinking about this, and I see some issues with this approach. Let's say you have tile A which is unimportant (i.e., the tile itself doesn't contain any geometry we're currently interested in), and somewhere in the subtree of tile A there's tile B which is important (i.e., it does contain geometry we're currently interested in). If you set the geometric error of tile A to 0, you're basically saying "you will not introduce any error by not refining this tile". My understanding is that in this case the renderer would simply skip the entire subtree of tile A, and it would therefore miss the important content of tile B. So the geometric error of tile A would probably have to be non-zero (recomputed based on its children) but in that case the renderer would still load/render its (unimportant) content before loading/rendering the tile B's content...
{ "<tile-id>": [<feature-id>, <feature-id>, <feature-id>, ...], "<tile-id>": [<feature-id>, <feature-id>, <feature-id>, ...], "<tile-id>": [<feature-id>, <feature-id>, <feature-id>, ...] }
To circle back to this briefly - what is "tile-id"? Is it the name of the glb or tile content that has to be loaded? There are otherwise no required ids associated with individual tiles.
My understanding is that in this case the renderer would simply skip the entire subtree of tile A, and it would therefore miss the important content of tile B. So the geometric error of tile A would probably have to be non-zero (recomputed based on its children) but in that case the renderer would still load/render its (unimportant) content before loading/rendering the tile B's content...
This is right. And even if we can just skip tiles wholly, you'll still have to search / traverse through the tree to find the tiles you want to display based on ID, which is slow. It's also the case that tile sets can have references to other external tile sets recursively, meaning that the tiles you want may not even be present in the initially loaded set of tiles, which can make this more complicated too.
Another idea was to simply recreate a new tileset.json that is structurally identical to the original tileset.json, but only containing the visible nodes.
Is there an issue with this approach? If these are tile sets you have then this would be the simplest method, I think. We'd just need a way to prevent duplicate rendering and loading of the same tile between two tile sets. Otherwise I'd think you'd need to report target "important" tiles in addition to knowing all the parents to know where to traverse and where to stop. Which is just about the same as a new tile set, anyway.
What is "tile-id"? Is it the name of the glb or tile content that has to be loaded?
Pretty much, yes. We use a simple string schema for uniquely identifying a tile (and the corresponding GLB content) based on its indexed position within the tileset. For example, "0-2-5" refers to the 6th child of the 3rd child of the root tile.
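For illustration, resolving such an ID against the tileset JSON is just a walk down child indices. A sketch (the function name is ours; it assumes the scheme above, where the leading "0" denotes the root):

```javascript
// Resolve a tile ID like "0-2-5" to the corresponding node in a tileset JSON.
// The first segment refers to the root tile; each following segment is a
// child index at the next level down.
function resolveTileId(tilesetJson, tileId) {
  const indices = tileId.split('-').map(Number);
  let tile = tilesetJson.root; // first segment ("0") is the root itself
  for (const index of indices.slice(1)) {
    if (!tile.children || !tile.children[index]) return null;
    tile = tile.children[index];
  }
  return tile;
}
```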
Is there an issue with this approach? If these are tile sets you have then this would be the simplest method, I think.
We don't have the "filtered tilesets" available beforehand, but I guess we could create them on the fly based on the specific filter query results. We would basically need to take the original tileset JSON, remove the content property from tiles we don't need, and update the geometricError values where needed. I'm wondering - does the 3D Tiles spec take into account the possibility that a tile would have no content associated with it? 🤔
Sorry, should've double-checked the spec before posting here. Quoting from https://github.com/CesiumGS/3d-tiles/tree/main/specification#grids:
3D Tiles takes advantage of empty tiles: those tiles that have a bounding volume, but no content. Since a tile’s content property does not need to be defined, empty non-leaf tiles can be used to accelerate non-uniform grids with hierarchical culling. This essentially creates a quadtree or octree without hierarchical levels of detail (HLOD).
So, tiles with no content are covered by the spec, although for a different use case.
With that said, I believe the use case discussed in this thread (marking specific tiles of an already loaded tileset as "unimportant" during runtime) is quite different, and it would be better supported by the tile loader directly.
I'm wondering - does the 3D Tiles spec take into account the possibility that a tile would have no content associated with it? 🤔 ... So, tiles with no content are covered by the spec, although for a different use case.
Empty tiles in a tileset are supported in 3d tiles generally.
remove the content property from tiles we don't need, and update the geometricError values where needed.
Setting the error target to 0 will cause all tiles in a tileset to load regardless of geometricError value, so only unnecessary content would need to be removed.
With that said, I believe the use case discussed in this thread (marking specific tiles of an already loaded tileset as "unimportant" during runtime) is quite different, and it would be better supported by the tile loader directly.
I assume you mean marking them as "important" - I have already described above why I feel this won't work and a unique tile filename or id is not enough to do this effectively due to traversal requirements and recursive tilesets. Please see my comment here:
Setting the error target to 0 will cause all tiles in a tileset to load regardless of geometricError value, so only unnecessary content would need to be removed.
Right, but if the error target is non-zero (I believe this will be needed because the filtered tileset could still be huge), the original geometric error would no longer be correct, and it could have a negative impact on the loading order of tiles, isn't that so? Here's a simple example I have in mind:
Imagine that we filter a tileset by removing the content from all tiles except for "0-1-0" and "0-1-1". If the geometric error remains unmodified, wouldn't the tileset loader still prioritize the empty tiles because of their larger geometric error, ignoring the tiles that actually have some content to show? Sorry, this may be just me not having a good understanding of how the loader works internally.
Right, but if the error target is non-zero (I believe this will be needed because the filtered tileset could still be huge)
I don't understand. Do you want to display all the "important" tiles in the frustum or not? If so you should set the error target to 0. You would only set the error target to 0 on the tile set with all the important tiles in it.
and it could have a negative impact on the loading order of tiles, isn't that so?
Load order is determined with this function
the original geometric error would no longer be correct,
The geometric error is a quality of that tile geometry content.
Imagine that we filter a tileset by removing the content from all tiles except for "0-1-0" and "0-1-1". If the geometric error remains unmodified, wouldn't the tileset loader still prioritize the empty tiles because of their larger geometric error, ignoring the tiles that actually have some content to show?
I don't know what you mean by "prioritize". There is nothing to load with empty tiles so they take up no cache space and are skipped during traversal. You would still want to remove "empty" subtrees when generating a mirror tileset, though, since traversal still takes time.
Do you want to display all the "important" tiles in the frustum or not?
Simply put, we want to skip "unimportant" tiles entirely (to save memory), and keep using the HLOD culling for "important" tiles (to keep memory and performance under control). Btw I do realize that this is a slight shift from the original requirement, and I apologize for that. This has been a bit of a moving goal for us. The overall idea however remains the same - we're looking for the most efficient way to load and display a subset of an existing tileset.
Load order is determined with this function
Thank you, that helps. So, if I took a simplified version of this sort (first sorting by depth and then by geometric error), the order of tiles for the example tileset I showed earlier would look something like this, right?
In case of the filtered tileset example, only the tiles "0-1-0" and "0-1-1" would have content, and the others would be empty. I was assuming (perhaps incorrectly; I really need to start digging into the code 😄) that the renderer has some kind of stop condition when picking tiles from this sorted list to load and/or display. And if that's the case, I was worried the renderer could, for example, stop after the first 4 tiles - thinking it's already showing a sufficient amount of detail - while there's actually no content in those tiles.
There is nothing to load with empty tiles so they take up no cache space and are skipped during traversal.
Oh, so tiles that have no content are not even included in the sorting process, that makes sense. And it also shoots down the theory from my previous paragraph.
So with that, it sounds like loading a modified version of the tileset JSON (where we simply remove the content property from "unimportant" tiles) should be a feasible approach, assuming that the new tileset JSON can reuse the cached data that may have already been loaded for the original tileset.
assuming that the new tileset JSON can reuse the cached data that may have already been loaded for the original tileset.
Yes, this is something that's not supported right now. It would need to be figured out how to:
I took advantage of the 'contentEmpty' flag and used it to create a 'whitelist' - i.e., if the URI doesn't appear in the whitelist of URLs, then set __contentEmpty = true.
When I initialize the Renderer, I set a whiteList like so:
const tileset = new DebugTilesRenderer(url.endsWith('.json') ? url : url + '/tileset.json');
tileset.whiteList = ['0.glb', '0-0.glb', '0-0-0.glb', /* etc. */];
and in TilesRendererBase.js, around line 225:
if (uri && (!this.whiteList || this.whiteList.some(filename => uri.endsWith('/' + filename)))) {

    // "content" should only indicate loadable meshes, not external tile sets
    const extension = getUrlExtension( tile.content.uri );
    const isExternalTileSet = Boolean( extension && extension.toLowerCase() === 'json' );
    tile.__externalTileSet = isExternalTileSet;
    tile.__contentEmpty = isExternalTileSet;

} else {

    tile.__externalTileSet = false;
    tile.__contentEmpty = true;

}
When I select a new 'visual filter', I get a small list of URIs that becomes the whiteList; I set this on the TilesRenderer and trigger a preprocessNode.
This did the trick.
For example, if my 'piping' is only stored in tiles ["0-0-0.glb", "0-0-1.glb", "0-0-2.glb"], then only these tiles are loaded. I also set the SSE to a low value so the piping detail appears, but my renderer only renders 3 tiles.
I'm not sure if preprocessNode is designed to be called multiple times for the same tileset. I'm guessing not, because the method also sets some default state like tile.__loadingState = UNLOADED. @gkjohnson?
My impression from our lengthy (sorry about that) discussion here is that we will need to create a duplicate tileset.json, remove the content property from tiles we don't need, update the geometric errors (to avoid the potential issues I explained here), and replace the original tileset with the filtered one, while somehow making sure that 3DTilesRendererJS can reuse the cache from the original tileset.
Here's an example of the earlier-noted adjusting/overriding of the error calculation to give the client app more control. Tweaking the tile.__error property - to adjust the error by a multiplier, or to force it above or below the current app errorTarget - is helpful for enabling better filter/visibility control at the app level.
The criteria/example below are purely specific to this one application, and just a strawman suggesting it might also apply to others.
// Monkeypatch calculateError method to allow adjustments to the effective SSE
const origCalcError = (tilesRenderer as any).calculateError.bind(tilesRenderer);
(tilesRenderer as any).calculateError = (tile: MttrTileGroup): void => {
// get the original error to start from
origCalcError(tile);
if (this.adjustScreenSpaceError) {
tile.__error = this.adjustScreenSpaceError(tile.__error, tile);
}
};
const TILEDMESH_SETTINGS = {
/** Reduce load of tiles which app considers hidden that might be in the frustum */
errorMultiplierHiddenFloors: 0.01,
/** Reduce load of tiles which we deem hidden behind other walls */
errorMultiplierRaycastOcclusion: 0.1,
/** Artificial minimum lod level we'll allow loaded geometry to downgrade to, varies at runtime */
minLod: 0,
/** Error target, initial value, adjusts at runtime based on perf/scene size */
errorTarget: 4,
};
// Custom adjustments to the calculated SSE for a tile.
// A smaller error lets coarser LODs be used nearer the camera. E.g. halving the
// error halves the distance at which each LOD will kick in.
this.adjustScreenSpaceError = (error: number, tile: MttrTileGroup): number => {
// If neither this tile nor any of its descendants have been seen recently, demote it
if (TILEDMESH_SETTINGS.errorMultiplierRaycastOcclusion !== 1) {
const tileSeenAt = tileSightings.get(tile) ?? -Infinity;
if (tileSeenAt < sightingCount - TEXTURE_STREAM_SETTINGS.sightingMaxAge) {
error *= TILEDMESH_SETTINGS.errorMultiplierRaycastOcclusion;
}
}
// Dollhouse/floorplan modes: lower priority of faded floors
if (TILEDMESH_SETTINGS.errorMultiplierHiddenFloors !== 1) {
// de-emphasize tiles which the app has faded/fading out
    const visible = this.roomMeshesByTile.get(tile)?.some(roomMesh => roomMesh.getOpacity() > 0.6);
if (!visible) {
error *= TILEDMESH_SETTINGS.errorMultiplierHiddenFloors;
}
}
// Enforce minLOD by pretending tiles lower than that have a large error
  if (getTileLod(tile) < TILEDMESH_SETTINGS.minLod) {
error = Math.max(error, TILEDMESH_SETTINGS.errorTarget + 1e-10);
}
return error;
};
That, combined with some tweaking of the tile sorting rules in the priorityQueue (as a simpler replacement for the one in https://github.com/NASA-AMMOS/3DTilesRendererJS/blob/master/src/base/TilesRendererBase.js#L14), has performed pretty well for us, providing finer control of what's loaded on screen.
// prefer in frustum
(tile) => Number(tile.__inFrustum),
// prefer tiles which will be visible at the current view per error target
// intention: defer higher error, but not ultimately visible tiles?
(tile) => Number(tile.__error <= TILEDMESH_SETTINGS.errorTarget),
(tile) => tile.__error,
(tile) => tile.__distanceFromCamera,
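For illustration, these keys compose into a single comparator in the usual lexicographic way. This sketch is ours, not the library's code, and the real priorityQueue's convention for sort direction may differ:

```javascript
// Compose ordered key functions into one comparator: larger key values sort
// first, and ties fall through to the next key.
function makeComparator(keyFns) {
  return (a, b) => {
    for (const key of keyFns) {
      const diff = key(b) - key(a); // descending: larger keys first
      if (diff !== 0) return diff;
    }
    return 0;
  };
}

const ERROR_TARGET = 4; // stand-in for TILEDMESH_SETTINGS.errorTarget

const compareTilePriority = makeComparator([
  // prefer in frustum
  (tile) => Number(tile.__inFrustum),
  // prefer tiles visible at the current view per error target
  (tile) => Number(tile.__error <= ERROR_TARGET),
  (tile) => tile.__error,
  // direction follows the fragment above; a real queue may want to negate
  // this so that nearer tiles sort first
  (tile) => tile.__distanceFromCamera,
]);
```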
I took advantage of the 'contentEmpty' flag, and used it to create a 'whitelist ...
There's no need to modify the "preprocessNode" function, the TilesRenderer internals, or the internal __contentEmpty flag. You can use the existing onLoadTileset function and remove the content field from the nodes as necessary.
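For example, a sketch of what such a callback might do to the parsed tileset JSON (the helper name and whitelist shape are illustrative, not an existing API):

```javascript
// Remove the `content` field from every tile whose content URI is not in the
// whitelist, so only "important" tiles load. Intended to run over the parsed
// tileset JSON, e.g. from the renderer's onLoadTileset callback.
function stripUnimportantContent(tile, whitelist) {
  if (tile.content && tile.content.uri) {
    const keep = whitelist.some((name) => tile.content.uri.endsWith('/' + name));
    if (!keep) delete tile.content;
  }
  for (const child of tile.children || []) {
    stripUnimportantContent(child, whitelist);
  }
}
```

In practice you'd also want to prune subtrees that end up entirely empty, since traversing them still takes time.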
I'm not sure if preprocessNode is designed to be called multiple times for the same tileset
That's correct. It does not get called multiple times.
My impression from our lengthy (sorry about that) discussion here is that we will need to create a duplicate tileset.json, remove the content property from tiles we don't need
That's right. And that can happen on the client or on the server side. As I've mentioned, though, even after doing that you'll still probably want to do some kind of culling of the empty and intermediate bounds hierarchy, since I'd imagine that in common cases you'll have a complex tree representing a sparse set of tiles that still has to be traversed.
update the geometric errors (to avoid potential issues I explained here),
You'll have to elaborate on which issues. There shouldn't be any changes to the error value needed in order to load the tiles - you just need to set the renderer "errorTarget" to 0 on the TilesRenderer and it will load everything.
replace the original tileset with the filtered one,
I assume you mean create a new tiles renderer and hide the original "full" tile set here. Or display both if that's what you want (since you mentioned that originally).
somehow making sure that 3DTilesRendererJS can reuse the cache from the original tileset.
Yes - this is a new feature that needs to be added to the library if it's important for your use case. Otherwise when sharing a cache between two tiles renderers it will load the model twice and render it twice, though that might not be a huge issue.
If it's something you need, I'd be open to contracting work to add it. Of course, if you have proposals or would like to submit PRs, I'm happy to help those along too!
@dbuck - cool approach, +1. I'll try the new 'priority cue' change. That was the bit I missed, and it explains the trickle-down SSE effect @petrbroz was explaining in the diagram.
@gkjohnson - Yes, all of the above. ;-)
I'll try the new 'priority cue' change. That was the bit I missed, and explains the trickle down SSE effect @petrbroz was explaining in the diagram.
This is making it a lot more complicated than it needs to be. There's no need to change the geometric error value of empty internal tiles in order to load the child tiles in this case.
@gkjohnson sorry, I still don't get why updating the geometric error is not needed. Could you please help me understand that?
Let me explain the specific situation I have in mind using the diagram from earlier:
Let's say we're currently viewing the "unfiltered" tileset using a specific camera position, camera direction, and a non-zero error target. My understanding is that 3DTilesRendererJS chooses a certain subset of tiles with the highest priority that should be loaded and displayed for this particular configuration. Let's say that for the current camera position, camera direction, and error target, the renderer decides to pick the first 4 tiles with the highest priority - those would be the tiles "0", "0-2", "0-0", and "0-1".
Now, imagine that we switch to the "filtered" tileset, without changing the camera position, camera direction, or the error target. If the geometric error of individual tiles remains the same, wouldn't 3DTilesRendererJS still pick the first 4 tiles ("0", "0-2", "0-0", and "0-1") even though there's no content in them? Or does it actually skip tiles with no content? How does 3DTilesRendererJS decide how many tiles should be loaded/displayed at any given point in time?
My understanding is that 3DTilesRendererJS chooses a certain subset of tiles with the highest priority that should be loaded and displayed for this particular configuration.
When there are no empty tiles, the 3d tiles spec dictates which tiles should be loaded and displayed based on the computed error value (derived from geometric error and camera distance). For the sake of visual coherence, the tiles are loaded from the top down until the error target is met, to ensure that the time spent with missing tiles or "holes" in a tileset is minimized. The priority algorithm only determines what gets loaded first (typically the tiles whose computed error is furthest from the specified target error value).
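To illustrate the top-down rule, here's a simplified sketch. This is not the library's actual traversal: it assumes each tile carries a precomputed screen-space `__error` for the current view, and it ignores frustum culling and load scheduling entirely.

```javascript
// Walk the tile tree top-down; stop refining once a tile's computed
// screen-space error is at or below the target, otherwise recurse into its
// children. Returns the tiles whose content would be displayed.
function selectTiles(tile, errorTarget, result = []) {
  const children = tile.children || [];
  const shouldRefine = tile.__error > errorTarget && children.length > 0;
  // With ADDITIVE refinement a refined tile's own content still displays;
  // with REPLACE it is superseded by its children.
  if (tile.content && (!shouldRefine || tile.refine === 'ADD')) result.push(tile);
  if (shouldRefine) {
    for (const child of children) selectTiles(child, errorTarget, result);
  }
  return result;
}
```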
without changing the camera position, camera direction, or the error target
Why are you not changing the error target? In the beginning you wanted to display all the marked tiles in the frustum. If this is the case you just need to set errorTarget to 0.
I do recall now, though, that you mentioned in https://github.com/NASA-AMMOS/3DTilesRendererJS/issues/401#issuecomment-1770851613 that this has changed a bit, but it's not clear how. Is it the case now that you want to continue using the leaf-node "important" tiles' geometric error and not load those filtered tiles that are far away from the camera?
There is nothing to load with empty tiles so they take up no cache space and are skipped during traversal.
I should correct myself on this statement, as well. Tile traversal does stop at empty tiles if the error metrics dictate it should. It used to be the case that the loader did not and I'm misremembering. The behavior was fixed in #119.
I do recall now, though, that you mentioned in https://github.com/NASA-AMMOS/3DTilesRendererJS/issues/401#issuecomment-1770851613 that this has changed a bit but it's not clear how. Is it the case now that you want to continue using the leaf node "important" tile geometric error and not load these filtered tiles that are far away from the camera?
Yes. The initial goal was to show all important tiles but we realized that this would not be feasible due to the complexity of our datasets. Instead, our goal is to show more of the important tiles, while still keeping the performance and memory usage under control.
Now, it looks like there are two ways to achieve this goal:
The second approach seems like a better fit for our use case, as long as we could reuse the cache from the original tileset. Earlier you mentioned that this is currently not supported but looking at the README it looks like cache sharing is actually available. I'll try and explore that route.
Earlier you mentioned in https://github.com/NASA-AMMOS/3DTilesRendererJS/issues/401#issuecomment-1792096382 that this is currently not supported but looking at the README it looks like cache sharing is actually available. I'll try and explore that route.
Sharing a cache has always been possible. But that cache indexes based on the tile javascript object. So if two tilesets refer to the same geometry url then both tiles renderers will still load the same file twice, insert it into the cache twice, and add it into the scene twice (and thus it will get rendered twice).
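For anyone following along, the sketch below shows the cache/queue sharing pattern from the library's README (commented out, since it needs the package and a scene), plus a plain `Map` standing in for the cache to demonstrate the caveat: keying by the tile *object* means two tilesets referencing the same GLB URL still produce two cache entries.

```javascript
// Sharing a cache and queues between two renderers, per the library README:
//
//   import { TilesRenderer } from '3d-tiles-renderer';
//   const a = new TilesRenderer( 'filtered/tileset.json' );
//   const b = new TilesRenderer( 'unfiltered/tileset.json' );
//   b.lruCache = a.lruCache;
//   b.downloadQueue = a.downloadQueue;
//   b.parseQueue = a.parseQueue;
//
// The caveat: the cache is keyed by the tile object, not its content URL.
// A plain Map standing in for the cache makes the duplication visible:
const cache = new Map();
function addToCache( tileObject, content ) {

	cache.set( tileObject, content );

}

const tileFromTilesetA = { contentUri: 'tiles/0.glb' };
const tileFromTilesetB = { contentUri: 'tiles/0.glb' }; // same URL, different object

addToCache( tileFromTilesetA, 'glb bytes' );
addToCache( tileFromTilesetB, 'glb bytes' );
console.log( cache.size ); // 2 entries despite identical contentUri
```

Keying the cache by a hash of the content URL (as suggested below) would collapse these into a single entry, at the cost of needing reference counting so one renderer disposing a tile doesn't evict content the other still uses.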
I see, thanks for the clarification :+1: Let me dig around the code (starting here I guess), and see how we could solve that. Perhaps index the cache by some kind of a hash computed from the tile data, instead of the tile javascript object itself?
Hi there,
This issue thread is probably the closest existing one on a topic of interest we have at iconem: clipping/cropping a 3d-tiles tileset (in/out) via clipping planes, oriented boxes, or polygons
- if it is better to open a dedicated thread, please don't hesitate.
We are using your implementation of the 3d-tiles threejs renderer + other building blocks to sync multiple typologies of 3D-data: OGC-3D-tiles (hence also google 3d-cities), Potree tiled pointclouds (photogrammetric 3d-scans), gaussian splats, oriented images and orthos, etc. All these datasets live in a single threejs scene and are placed in a unified geo-referenced coordinate system.
Since there is strong overlap between our 3d-scan (for example, a monument) and the google 3d-tiles (the basemap/background of the whole area or city), we would like to filter out the 3d-tiles tileset, clipping or cropping the inside (or the outside). Some example implementations are the way the cesium js lib does it via clippingPlanes collections, oriented bbox, or polygon projected along the vertical.
The question is: do you think the procedure described here would be the way to go to accomplish this cropping/clipping? Editing the calculateError(tile)
logic to check whether the tile bbox resides within the described clipping entity, and if it doesn't, set its error to a large value. Also, is this something that could be interesting to generalize into dedicated logic within the renderer lib? As always, thanks a lot for this amazing work!
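A rough sketch of that idea follows. The intersection test itself is plain AABB math; the subclass wiring is hypothetical (the renderer's `calculateError(tile)` is an internal method, and the `clipRegion`, `tileAabb`, and `tile.__error` names here are assumptions for illustration, not a confirmed API).

```javascript
// Pure AABB intersection test; min/max given as [x, y, z] arrays.
function boxesIntersect( a, b ) {

	for ( let i = 0; i < 3; i ++ ) {

		if ( a.max[ i ] < b.min[ i ] || a.min[ i ] > b.max[ i ] ) return false;

	}

	return true;

}

// Hypothetical wiring into a subclass (untested sketch — calculateError is
// internal, and clipRegion / tileAabb / __error are assumed names):
//
// class ClippedTilesRenderer extends TilesRenderer {
//     calculateError( tile ) {
//         super.calculateError( tile );
//         if ( ! boxesIntersect( tileAabb( tile ), this.clipRegion ) ) {
//             // push the error up or down depending on whether tiles outside
//             // the clip region should be force-refined or skipped entirely
//             tile.__error = Infinity;
//         }
//     }
// }

console.log( boxesIntersect(
	{ min: [ 0, 0, 0 ], max: [ 1, 1, 1 ] },
	{ min: [ 2, 2, 2 ], max: [ 3, 3, 3 ] }
) ); // false — disjoint boxes
```

Note that error manipulation only changes which tiles get loaded/refined; actually hiding geometry inside or outside the region would still need clipping planes or a shader-side test at render time.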
Hi @jo-chemla - sounds like you're more interested in spatial clipping rather than filtering individual tiles or components for a dataset. I think it's best to make a new issue with some images and more concrete examples demonstrating what you're going for.
Hi @gkjohnson, indeed spatial clipping/cropping is exactly what I'm looking for, will open a new issue describing the details and example implementations out there!
@petrbroz I'm going to close this for now since it seems like there are no changes needed, is that right?
@gkjohnson yes, I believe this one can be closed. We're planning to explore the "modified tileset.json" approach (where instead of customizing the renderer itself we just load a modified version of the tileset.json with the unimportant nodes removed). We may still need some help with the caching support for that, but that would be a separate issue.
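For reference, the "modified tileset.json" approach could be sketched as a recursive prune over the parsed tileset tree: drop any subtree that contains no important tile. The `isImportant` predicate here is hypothetical — in practice it would be driven by the metadata/feature-ID query results described earlier.

```javascript
// Minimal sketch: prune a parsed tileset tree, keeping only subtrees that
// contain at least one "important" tile. Returns null for fully pruned
// subtrees. The isImportant predicate is a stand-in for real query results.
function pruneTileset( tile, isImportant ) {

	const children = ( tile.children || [] )
		.map( child => pruneTileset( child, isImportant ) )
		.filter( child => child !== null );

	// Drop this tile only if neither it nor any descendant is important.
	if ( children.length === 0 && ! isImportant( tile ) ) return null;

	return { ...tile, children };

}

// Usage with a toy tileset tree:
const root = {
	name: '0',
	children: [
		{ name: '0-0', important: true, children: [] },
		{ name: '0-1', children: [] },
	],
};
const pruned = pruneTileset( root, t => t.important === true );
console.log( pruned.children.map( t => t.name ) ); // [ '0-0' ]
```

An unimportant ancestor is deliberately kept when it has important descendants, so the tileset's refinement chain from the root down to the important leaves stays intact.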
Thanks for all your help and feedback!
First of all, thank you for this amazing project!
Context
We're running a research project focused on viewing extremely large CAD designs (e.g., infrastructure). We generate 3D Tiles (v1.1) datasets with glb content optimized using gltfpack, with additive refinement, and with EXT_mesh_features feature IDs that point to per-object metadata in our backend. We can successfully load and render these tilesets in a vanilla three.js app using your library.
Problem
An important part of the viewing experience that we're researching is the ability to slice-and-dice the CAD design dynamically based on its metadata, for example:
We can already identify the tiles containing elements that match these queries, and the question we're facing right now is: what would be the best way to tell 3DTilesRendererJS during runtime that there are certain tiles that we definitely need to see, and that there are other tiles that we don't care about?
Describe the solution you'd like
It would be nice if we could specify some kind of a flag (or a callback function) on selected tiles to override the standard logic that decides which tiles get loaded so that: