Closed skywalkershen closed 1 year ago
> Have you ensured that the hierarchy world matrices have been updated before running the static geometry generator? The change in the apparent transforms of some of the submeshes would easily be explained by this. Otherwise, though, I won't be able to help without example code or a live example.
I think it might not be the case.
initialize map
→ load config json
→ parse config json and load models for each site
→ on load of each site's model, run the static geometry generator to create the BVH and visualizer
The world matrices only change on map camera changes, and I'm pretty sure that before the BVH was generated this had only happened once, at map initialization.
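For context, the maintainer's point about updating world matrices can be pictured with a toy scene graph (plain JavaScript, not three.js; all names here are made up for illustration): world transforms are only recomputed on an explicit update pass, so anything baked before that pass sees stale values.

```javascript
// Minimal sketch: why world matrices must be updated before baking geometry.
// Each node stores a local 1D translation; worldX is only valid after an
// explicit update pass, analogous to Object3D.matrixWorld in three.js.
class Node {
  constructor(x = 0) { this.x = x; this.worldX = 0; this.children = []; }
  add(child) { this.children.push(child); return child; }
  updateWorldMatrix(parentX = 0) {
    this.worldX = parentX + this.x; // compose with the parent transform
    for (const c of this.children) c.updateWorldMatrix(this.worldX);
  }
}

const root = new Node(10);
const mesh = root.add(new Node(5));

// Baking before the update pass sees a stale (zero) world transform:
const staleBake = mesh.worldX; // 0 — geometry would bake at the wrong place

root.updateWorldMatrix();
const freshBake = mesh.worldX; // 15 — the correct world-space position
```

The same ordering applies in the real API: call `updateMatrixWorld` on the hierarchy before handing it to the generator.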
1. How does the BVH handle the matrix for local space? Where should I put it in my scene tree structure?
2. Why, with the same logic, do different meshes transform differently?
Now I'm adopting an octree for collision detection, referencing this example: https://threejs.org/examples/?q=fp#games_fps. For both the BVH and octree approaches, my flow is pretty much the same as described above.
From my understanding, a BVH and an octree only differ in how they index the geometry: an octree divides by space, while a BVH divides by object. This makes the octree faster to traverse, while the BVH is more agile in fitting geometry changes.
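The object-partition vs. space-partition distinction can be shown with a toy 1D sketch (plain JavaScript, not either library's real implementation):

```javascript
// Toy 1D illustration: a BVH partitions the object list, while a
// space-partitioning structure (octree-like) splits the region, so one
// box can end up referenced on both sides of a split plane.
const boxes = [[0, 2], [3, 6], [5, 6], [7, 9]]; // [min, max] intervals

// BVH: split the sorted object list at the median; each box lives in
// exactly one leaf, and each node stores the bounds of its subtree.
function buildBVH(objs) {
  if (objs.length === 1) return { objs, bounds: [...objs[0]] };
  const sorted = [...objs].sort((a, b) => a[0] - b[0]);
  const mid = sorted.length >> 1;
  const left = buildBVH(sorted.slice(0, mid));
  const right = buildBVH(sorted.slice(mid));
  return {
    left, right,
    bounds: [
      Math.min(left.bounds[0], right.bounds[0]),
      Math.max(left.bounds[1], right.bounds[1]),
    ],
  };
}
const bvhRoot = buildBVH(boxes); // bvhRoot.bounds = [0, 9]

// Space split: cut the region [0, 9] at its midpoint; [3, 6] straddles the
// plane, so it is referenced on both sides (one source of memory overhead).
const midPlane = 4.5;
const leftSide = boxes.filter(([min]) => min < midPlane);
const rightSide = boxes.filter(([, max]) => max > midPlane);
// leftSide has 2 boxes, rightSide has 3 → 5 references for 4 boxes
```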
In the beginning, I put my octreeHelper as a child of the scene. It fit well with the initial camera params, but drifted away on pan and was oddly scaled on zoom. This is apparently caused by the change in the world group's matrix on camera change.
So I adjusted my code and put my octreeHelper as a child of world. Before creating the octree, I make an inverted copy of the current world matrix and apply it to the octreeHelper. This way, I keep a record of the inverse of the world matrix from the moment the octree was generated. Since the helper is a child of world, its matrixWorld becomes the product of the current world matrix and that inverse. So even though the octree is created from the world-space matrices of the meshes, the helper still fits after the world matrix has changed, and the visual result is as expected.
```js
class Collider {
  // ...
  update () {
    let octree = new Octree(),
      { world } = this;
    this._matrixInWorld.identity();
    this._octree = octree;
    this._helper?.removeFromParent();
    this._helper?.geometry?.dispose?.();
    if (!world || !this._origMesh) return;
    // Record the inverse of the world matrix at octree build time.
    this._matrixInWorld.copy(world.matrix);
    this._matrixInWorld.invert();
    octree.fromGraphNode(this._origMesh);
    let helper = new OctreeHelper(octree);
    helper.visible = true;
    world.add(helper);
    // Cancel out the world transform recorded at build time.
    helper.applyMatrix4(this._matrixInWorld);
    this._helper = helper;
  }
}
```
The octree helper simply fits the mesh
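The cancellation described above can be checked with plain numbers. Below is a toy 1D affine sketch (not three.js; `{s, t}` stands in for a 4x4 matrix): the helper's full transform is `worldNow ∘ inverse(worldAtBake)`, which maps coordinates baked under the old world matrix to where the mesh sits under the new one.

```javascript
// Toy 1D affine transforms: M = { s, t } represents x -> s * x + t.
const apply = (m, x) => m.s * x + m.t;
const compose = (a, b) => ({ s: a.s * b.s, t: a.s * b.t + a.t }); // a ∘ b
const invert = (m) => ({ s: 1 / m.s, t: -m.t / m.s });

const worldAtBake = { s: 2, t: 3 };   // world matrix when the octree is built
const worldNow    = { s: 0.5, t: 7 }; // world matrix after pan/zoom

// The octree stores world-space positions baked under worldAtBake:
const xLocal = 4;
const baked = apply(worldAtBake, xLocal); // 11

// The helper is a child of world with invert(worldAtBake) applied, so its
// full transform is worldNow ∘ invert(worldAtBake). Feeding it the baked
// coordinate lands exactly where the mesh is under worldNow:
const helperTransform = compose(worldNow, invert(worldAtBake));
const helperPos = apply(helperTransform, baked); // 9
const meshPos = apply(worldNow, xLocal);         // 9
// helperPos === meshPos → the helper keeps fitting the mesh after camera changes
```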
But for the BVH, I'm confused about how to handle the change in the world matrix.
From my understanding, the static geometry generator merely traverses the meshes, merges their geometries, and keeps a record of attributes for diffing in the future. I fail to see why different meshes come out with differently transformed BVHs.
Sorry for not being able to provide a live example. Currently I'm multitasking on several projects, and this one is integrated with so many features; maybe I'll restructure it and come back with an example later.
> Have you ensured that the hierarchy world matrices have been updated before running the static geometry generator? The change in the apparent transforms of some of the submeshes would easily be explained by this. Otherwise, though, I won't be able to help without example code or a live example.
> This makes octree faster in traversing while bvh more agile to fit geometry changes.
This isn't incorrect. An octree should be more flexible while a BVH can provide more optimal containment and traversal. Either way the three.js octree implementation is incredibly memory intensive and I cannot recommend it for complex use cases. There is otherwise no relationship between the BVH implementation here and octree in three.js examples so I cannot help or provide insight relative to that implementation.
> How does the bvh handle matrix for local space? Where should I put it in my scene tree structure?
I'm not sure what this means. The BVH is specified in the same space as the geometry it contains. If you want to transform the geometry you have to transform any computations into the local space of the geometry.
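"Transform any computations into the local space of the geometry" can be sketched like this (toy 1D affine version, not three.js code; in the real API you would use the inverse of the mesh's `matrixWorld`): the BVH stores geometry-local coordinates, so a world-space query must be mapped through the inverse transform first.

```javascript
// Toy 1D affine transform: M = { s, t } represents x -> s * x + t.
const apply = (m, x) => m.s * x + m.t;
const invert = (m) => ({ s: 1 / m.s, t: -m.t / m.s });

const matrixWorld = { s: 2, t: 10 }; // mesh local -> world
const localBounds = [0, 5];          // what the BVH actually stores

function containsWorldPoint(worldX) {
  const localX = apply(invert(matrixWorld), worldX); // world -> local
  return localX >= localBounds[0] && localX <= localBounds[1];
}

// The geometry's world-space span is [10, 20]:
const inside = containsWorldPoint(15);  // true  (local 2.5)
const outside = containsWorldPoint(25); // false (local 7.5)
```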
> Why with same logic, different meshes transforms differently?
Again - updating the world matrix is important. It's not clear you have done this before running the static geometry generator. See the docs here and where it is called in the bvh example here.
The long write ups make it difficult to understand what you're struggling with. If adding a matrix world update is demonstrably not fixing your problem then please provide more concise and focused questions around what the issue is.
> Again - updating the world matrix is important. It's not clear you have done this before running the static geometry generator. See the docs here and where it is called in the bvh example here.
Thank you. I misunderstood what you meant by updating the world matrix before using StaticGeometryGenerator; it was the cause of the mismatch between the bvhVisualizer and the original mesh.
> Either way the three.js octree implementation is incredibly memory intensive and I cannot recommend it for complex use cases.
And yes, I managed to make collision detection work with the octree, but the browser crashes with a complicated model.
Yet another issue occurred:
I want to update my BVH to fit changes in the mesh after an animation. The initial BVH fits well, yet if I try to update it, some transformation gets applied. It seems to be related to camera zoom (I suppose from the transformation of the group "world"), but I don't really understand how that change in matrixWorld affects the BVH. If not updated, the BVH looks fine from any camera angle; if updated, the BVH no longer fits the original mesh.
The way I initialize and update bvh:
```js
class Collider {
  constructor (meshes) {
    if (!(meshes instanceof Array) && !meshes.isObject3D) throw Error('Collider input must be either mesh or array of meshes.');
    let meshArr = meshes instanceof Array ? meshes : [meshes];
    // meshArr.forEach(mesh => {
    //   mesh.traverse(child => unifyGeometryAttributes(child));
    // })
    let generator = new StaticGeometryGenerator(meshArr);
    generator.attributes = ['position']; // note: the attribute name is 'position', not 'positions'
    let geometry = generator.generate();
    // geometry.applyMatrix4(meshArr[0].matrixWorld);
    geometry.computeBoundsTree();
    let collider = new Mesh(geometry);
    collider.material.wireframe = true;
    collider.visible = true;
    meshArr[0].parent.add(collider);
    let visualizer = new MeshBVHVisualizer(collider, visualizerDepth);
    meshArr[0].parent.add(visualizer);
    this.geometry = geometry;
    this.generator = generator;
    this._colliderMesh = collider;
    this._visualizer = visualizer;
    // this._origMeshes = meshArr;
  }
  set debug (val) {
    let show = val === undefined ? !this._colliderMesh.visible : Boolean(val);
    this._colliderMesh.visible = show;
    this._visualizer.visible = show;
  }
  get debug () {
    return this._colliderMesh.visible;
  }
  update () {
    let { generator, geometry, _visualizer: visualizer } = this;
    // this._origMeshes.forEach(mesh => {
    //   mesh.updateMatrixWorld();
    // })
    generator.generate(geometry);
    this.bvh.refit();
    visualizer?.update?.();
  }
  get bvh () {
    return this?.geometry?.boundsTree;
  }
  // ...
}
```
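For reference, the refit step used above can be pictured as recomputing node bounds bottom-up without rebuilding the tree topology (a toy 1D sketch, not the three-mesh-bvh implementation):

```javascript
// Toy refit: after positions change, walk the tree bottom-up and recompute
// each node's bounds; the tree structure itself is left untouched.
function refit(node) {
  if (node.objs) {
    // Leaf: bounds come directly from the (possibly animated) objects.
    node.bounds = [
      Math.min(...node.objs.map(b => b[0])),
      Math.max(...node.objs.map(b => b[1])),
    ];
  } else {
    refit(node.left);
    refit(node.right);
    node.bounds = [
      Math.min(node.left.bounds[0], node.right.bounds[0]),
      Math.max(node.left.bounds[1], node.right.bounds[1]),
    ];
  }
  return node.bounds;
}

const leaf1 = { objs: [[0, 2]] };
const leaf2 = { objs: [[5, 9]] };
const root = { left: leaf1, right: leaf2 };
refit(root);             // root.bounds → [0, 9]
leaf2.objs[0] = [5, 12]; // "animate" some geometry
refit(root);             // root.bounds → [0, 12]
```

This is why refitting is cheap compared to a full rebuild, but also why it only works when the tree's object assignment is still a reasonable fit for the deformed geometry.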
Thank you so much for your help.
> don't really understand how that change in matrixWorld affects bvh.
The matrix world transform is used to generate a merged geometry with the world transform applied in StaticGeometryGenerator.
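In other words, baking can be pictured as pushing every vertex through its mesh's world transform before merging. A deliberately simplified sketch (plain JavaScript, 1D positions and a translation-only "matrix"; not the library code):

```javascript
// Simplified sketch of "merge with world transform applied": each "mesh"
// here is a list of 1D vertex positions plus a world translation, and the
// transform is baked into the vertices as they are merged.
function mergeWithWorldTransform(meshes) {
  const merged = [];
  for (const { positions, worldOffset } of meshes) {
    for (const p of positions) merged.push(p + worldOffset); // bake transform
  }
  return merged;
}

const merged = mergeWithWorldTransform([
  { positions: [0, 1], worldOffset: 10 },
  { positions: [0, 1], worldOffset: 20 },
]);
// merged → [10, 11, 20, 21]
```

This is why stale world matrices produce visibly mis-transformed submeshes: whatever value is in the matrix at generation time gets baked into the merged geometry.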
> if I try to update it, some transformation applies. It seems to be related to camera zoom
Camera transform is not used in the generation of a static mesh or BVH generation - unless you are doing something odd in your application.
You are asking me to guess too much about the architecture of your app. Again if you'd like more effective help please put together a repro case in something like jsfiddle so I can point out what's happening. Otherwise I don't have the bandwidth to dissect and guess at what could be going wrong.
Thank you for your help, and sorry for the trouble; I fully understand it is hard to debug what's going wrong with so many uncertainties. I'll try to wrap things up and come back with a demo for further discussion.
And may I ask for one more advice?
In my use case, I use large-scale photogrammetry 3D Tiles as the static scene to be collided against, and a few meshes as avatars/interactive objects which need more complex physics simulation. For the 3D Tiles, I only need them to act like real walls and terrain to stop the avatar from going straight through them.
I'm currently using the BVH for the 3D Tiles and other meshes used as the static scene, and cannon/enable-3d for the few interactive objects. In terms of performance, do you think this is a viable approach?
Thanks again.
> I'm currently using bvh for 3dtiles and other meshes used as static scene, and use cannon/enable-3d for the few interactive objects. In aspect of performance, do you think this is a doable approach?
I'm not familiar with mapbox or how you're rendering 3d tiles but I don't see anything inherently wrong with this.
I also recommend asking at the three.js forum or discord for more general advice around three.js and mapbox if you have more questions:
Describe the bug

Sorry for not being able to provide a live example. I'll try to explain the issue in a concise way and hope you can give me some advice. Thank you so much.
What I'm trying to achieve is a sort of third-person-control game on a map with BVH-based collision detection; the avatar should be blocked by the scene models. I'm referencing the characterMovement example and the skinnedMesh example.
Main Logic
Map scene and three.js scene synchronization: I used this threebox logic to sync the three.js scene and the mapbox scene. It puts everything in the three.js scene into a THREE.Group called world, and adjusts the matrix of the group and the camera on mapbox camera updates. So when the mapbox camera is adjusted, the three.js world and camera also change; this way, the three.js models visually stick to a given geographic position on the map.
In the world group, I wrapped every scene model and avatar in a THREE.Group respectively, and used StaticGeometryGenerator to generate the MeshBVH, since I expect to refit the BVH geometry in place after some scene model animations finish. For debugging, I added a collider and a BVH visualizer for every scene mesh, and maintained a capsule as the avatar based on your characterMovement example. The scene structure is like: (screenshot omitted)

I followed the characterMovement example to shapecast the avatar capsule against the BVH, logging the deltaVector to check whether the avatar capsule intersects with the mesh (i.e., whether it is a non-zero vector).

Expected behavior
The visual position and shape of the scene models, the BVH collider, and the BVH visualizer should be the same.
The calculated collision deltaVector should be a non-zero vector only when the avatar capsule gets adjacent to the scene mesh (visually noticeable).
Actual behavior
If I make my collider mesh (wireframed) and bvhVisualizer siblings of the original scene mesh, like in the characterMovement and skinnedMesh examples, they simply disappear.
If I make them children of the original scene mesh, they appear near the original mesh, but with weird rotation and translation. The overall shape is OK for the first model, but the doors are weirdly rotated onto another wall; the following ones are totally off. (screenshots omitted)
In the beginning, I thought it came from the local-world coordinate transformation, but I soon found out it depends on the model I use. I was generating the BVH with StaticGeometryGenerator; after checking the source code, I think the collider shape is based on the mesh worldMatrix and the merged geometry, so it should not change from model to model.
When the camera updates, if I update the BVH, it changes to a weird scale and position (screenshots omitted: the original BVH; the BVH updated after camera changes). The world group and camera change as the mapbox camera updates; I suppose that's the reason the collider matrix changes?
The calculated deltaVector is non-zero even when the avatar capsule is not colliding with or adjacent to the scene mesh. And when the camera changes, the deltaVector changes even though the avatar capsule is not moving and the BVH has not been updated. From my understanding, the deltaVector should be related to the relative position of the capsule and the BVH; if neither is moving, it should not change. (screenshot omitted: the deltaVector changes on camera change, though the avatar capsule remains unchanged.)
Platform: