Closed — gkjohnson closed this 3 years ago
@gkjohnson Hello, today I've discovered this project of yours and it's quite cool!
The reason I found your library is that I was googling how to implement rectangular selection with the BVH + raycasting method, so I was wondering if you already knew how to implement this type of algorithm. For more information about the use cases I'm trying to achieve, I've opened a whole thread at gamedev.stackexchange in case you're interested. These sorts of area-selection algorithms are obvious when doing color picking, but with BVH + raycasting not so much... at least to me. It would definitely be great to know the logic behind them in order to mimic the selection tools in existing DCC tools.
@brupelo
Hello! I'm unfamiliar with the details of something like Blender's approach to dealing with these problems -- specifically DCC tools must adjust (or rebuild entirely) the acceleration structure whenever a mesh is modified which I'm not doing here. It's also possible that triangle-level acceleration structures are not even used for this sort of interaction.
That aside for selecting triangles I would project them into screen space and detect intersections with the screen selection shape (lasso, rectangle, etc). Depending on the selection shape you can use something like the crossing-number algorithm or the separating axis theorem along with line segment intersections to detect selection. Depending on how you want selection to behave you'll want to make sure you handle corner cases like a triangle entirely encompassing a selection volume and triangles that are behind / partially clipped by the camera near plane.
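For the point-in-polygon part of that, here's a minimal sketch of the crossing-number test in plain JavaScript (the `pointInPolygon` helper and the `[x, y]` vertex format are just illustrative, not part of this library):

```javascript
// Crossing-number (even-odd) test: is a 2D point inside a polygon?
// `polygon` is an array of [x, y] vertices, e.g. a screen-space lasso shape.
function pointInPolygon(point, polygon) {
  const [px, py] = point;
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i];
    const [xj, yj] = polygon[j];
    // Count crossings of a horizontal ray cast from the point toward +x:
    // the edge straddles the ray's y, and the crossing lies to the right.
    const crosses =
      (yi > py) !== (yj > py) &&
      px < ((xj - xi) * (py - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}
```

You'd run this against each projected triangle vertex; a triangle that overlaps the lasso boundary without containing any selection vertex still needs the line-segment intersection tests mentioned above.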
Is that what you're looking for? This would be a good example to include -- perhaps at some point I'll add it.
@gkjohnson Hi and thanks for the fast response!
> That aside for selecting triangles I would project them into screen space and detect intersections with the screen selection shape (lasso, rectangle, etc). Depending on the selection shape you can use something like the crossing-number algorithm or the separating axis theorem along with line segment intersections to detect selection. Depending on how you want selection to behave you'll want to make sure you handle corner cases like a triangle entirely encompassing a selection volume and triangles that are behind / partially clipped by the camera near plane.
This is very interesting because it aligns with some of the ideas I was considering over here. OK, let me add some additional assumptions in order to reduce the complexity of the problem:
Some thoughts that come to mind are:
Now, the next problem would be... let's assume we've somehow managed to efficiently identify the screen-projected triangles. How would you then identify which ones are the visible selected triangles? Like in Blender when solid mode is active?
> Is that what you're looking for? This would be a good example to include -- perhaps at some point I'll add it.
Yeah, I definitely think your ideas are a nice first draft heading in the right direction; I hope we can refine them into a nice, fast algorithm... this problem is extremely interesting and for sure a very good use case to add to your repo. Plus, you'll find almost no documentation on the internet about this particular topic... The only places I've seen relevant code so far are Blender's & MeshLab's repos on GitHub... and even so, I'm still unsure what a proper algorithm would look like, as their repos require some understanding of their codebases that I don't have ;)
Anyway, if you have any new ideas/thoughts about this very interesting topic, please share, as I'm really interested in it.
Thanks a lot!
@brupelo
> Would it work if we project the BVH nodes to screen space to identify potential triangles living in the screen space selected region?
Sorry I wasn't clear -- yes you should use any acceleration structures you have available to quickly cull triangles and avoid per triangle calculations. That includes checking the bvh nodes against the selection shape and stopping traversal if they do not intersect. You can also check to see if a bvh node is completely encapsulated within the selection shape in which case you know all child triangles are selected.
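As a rough illustration of that traversal logic, here's a sketch against a hypothetical toy 2D BVH (this is not three-mesh-bvh's actual API -- in the library itself `shapecast` serves this role, with equivalent contained / intersected / not-intersected bounds results):

```javascript
// Toy BVH nodes in screen space: leaves are { box, triangles },
// internal nodes are { box, left, right }.
// Boxes and the selection `rect` are [minX, minY, maxX, maxY].
function classify(box, rect) {
  const [bx0, by0, bx1, by1] = box;
  const [rx0, ry0, rx1, ry1] = rect;
  if (bx1 < rx0 || bx0 > rx1 || by1 < ry0 || by0 > ry1) return 'outside';
  if (bx0 >= rx0 && bx1 <= rx1 && by0 >= ry0 && by1 <= ry1) return 'contained';
  return 'intersecting';
}

function collectSelected(node, rect, selected, testTriangle) {
  const state = classify(node.box, rect);
  if (state === 'outside') return;        // cull: stop traversal entirely
  if (state === 'contained') {            // all child triangles are selected
    collectAll(node, selected);
    return;
  }
  if (node.triangles) {                   // leaf: fall back to per-triangle tests
    for (const tri of node.triangles) {
      if (testTriangle(tri, rect)) selected.push(tri);
    }
  } else {
    collectSelected(node.left, rect, selected, testTriangle);
    collectSelected(node.right, rect, selected, testTriangle);
  }
}

function collectAll(node, selected) {
  if (node.triangles) selected.push(...node.triangles);
  else { collectAll(node.left, selected); collectAll(node.right, selected); }
}
```

The point is that the expensive per-triangle projection/intersection test only runs for leaves whose bounds straddle the selection boundary.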
> You definitely don't want to project a few dozen million triangles per operation, right?
Using the above culling techniques you won't have to perform the projection for every triangle. Alternatively you could project the selection shape into 3d and do intersections against that volume but I think this is more complicated and it's not clear to me whether or not it would be faster.
> How would you then identify which ones are the visible selected triangles? Like in Blender when solid mode is active?
Without spending a lot of time thinking about this I don't think there's any getting around GPU picking of some sort here -- whether it be rendering triangle ids or a depth buffer to check selected triangles against somehow. And as far as I can tell this is what Blender (2.82) does when using "solid mode", as well. You can test this by creating a highly tessellated shape, zooming the camera out, and trying to select it with a selection box. You'll see that some triangles within the shape are not selected likely because they are smaller than a pixel when being rendered to the buffer used for selection. Otherwise you'll wind up checking to see if a selected triangle is obscured by any others which would include finding the other triangles, performing triangle clipping, etc, which is bound to be slow even with some kind of acceleration.
You could get around the subpixel triangle issue (within reason) by adjusting the resolution based on the size of the smallest projected triangle, rendering just the sub viewport where the selection occurs, etc. I'm sure there are other things to try but again I'd have to put more time into thinking about it.
> And as far as I can tell this is what Blender (2.82) does when using "solid mode", as well. You can test this by creating a highly tessellated shape, zooming the camera out, and trying to select it with a selection box. You'll see that some triangles within the shape are not selected likely because they are smaller than a pixel when being rendered to the buffer used for selection.
That's a really nice idea -- I've already tested it and you're absolutely right! So yeah, maybe Blender is using color picking in certain cases. Here are some numbers:
- 16-bit buffer: 65,536 ids
- 24-bit buffer: 16,777,216 ids
- 32-bit buffer: 4,294,967,296 ids
So I'd assume Blender would have to use at least a 32-bit buffer behind the curtains to do this efficiently. On the other hand, considering that in "transparent mode" you're also able to select front/back/overlapping triangles, maybe color picking is just one part of the puzzle? I.e., with color picking, information is lost per pixel, as you won't get all of the triangles projected onto that particular pixel.
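For reference, here's a quick sketch of how the numbers above fall out of packing ids into a picking buffer -- these 24-bit RGB pack/unpack helpers are hypothetical, just to illustrate the idea, not Blender's or this library's code:

```javascript
// Pack a triangle id into an RGB triple for a 24-bit color-picking buffer.
// With 8 bits per channel this gives 2^24 = 16,777,216 distinct ids.
function idToRGB(id) {
  return [(id >> 16) & 0xff, (id >> 8) & 0xff, id & 0xff];
}

// Recover the id from a pixel read back from the picking buffer.
function rgbToId([r, g, b]) {
  return (r << 16) | (g << 8) | b;
}
```

In practice you'd render each triangle with its packed id as a flat color into an offscreen buffer, read back the pixels under the selection shape, and decode the ids -- which is exactly why overlapping triangles behind the frontmost one are lost.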
Anyway, that's why I'm saying this is a really hard & interesting problem; DCC tools solve it wonderfully, and it'd be amazing to know a proper algorithm for it ;)
For the sake of reference I'm posting some bits of source code I've found:
If you're aware of other source references to learn more about it please don't hesitate to share.
So yeah, in Blender there is definitely a different behaviour when toggling x-ray mode... you can also see it here: https://github.com/blender/blender/blob/master/source/blender/editors/space_view3d/view3d_select.c#L1691-L1698. That's why in "solid mode" you're missing triangles when the camera is zoomed out and you're trying to select triangles on highly tessellated meshes.
Closing in favor of #166
Give instructions in example pages
~collision detection~ https://gkjohnson.github.io/three-mesh-bvh/example/bundle/shapecast.html
~get all triangles~ https://gkjohnson.github.io/three-mesh-bvh/example/bundle/collectTriangles.html
~distance to mesh~ https://gkjohnson.github.io/three-mesh-bvh/example/bundle/distancecast.html