Mann90 closed this issue 1 year ago
More comments: one suggestion for the ray casting API:

`intersectObject(object, recursive, tolerance, toleranceType)`

The `object` could be an array of objects or a single object; thus, we can get rid of `intersectObjects`. `tolerance` is a number, and `toleranceType` could be SQUARE, CIRCLE, etc.
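A minimal sketch of what that unified signature could look like. This is hypothetical, not the actual three.js API: `normalizeTargets` and this `intersectObject` are illustrative names, and the real casting logic is stubbed out. The point is only that one method can accept either a single object or an array, replacing `intersectObjects`:

```javascript
// Coerce a single object or an array of objects to an array, so one
// method can serve both call styles (hypothetical helper).
function normalizeTargets(objectOrArray) {
  return Array.isArray(objectOrArray) ? objectOrArray : [objectOrArray];
}

// Hypothetical unified dispatch; 'SQUARE' and 'CIRCLE' are the
// toleranceType values suggested above. The actual intersection test
// is omitted -- this only shows the argument handling.
function intersectObject(objectOrArray, recursive, tolerance, toleranceType = 'CIRCLE') {
  const targets = normalizeTargets(objectOrArray);
  return targets.map((target) => ({ object: target, recursive, tolerance, toleranceType }));
}
```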
`raycaster.precision`, which is in world units, can be set by the user. As a first step, can you propose a method to set `raycaster.precision` so it is approximately equivalent to 5 pixels?
Yes, there is this example: http://threejs.org/examples/webgl_octree_raycasting.html.
How to calculate a precision value so it is approximately equivalent to 5 pixels? I have no idea. Could 5 pixels correspond to 1 world unit, or to 1,000,000, in 3D if we use a perspective camera?
It is also questionable whether we should maintain two precisions: line precision and facet precision. Why not just use one? It is rare to use different precisions to pick lines and facets. Point clouds could use the same precision as well (it seems there is currently no precision for point clouds).
> How to calculate a precision value so it is approximately equivalent to 5 pixels? I have no idea. Could 5 pixels correspond to 1 world unit, or to 1,000,000, in 3D if we use a perspective camera?
Correct. And the user may be using a different type of camera.
We can't do what you suggest if we can't make the conversion -- unless we implement picking in screen space somehow. And even that won't work in general, because the ray does not have to emanate from the camera position.
Unfortunately, I think this is a no-go.
It would be a go if we knew how to do it :)
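For the special case where the ray does emanate from a perspective camera, an approximate conversion is possible: the frustum height in world units at a given distance is `2 * d * tan(fovY / 2)`, so dividing by the viewport height in pixels gives world units per pixel at that distance. A sketch, assuming `worldUnitsPerPixel` is a hypothetical helper and not a three.js API; it breaks down exactly in the cases discussed above (arbitrary ray origins, other camera types):

```javascript
// How many world units one screen pixel spans at 'distance' in front of
// a perspective camera with vertical field of view 'fovYDegrees',
// rendered into a viewport 'viewportHeightPx' pixels tall.
// Only approximate, and only meaningful when the ray starts at the camera.
function worldUnitsPerPixel(fovYDegrees, viewportHeightPx, distance) {
  const fovY = (fovYDegrees * Math.PI) / 180;
  // Height of the view frustum, in world units, at the given distance:
  const frustumHeight = 2 * distance * Math.tan(fovY / 2);
  return frustumHeight / viewportHeightPx;
}

// E.g. a 5-pixel precision for a 50-degree camera, a 600 px tall
// viewport, and a target roughly 10 units away:
const precision = 5 * worldUnitsPerPixel(50, 600, 10);
```

Note that the result depends on the distance to the object being picked, so a single global `raycaster.precision` can only ever be tuned for one depth at a time.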
Raycasting is a bottleneck for many game-type applications. I have been working on an application where many raycasts happen at the same time against complex meshes, and performance is not sufficient for the application to remain interactive on "normal" desktop hardware. Raycasting has applications in physics calculations and visibility checks, at least for me. From this view, "pixel" has no real meaning, since it is a unit of screen projection; in fact, I would say that picking from the screen is probably the most primitive use case of raycasting, albeit a frequently needed one.

As far as implementation goes, I think what you are asking is achievable via shaders, by marking the various objects with different materials (possibly in a distinct render pass). There you could use a varying parameter to output a pixel of a specific color depending on which material is being projected (red, blue, #F01234, etc.; you can encode quite a few materials into a color). A side benefit of doing this is that you would only require a single pixel readback per cast.
I'm not sure how useful this is to you.
A general-case raycast would make little sense with pixels, as they do not exist until the fragment shader generates them. You could generate a box volume and unproject it from screen space, but I don't see any computational advantage you would gain from doing so.
It sounds like the Raycaster should not be dealing with screen-space units like pixels. GPU picking is an alternative here, and we have examples for that.
So unless we hope to add a world-space precision parameter on the raycaster, perhaps this issue can be closed?
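The core of the GPU-picking / color-ID approach described above can be sketched with two pure helpers (hypothetical names; the render-pass and `readPixels` wiring are omitted): each pickable object is rendered with a flat color encoding its id, and the pixel under the cursor is decoded back to an id.

```javascript
// Pack an integer object id into an RGB triple (one byte per channel)
// for a picking render pass. Supports up to 2^24 distinct ids.
function idToColor(id) {
  return [(id >> 16) & 0xff, (id >> 8) & 0xff, id & 0xff];
}

// Decode the RGB values read back from the picking buffer
// (e.g. via gl.readPixels) into the original object id.
function colorToId(r, g, b) {
  return (r << 16) | (g << 8) | b;
}
```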
I suggest the unit of ray cast precision, including the face precision and line precision, could be in pixels. Then the API is much easier to use. (And IMHO, that is how most users actually use a picking API: by specifying the picking tolerance in pixels.)
Moreover, the precision could be interpreted as a square or a circle internally. For example, if a user specifies a pick precision of 5, we can use a 5 pixel x 5 pixel square to intersect the objects.
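The square-vs-circle distinction above amounts to a Chebyshev test versus a Euclidean test in screen space. A sketch, assuming the candidate hit has already been projected to pixel coordinates (`withinTolerance` is a hypothetical helper, not part of three.js):

```javascript
// Screen-space tolerance test for a projected candidate hit.
// 'SQUARE' checks a tolerance x tolerance box around the pointer
// (Chebyshev distance); anything else falls back to a circle
// (Euclidean distance), compared squared to avoid a sqrt.
function withinTolerance(pointerX, pointerY, hitX, hitY, tolerance, toleranceType) {
  const dx = hitX - pointerX;
  const dy = hitY - pointerY;
  if (toleranceType === 'SQUARE') {
    return Math.abs(dx) <= tolerance && Math.abs(dy) <= tolerance;
  }
  return dx * dx + dy * dy <= tolerance * tolerance; // 'CIRCLE'
}
```

A corner point at (5, 5) illustrates the difference: it lies inside a 5-pixel square but outside a 5-pixel circle.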
By the way, for performance reasons, we may use an octree or quadtree in the code to improve picking performance. I see an octree is already used in the examples, but that is an extension.