Closed LauritsJonassen closed 8 months ago
Hi @LauritsJonassen, it sounds like you'll want to do ray casting against the geoms in the environment. For the boxes, you can cast rays against their triangles (segment-triangle intersection), but instead of a segment you'll want a ray/line intersection. If I recall correctly, "Real-Time Collision Detection" by Ericson has a lot of these derivations handy.
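As a minimal sketch of the ray-triangle test mentioned above, here is the standard Möller–Trumbore algorithm in `jax.numpy` (function name and signature are my own, not a Brax API; Brax itself does not ship this helper):

```python
import jax.numpy as jnp

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-8):
    """Möller–Trumbore ray-triangle intersection.

    Returns the distance t along `direction` from `origin` to the hit
    point, or jnp.inf if the ray misses the triangle. Branch-free, so
    it works under jit/vmap.
    """
    e1 = v1 - v0
    e2 = v2 - v0
    p = jnp.cross(direction, e2)
    det = jnp.dot(e1, p)          # ~0 when the ray is parallel to the triangle
    inv_det = 1.0 / det
    s = origin - v0
    u = jnp.dot(s, p) * inv_det   # first barycentric coordinate
    q = jnp.cross(s, e1)
    v = jnp.dot(direction, q) * inv_det  # second barycentric coordinate
    t = jnp.dot(e2, q) * inv_det  # distance along the ray
    hit = (jnp.abs(det) > eps) & (u >= 0.0) & (v >= 0.0) & (u + v <= 1.0) & (t > eps)
    return jnp.where(hit, t, jnp.inf)
```

You can `jax.vmap` this over all triangles of a mesh and take the minimum distance to get the depth for a single ray.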
I am looking to add sensory/depth perception to an agent so that it can 'see' and navigate the environment in front of it. However, Brax does not seem to have a built-in method for computing depth, such as a range-finder function.
My current idea is to have the agent cast rays in every direction, like a LiDAR scan: compute the distance from the agent to the environment (a single mesh object) along each ray, then concatenate the angle and distance of each ray into the observation. I will then need to adapt the input size of the policy network so that it matches the size of this observation space.
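The scan described above could be sketched as follows. This is only an illustration under simplifying assumptions (a single axis-aligned box as the environment, rays swept in the horizontal plane, a slab-test intersection instead of a mesh); the function names are hypothetical, not Brax APIs:

```python
import jax
import jax.numpy as jnp

def ray_aabb(origin, direction, box_min, box_max, eps=1e-9):
    """Slab test: distance along the ray to an axis-aligned box, inf on miss."""
    # Avoid division by zero for axis-parallel rays.
    inv_d = 1.0 / jnp.where(jnp.abs(direction) < eps, eps, direction)
    t1 = (box_min - origin) * inv_d
    t2 = (box_max - origin) * inv_d
    t_near = jnp.max(jnp.minimum(t1, t2))
    t_far = jnp.min(jnp.maximum(t1, t2))
    hit = (t_far >= t_near) & (t_far > 0.0)
    t = jnp.where(t_near > 0.0, t_near, t_far)  # origin inside the box -> t_far
    return jnp.where(hit, t, jnp.inf)

def lidar_scan(origin, num_rays, box_min, box_max, max_range=10.0):
    """Cast evenly spaced horizontal rays and return one clipped depth per ray.

    The resulting vector (length num_rays) can be concatenated onto the
    observation; misses saturate at max_range instead of inf.
    """
    angles = jnp.linspace(0.0, 2.0 * jnp.pi, num_rays, endpoint=False)
    dirs = jnp.stack(
        [jnp.cos(angles), jnp.sin(angles), jnp.zeros_like(angles)], axis=-1)
    dists = jax.vmap(lambda d: ray_aabb(origin, d, box_min, box_max))(dirs)
    return jnp.clip(dists, 0.0, max_range)
```

Since the ray angles are fixed relative to the agent, the depths alone already encode direction by index, so the observation size is simply the base observation plus `num_rays`.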
Does anyone have any experience with adding depth perception in Brax or a clever idea for how it can be done? Thanks