Open guihomework opened 4 years ago
Going for a 160x160 resolution doesn't make sense. The simulated sensor resolution should match the real one. With the Gaussian force distribution, the simulated sensor can be used in the same fashion as the real one to obtain hyperacuity. There is no need (and no benefit) to do this directly in Gazebo.
As discussed in our meeting, and to clarify: the 160x160 grid is actually "virtual", used only for a finer distribution of the Gaussian. There is no plan to publish to ROS a higher resolution than the real sensor can provide. The finer grid is needed to localize the Gaussian "stamp" in between the cells of the coarser grid.
The implementation currently processes each cell for each incoming contact, recomputing the projection of the force onto each taxel's normal. This is valid for complex sensor shapes, but it should be optimized for flat arrays (a single normal direction, hence a single projection).
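A minimal sketch of the flat-array shortcut described above, in Python/numpy (the function names and shapes are illustrative, not the actual plugin API): in the general case each taxel has its own normal, but when all normals are identical the projection collapses to one scalar dot product.

```python
import numpy as np

def project_per_taxel(force, normals):
    """General case: one projection per taxel (curved sensor shells).

    force: (3,) contact force; normals: (N, 3) unit taxel normals.
    """
    return normals @ force  # one dot product per taxel

def project_flat(force, normal):
    """Flat-array shortcut: all taxels share one normal, so a single
    scalar projection can be reused for every cell."""
    return float(np.dot(normal, force))

# Example: a 16x16 flat array pressed with a purely normal force.
force = np.array([0.0, 0.0, -2.0])
flat_normal = np.array([0.0, 0.0, -1.0])
normals = np.tile(flat_normal, (256, 1))

per_taxel = project_per_taxel(force, normals)   # 256 identical values
scalar = project_flat(force, flat_normal)       # computed once
assert np.allclose(per_taxel, scalar)
```

For a flat sensor this replaces N dot products per contact with one, which matters once contacts arrive at simulation rate.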
Another significant optimization could come from a limitation of the real sensors. Current real 16x16 tactile sensors with 5 mm cells can locate a contact with sub-millimetric precision, say 10 times better than the physical cell resolution. This means the processing for the simulated 16x16 sensor could be done on a discretized 160x160 "matrix" (a small image). There is no need to compute the distance from each cell to the contact point: a single lookup of the (i,j) matrix indices containing the contact point would suffice (two integer division/modulo operations). Then a pre-computed discretized Gaussian distribution, stored in a 160x160 matrix (or most probably a much smaller one, depending on the stdDev), can be "applied" at position (i,j), scaled by the force amplitude: like applying "stamps" of the Gaussian distribution at each contact location. This should scale better to large arrays, since each patch touches only a small part of the array and requires very few operations.
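The stamp scheme above can be sketched as follows in Python/numpy. All names and parameters here are assumptions for illustration (5 mm taxels, a 10x finer virtual grid, a Gaussian patch truncated at 3 standard deviations), not the actual plugin code:

```python
import numpy as np

TAXELS = 16          # real cells per side
SUBDIV = 10          # fine sub-cells per taxel (sub-millimetric localization)
FINE = TAXELS * SUBDIV   # 160x160 virtual grid
CELL = 0.005         # assumed 5 mm taxel pitch -> 0.5 mm fine pitch

def make_stamp(std_dev_cells):
    """Precompute a normalized, discretized Gaussian patch.

    std_dev_cells is the standard deviation in fine-grid cells; the
    patch is truncated at 3 sigma, so it is usually much smaller
    than the full 160x160 grid.
    """
    radius = int(3 * std_dev_cells)
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx**2 + yy**2) / (2.0 * std_dev_cells**2))
    return g / g.sum()

def apply_contact(grid, x, y, force, stamp):
    """Stamp the Gaussian, scaled by the force amplitude, at (x, y) metres.

    Only an index lookup plus a small patch addition; no per-cell
    distance computation over the whole array.
    """
    fine_pitch = CELL / SUBDIV
    i = int(y // fine_pitch)          # (i, j) lookup via integer division
    j = int(x // fine_pitch)
    r = stamp.shape[0] // 2
    # Clip the patch at the sensor borders.
    i0, i1 = max(i - r, 0), min(i + r + 1, FINE)
    j0, j1 = max(j - r, 0), min(j + r + 1, FINE)
    si0, sj0 = i0 - (i - r), j0 - (j - r)
    grid[i0:i1, j0:j1] += force * stamp[si0:si0 + (i1 - i0),
                                        sj0:sj0 + (j1 - j0)]

# Example: one contact near the centre of the 80 mm sensor.
grid = np.zeros((FINE, FINE))
apply_contact(grid, 0.04, 0.04, 1.0, make_stamp(3.0))

# Downsample the fine grid back to the real 16x16 taxel readings.
taxels = grid.reshape(TAXELS, SUBDIV, TAXELS, SUBDIV).sum(axis=(1, 3))
```

The cost per contact is proportional to the stamp size, not the array size, which is why this should cope well with large arrays.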
A PR will be prepared with these ideas.