guihomework opened 3 years ago
I did a trial in Blender to produce a taxel center with this updated algorithm:

1. center = average of the centers of all triangles in the taxel mesh, weighted by triangle surface area
2. normal = average of the normals of all triangles in the taxel mesh, weighted by triangle surface area
3. compute a ray-triangle intersection with ray(origin = center, dir = normal) for each triangle in the taxel mesh, with clipping (i.e. check whether the intersection lies within the triangle)
4. if the intersection is valid: center = intersection
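The steps above could be sketched roughly like this (Python/NumPy; the actual script lives in the non-public Blender files, so all function names here are illustrative, and the intersection test is a standard Möller–Trumbore implementation):

```python
import numpy as np

def triangle_area_and_normal(a, b, c):
    """Return (area, unit normal) of triangle (a, b, c)."""
    n = np.cross(b - a, c - a)
    length = np.linalg.norm(n)
    return 0.5 * length, n / length

def ray_triangle_intersect(origin, direction, a, b, c, eps=1e-9):
    """Moeller-Trumbore ray/triangle intersection.

    Returns the intersection point, or None if the ray misses the
    triangle (the barycentric checks implement the 'clipping')."""
    e1, e2 = b - a, c - a
    h = np.cross(direction, e2)
    det = np.dot(e1, h)
    if abs(det) < eps:          # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - a
    u = inv_det * np.dot(s, h)
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = inv_det * np.dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = inv_det * np.dot(e2, q)
    return origin + t * direction if t > eps else None

def taxel_center(triangles):
    """triangles: list of (a, b, c) vertex triples (numpy arrays)."""
    areas, centroids, normals = [], [], []
    for a, b, c in triangles:
        area, n = triangle_area_and_normal(a, b, c)
        areas.append(area)
        centroids.append((a + b + c) / 3.0)
        normals.append(n)
    # Steps 1-2: area-weighted averages over the taxel mesh.
    center = np.average(centroids, axis=0, weights=areas)
    normal = np.average(normals, axis=0, weights=areas)
    normal /= np.linalg.norm(normal)
    # Steps 3-4: project the averaged center back onto the mesh surface.
    for a, b, c in triangles:
        hit = ray_triangle_intersect(center, normal, a, b, c)
        if hit is not None:
            center = hit
            break
    return center, normal
```

For a curved (e.g. U-shaped) taxel the area-weighted centroid lies below the surface, and the final ray cast moves it back onto the mesh.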
The result is attached for L3r: old centers | new centers
Thanks. Looks good. Are you filing a new PR to include this improvement in the repo? Is the center+normal computation script already part of some repo?
We need @GereonBuescher to validate first, as the change will affect data. The scripts are part of the Blender files, which are not in a repo. I think it is a good idea to put them in a repo, but it should be internal-only.
The proximal link of the middle and ring finger indeed looks much better in the force-to-surface projected version. I have some doubt whether this could incur higher computational effort (a bottleneck for e.g. notebooks or old PCs, especially when the function runs at full speed for all sensors, and more sensors in the future). Regarding the change of the force point of effect, the shift is quite small, so there should be no big impact on a summed-up global force vector. If I understand correctly, the function is primarily for better displaying low-force conditions on U-shaped taxels. Maybe it would be worth having a checkbox in RViz to en-/disable the function in case the computational effort is high.
> I have a bit doubt if this could produce a higher computational effort
No, not at all, because this is not computed online. These contact positions are computed once offline and then stored in L*.yaml files.
I suppose @GereonBuescher thought I had solved the "live" computation of the projection point, depending on a future deformable model. In the case of a deformable model, the projection should indeed be computed live somehow, but the solution I propose is offline: it yields an ideal point of contact on the surface and not inside the bones.
Interpolation between two opposite taxels that are rather curved (the two taxels on the index proximal) will still produce a center landing inside the bone, because there is no live computation of the projection.
You are right, it is essentially improving the center of U-shaped taxels.
With "future", I meant devices with (much) more sensors. Now I understand how this is possible offline :) So it is not a projection, just the center of gravity of the U-shaped taxel is shifted (offline). I am okay with this solution.
Look at the algorithm: the "shift" is done by projecting the weighted average center back onto the surface. It is a projection, just not a live projection. Cool, I will prepare all the new taxels and create a PR.
Then it will be like this, correct?
Just to keep my simple idea alive... In order to visually see the vector at the mesh surface in low-force conditions, and to generate a more accurate contact_state (origin position), an online generation could work as depicted: a spring-like vector c (cushion) shifts the force vector outwards. At higher forces, c reduces in length (according to the sensor material) and the origin moves towards the hard bone (which happens with the Shadow Hand sensors as on the human hand).
> Then it will be like this, correct?
correct
I would actually think the vector c should be in the other direction and have its origin at the outer surface and move inwards. The default center of the contact should be at the surface, and the more the force increases, the more the center should go towards the bone. It solves the visibility for small forces and handles a virtual "cushion".
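The variant proposed here (center starting at the outer surface and moving towards the bone as force increases) could be sketched like this. Everything is hypothetical: the function name, the saturation law, and the `cushion_stiffness` parameter are illustrative assumptions, not part of any existing code or a measured sensor property.

```python
import numpy as np

def contact_center(surface_point, bone_point, force, cushion_stiffness=2.0):
    """Move the contact center from the surface towards the bone as force grows.

    alpha = force / (force + cushion_stiffness) rises from 0 (no force:
    center sits on the surface, so the vector stays visible in RViz)
    towards 1 (center approaches, but never passes, the bone point).
    cushion_stiffness (assumed, in newtons) sets how quickly the virtual
    cushion compresses."""
    alpha = force / (force + cushion_stiffness)
    surface_point = np.asarray(surface_point, dtype=float)
    bone_point = np.asarray(bone_point, dtype=float)
    return (1.0 - alpha) * surface_point + alpha * bone_point
```

At zero force the center is exactly the surface point; as the force grows it asymptotically approaches the bone, which matches the "cushion" intuition described above.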
The main concern with the live computation you suggest is that the displacement is at most 5 mm towards the bone on a human, while our human hand model is already "bulky" and not very realistic, so gaining precision as the force value changes is not yet important relative to the size error of our model. On the contrary, having the center more than 1 cm inside (I think the shift the new projection produced for the 'rp' taxel is 14 mm) is a larger difference that should be solved.
Closed via #11.
I did not want to close this issue, as it is not resolved to the liking of our colleague, who thinks a dynamic model is better. Closing it makes his idea vanish.
_Originally posted by @rhaschke in https://github.com/ubi-agni/human_hand/pull/6#issuecomment-703461319_