GentlesJan opened this issue 2 months ago
Hi GentlesJan,
Thank you for your thoughtful questions. I’ll address each of them below:
To obtain the Semantic Contact Map from the clicked points, we first compute the average contact area size of each finger from the ARCTIC and GRAB datasets. Given the clicked contact points and these per-finger contact area sizes, a simple optimal assignment algorithm establishes the specific mapping between contact points and contact regions. Intuitively, this is akin to using a hand to "color" the object: each finger's contact area is painted onto the corresponding region of the object surface. A rough sketch of this step is shown below.
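For concreteness, here is a minimal sketch of that point-to-region assignment, assuming the clicked points, the per-finger average contact areas, and sampled object surface points are given. The fingertip positions as assignment targets, the Euclidean assignment cost, and the area-matched ball radius are my assumptions for illustration, not necessarily the exact implementation in the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial import cKDTree


def build_semantic_contact_map(clicked_points, finger_tips, avg_areas, obj_points):
    """Assign each clicked point to a finger and grow it into a contact region.

    clicked_points : (K, 3) clicked contact points on the object surface
    finger_tips    : (F, 3) candidate finger positions (e.g., fingertip joints)
    avg_areas      : (F,)   dataset-averaged contact area per finger (ARCTIC/GRAB)
    obj_points     : (N, 3) sampled object surface points
    Returns a per-point label array: assigned finger index, or -1 for no contact.
    """
    # Assignment cost: Euclidean distance between clicked points and fingers
    # (one plausible choice; the paper does not specify the cost here).
    cost = np.linalg.norm(clicked_points[:, None, :] - finger_tips[None, :, :], axis=-1)
    pt_idx, fin_idx = linear_sum_assignment(cost)  # optimal one-to-one assignment

    tree = cKDTree(obj_points)
    contact_map = np.full(len(obj_points), -1, dtype=int)
    for k, f in zip(pt_idx, fin_idx):
        # Approximate the finger's average contact area with a Euclidean ball
        # of matching size (area ~ pi * r^2) around the clicked point.
        radius = np.sqrt(avg_areas[f] / np.pi)
        region = tree.query_ball_point(clicked_points[k], r=radius)
        contact_map[region] = f  # "color" these object points with finger f
    return contact_map
```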
The Tactile-Guided Constraint considers all finger-object contact pairs. It computes the predicted distances between the hand and the object at every contact point, so it does not require resolving the many-to-many correspondence into an explicit one-to-one matching; see the sketch below.
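As a rough illustration of how such a constraint can handle many-to-many correspondences without explicit matching, here is a minimal sketch, assuming the predicted hand vertices, per-finger vertex indices, and the Semantic Contact Map labels from the previous step are available. The nearest-vertex distance formulation is my assumption, not necessarily the exact term used in the paper.

```python
import torch


def tactile_guided_constraint(hand_verts, obj_points, contact_map, finger_vert_ids):
    """Hedged sketch of a contact-distance term over all finger-object pairs.

    hand_verts      : (V, 3) predicted hand mesh vertices
    obj_points      : (N, 3) object surface points
    contact_map     : (N,)   finger label per object point (-1 = no contact),
                             i.e. the Semantic Contact Map from above
    finger_vert_ids : list of F index tensors, hand vertices of each finger
    """
    loss = hand_verts.new_zeros(())
    for f, vert_ids in enumerate(finger_vert_ids):
        obj_pts_f = obj_points[contact_map == f]  # all object points mapped to finger f
        if obj_pts_f.numel() == 0:
            continue
        finger_verts = hand_verts[vert_ids]       # all vertices of finger f
        # Pairwise distances: every labelled object point vs. every finger vertex,
        # so many-to-many correspondences need no explicit one-to-one matching.
        d = torch.cdist(obj_pts_f, finger_verts)  # (N_f, V_f)
        # Pull each contact point toward its nearest vertex on the assigned finger.
        loss = loss + d.min(dim=1).values.mean()
    return loss
```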
I hope this clarifies your concerns.
Hi Wang,
Thank you for the excellent work. However, I am a bit confused about two points:
How can we obtain the Semantic Contact Map through clicked points, given that it involves a many-to-many mapping?
As mentioned in the paper, the object points and fingers in the Semantic Contact Map have a many-to-many correspondence. How can we find the correct mapping between object points and fingers within the Tactile-Guided Constraint?
I would greatly appreciate any clarification you can provide on these points.
Thank you for your time and assistance.
Best regards