hsp-iit / project-ergocub

A demonstration of human-robot interaction on the ergoCub through vision.

Additional reactions beyond wave and handshake #12

Open steb6 opened 1 year ago

steb6 commented 1 year ago

At the moment, the actions inside the Support Set (and thus the ones we are able to recognize) are the following:

It would be nice to recognize other actions, but we need to consider that the Action Recognition module currently works with 3D skeletons, which contain no information about objects or hands. However, we can try some other actions.

Anyone can add here additional actions that would be nice to have:
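For context, the limitation mentioned above comes from the input format: the recognizer only sees 3D joint positions per frame, so any action whose identity depends on an object or on finger articulation is ambiguous. A minimal sketch of this representation (the joint list and clip length here are illustrative, not the module's actual skeleton definition):

```python
import numpy as np

# Hypothetical body-joint layout: no fingers and no objects are encoded.
JOINTS = ["head", "neck", "r_shoulder", "r_elbow", "r_wrist",
          "l_shoulder", "l_elbow", "l_wrist", "hip", "r_knee", "l_knee"]

def make_clip(num_frames=30):
    """A skeleton clip is just a (T, J, 3) array of xyz joint positions."""
    return np.zeros((num_frames, len(JOINTS), 3))

clip = make_clip()
# Everything the recognizer can use is in this tensor: T frames x J joints x xyz.
# "Pick up a cup" and "pick up nothing" look identical here, because the cup
# (and the fingers grasping it) are simply not part of the input.
```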

steb6 commented 1 year ago

The model can now work with more (or fewer) than 5 actions and with a variable number of examples for each class. Here is an example of the current support set, where I added some of the actions cited above:

[Image: current support set]

During this modification, I also discovered the following issue: https://github.com/hsp-iit/workbook-stefano-berti/issues/3
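To illustrate the idea of a variable-size support set, here is a minimal sketch of nearest-prototype classification: each class may have any number of example embeddings, a class prototype is the mean embedding, and a query is assigned to the most similar prototype. The embeddings and the cosine-similarity matching are assumptions for illustration, not the module's actual architecture.

```python
import numpy as np

def classify(query_emb, support_set):
    """Nearest-prototype classification over a variable-size support set.

    support_set: dict mapping action name -> list of embedding vectors,
    with any number of classes and any number of examples per class.
    """
    best_label, best_score = None, -np.inf
    q = query_emb / np.linalg.norm(query_emb)
    for label, examples in support_set.items():
        proto = np.mean(examples, axis=0)      # class prototype
        proto = proto / np.linalg.norm(proto)  # unit-normalise
        score = float(q @ proto)               # cosine similarity
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score

# Toy embeddings: three classes with different numbers of examples each.
support = {
    "wave":      [np.array([1.0, 0.0, 0.0]), np.array([0.9, 0.1, 0.0])],
    "handshake": [np.array([0.0, 1.0, 0.0])],
    "clap":      [np.array([0.0, 0.0, 1.0]),
                  np.array([0.1, 0.0, 0.9]),
                  np.array([0.0, 0.1, 1.0])],
}
label, score = classify(np.array([0.05, 0.0, 1.0]), support)  # -> "clap"
```

Adding a new action then only requires appending its examples to the dict; nothing about the classifier itself is tied to a fixed number of classes.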

steb6 commented 1 year ago

In the latest experiment, this is the support set:

[Image: latest support set]

This is the inference result:

https://user-images.githubusercontent.com/24273206/227920204-1330eb92-ca6c-4199-afb3-32f66de4d0af.mp4