According to the protocol, we would need to include an "occlusion" within the environment. Also, it's important to set up the point of view of the robot programmatically.
It is also important to make the speech and the facial expressions work on the simulated R1.
The virtual demo is now working and the related code can be found in the branch feat/virtual-robot.
Instructions for installing assistive-rehab, along with its dependencies, can be found here. The otherwise optional dependencies TensorFlowCC, fftw3 and GSL are required for this demo.
Replace <install-prefix> with the absolute path where you want to install the project and run:
```
git clone https://github.com/robotology/assistive-rehab.git
cd assistive-rehab
git checkout feat/virtual-robot
mkdir build && cd build
cmake .. -DCMAKE_INSTALL_PREFIX=<install-prefix>
make
make install
```
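After installation, the install tree typically has to be made visible to YARP so that the binaries and the application template can be found. A minimal sketch, assuming a standard ICUBcontrib-style layout (the exact paths depend on your setup):

```
# make installed binaries and YARP application files discoverable (paths are assumptions)
export PATH=$PATH:<install-prefix>/bin
export YARP_DATA_DIRS=$YARP_DATA_DIRS:<install-prefix>/share/ICUBcontrib
```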
The facial expressions are available in the branch feat/face-expressions:

```
git checkout feat/face-expressions
```
The demo also relies on yarpOpenPose and actionRecognizer. The folder app/scripts contains the template AssistiveRehab-virtual.xml.template, which includes all the relevant modules.
The template is installed in <install-prefix>/share/ICUBcontrib/templates/applications.
You can copy the template into one of the YARP data directories, remove the .template extension and customize the XML by defining the cuda-machine node, as sketched below.
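For reference, a possible way to do this from the command line; the destination directory and the node name below are only examples, so use your own YARP applications directory and the actual name of the GPU machine on your network:

```
# copy the installed template into a local YARP applications directory (example path)
mkdir -p ~/.local/share/yarp/applications
cp <install-prefix>/share/ICUBcontrib/templates/applications/AssistiveRehab-virtual.xml.template \
   ~/.local/share/yarp/applications/AssistiveRehab-virtual.xml
# then edit the xml and set the <node> of the GPU modules (e.g. yarpOpenPose, actionRecognizer)
# to the actual name of your cuda-machine
```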
To run the demo, first run yarpserver.
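For example (the detached server and the check are optional; this only verifies that the name server is reachable):

```
yarpserver &          # start the YARP name server
yarp detect --write   # optional: check that the name server is reachable and save its contact info
```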
Connect the RealSense to your laptop.
Open yarpmanager, run the Assistive_Rehabilitation_Virtual_App and connect.
The simulation environment with R1 should appear as follows:
You can start the interaction using the API, as described in #199.
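The actual port names and commands are those documented in #199; purely as an illustrative sketch of how such a command could be sent from the terminal (the port name /interactionManager/rpc and the start verb are hypothetical placeholders, not the real API):

```
# hypothetical example: send a command to the interaction manager via YARP RPC
echo "start" | yarp rpc /interactionManager/rpc
```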
cc @ddetommaso @PCH313
We want to replicate the Y2M2 demo offline, with gazebo showing the exercise. This will allow us to do a back-to-back comparison between the interaction with the real robot and its visual counterpart, in order to evaluate the impact of the robot's embodiment on the user.