vvasco closed this issue 1 year ago.
I report here the instructions to run the demo using docker.
The current development is on the branch feat/docker. Once merged, I will add the instructions to the official website.
To run the demo:

1. Start yarp:
   ```
   yarpserver
   yarprun --server /r1-base
   yarprun --server /r1-console
   yarprun --server /r1-face
   yarprun --server /r1-torso
   ```
2. Run yarprobotinterface on r1-base:
   ```
   cd /usr/local/src/robot/robots-configuration/R1SN003
   yarprobotinterface
   ```
3. On r1-console, open the yarpmanager and run the application AssistiveRehab_TUG, which includes the camera and iSpeak (the face is not powerful enough to use docker).

On iiticublap194:

On r1-base:

Note: the connections with the compose take some time. They can be checked in the yarpmanager by refreshing the application.
Current issues:

- the camera resolution is 424x240 px, while cer_gaze-controller uses 320x240 px. We need a conf file for the controller relying on 424x240 px.
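As a rough sketch of how conf values could be derived, pinhole intrinsics scale linearly when an image is purely resized; the helper and the calibration numbers below are hypothetical, not the actual cer_gaze-controller configuration:

```python
def scale_intrinsics(fx, fy, cx, cy, from_res, to_res):
    """Rescale pinhole intrinsics when the image is resized from
    from_res (w, h) to to_res (w, h). Valid for a pure rescale only,
    not for cropped sensor modes."""
    sx = to_res[0] / from_res[0]
    sy = to_res[1] / from_res[1]
    return fx * sx, fy * sy, cx * sx, cy * sy

# Hypothetical 640x480 calibration brought down to 320x240:
fx, fy, cx, cy = scale_intrinsics(600.0, 600.0, 320.0, 240.0,
                                  (640, 480), (320, 240))
```

Note that 424x240 has a different aspect ratio than 640x480, so it is not a pure rescale of it: the 424x240 intrinsics should be read from the device rather than derived this way.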
Yesterday we tried to set up the demo on the robot with @mfussi66, @elandini84 and @ste93. We had some issues in detecting the start-line using a camera resolution of 424x240 px.
The estimation in the 2D image was always wrong (the line and the robot were placed in approximately the same position as the tests done in July, i.e. line entirely visible including the white part and robot neck pitch at ~ 22 degrees):
When using the resolution of 640x480 px the estimation was very good instead:
With this resolution I had many issues in the past with yarpOpenPose being too slow, so I would avoid it.
I tried again in the late afternoon with 424x240 px and the line estimation was good, using the maximum neck pitch (~28 degrees) and placing the robot farther from the line than usual.
Maybe this was due to the combination of the camera / robot position and the changed light conditions? Maybe the color of the floor affects the detection? I don't get what changed compared to the tests done in July, though.
I did some more tests using the standalone realsense (D415) at a resolution of 424x240 px, placing the camera in different positions and changing the line position to vary the light conditions. The results were always good (a few example screenshots were collected in a table).
We will do another round of tests on Monday morning on the robot and verify this again. Maybe we just need to move the robot neck further down than usual.
I've created the label https://github.com/robotology/assistive-rehab/labels/%F0%9F%8E%93%20prj-etapas for the tasks aimed at the project.
About using 640x480 and the interaction with openpose: maybe you can downsize the images again when feeding them to openpose?
> about using 640x480 and the interaction with openpose: maybe you can downsize the images again when feeding them to openpose?

I think this could be a valid option; @ste93 also proposed the same. I would give the lower resolution a second chance anyway and, if it does not work, go with this alternative.
Upon @ste93's suggestion, we are now confident that the wifi button shows delay because the network it connects to does not have internet access; therefore the robot must be connected to the internet to lower the delay to 1-3 s (5 s max). This is most likely caused by the manufacturer's software.
With @mfussi66 @ste93 @elandini84 we ran a few tests in the last days. This is the list of issues we encountered and a description of how we solved / bypassed them:
- The realsense did not list the 424x240 resolution for depth (supported resolutions can be listed using rs-enumerate-devices). We changed the port it was connected to and at some point it started working (even using the same port that was not working before).
- The 424x240 depth images had artifacts (horizontal bands) which made them unusable (640x480 was fine instead). Switching off the lights did not help. We started using the realsense mounted in the robot head, which supports the 320x240 resolution and does not have artifacts.
- Sometimes we still had issues when detecting the lines. Relaunching lineDetector resulted in a good estimate without changing the relative position line / robot.
- We had bandwidth problems; launching docker-compose-console on the second PC solved the issue. We saved bandwidth by compressing the streams, e.g. with zfp, which gave us some issues.
- In skeletonRetriever and lineDetector we open the rgbd device to retrieve the camera parameters. However, we noticed that it keeps receiving depth images from the camera, which we don't use (we get them from yarpOpenPose). Closing the rgbd device after the camera parameters are retrieved solves this issue.

We finally managed to run the demo successfully several times. This is a video showing the functionality (very bad quality, but it's just to show the current status):
https://user-images.githubusercontent.com/9716288/190361504-ad5eb2d3-b0ce-4b89-8dd1-85b805d2dd43.mp4
Note: @mfussi66 raised his hand from the beginning to have the robot focus on him rather than on iCub3 (which is also detected as a skeleton).
I'll open the PRs for fix/etapas-demo and feat/docker and update the instructions on the website.
Open points:
Nice progress 👍🏻
A few comments below.
> sometimes we still had issues when detecting the lines. Relaunching lineDetector resulted in a good estimate without changing the relative position line / robot
I think this still represents a problem to solve. If it depends on the resolution of the camera somehow (to be clarified) then we should definitely consider increasing the resolution and/or decide which camera to pick (remember that we need to account for the correct transformation if we go with the top-head camera). In this respect, this further comment holds.
Also, the choice of the camera goes along with the quality of the data we can estimate, which is relevant to the experiment. I believe this will be tested in the upcoming days.
As per https://github.com/robotology/assistive-rehab/issues/306#issuecomment-1246971409, we can use the wifi button with an acceptable latency only if the robot is connected to the internet. We connected it through an ethernet cable.
I was told that the latency may still reach 3 seconds in some conditions, which is kind of slow.
Connecting the robot to the internet through an ethernet cable is not ideal when the robot navigates. @ste93 told us that they used to have a dedicated mobile phone connected to the IIT wifi, with the robot connected in tethering through usb-c. The phone was placed on the robot with a phone armband.
This could be a good alternative indeed.
We also need to address the problem with the wheels' grip material: https://github.com/icub-tech-iit/tickets/issues/2355 (⚠️ private link).
We arrived yesterday in the robot arena and the grip was glued back on (not sure who did it), it seemed quite sturdy and reliable.
> We arrived yesterday in the robot arena and the grip was glued back on (not sure who did it), it seemed quite sturdy and reliable.
Maybe @fbiggi?
Speaking with @maggia80, it came out that we could think of redoing the wheels with a proper gripping footprint. I don't know how long it would take, though.
> sometimes we still had issues when detecting the lines. Relaunching lineDetector resulted in a good estimate without changing the relative position line / robot

> I think this still represents a problem to solve. If it depends on the resolution of the camera somehow (to be clarified) then we should definitely consider increasing the resolution and/or decide which camera to pick (remember that we need to account for the correct transformation if we go with the top-head camera). In this respect, this further comment holds.
I agree this has to be fixed. This happened with both cameras (the one on top of the head and the one inside the robot's head) at low resolution. It's very weird to me that lineDetector was not able to estimate the line and that, after relaunching the module, the estimate was good, given that the position of the robot and the line did not change and the light conditions were the same. Probably we can go for this option then.
> I was told that the latency may still reach 3 seconds in some conditions, which is kind of slow.
Yes, sometimes this also happened, and it used to happen in the past too. I think the latency gets worse when the battery charge starts decreasing. Before running the tests we could make sure that the button is completely charged.
> It's very weird to me that lineDetector was not able to estimate the line and after relaunching the module the estimate was good, given that the position of the robot and the line did not change and the light conditions were the same.
Some "memory effect"? Meaning that throughout operations we run code branches that we don't visit at startup, modifying the logic of the check somehow, slightly but decisively sometimes? If so, it's not even guaranteed that increasing the resolution will be resolutive. Just speculating.
> Yes sometimes this also happened, and it used to happen also in the past.
👍🏻
> I think the latency gets worse when the battery charge starts decreasing. Before running the tests we could make sure that the button is completely charged.
Ok, so we could monitor this behavior by trying to do the experiments always with a good charge level. We can also buy a second button, just in case. What do you think?
> they used to have a dedicated mobile phone connected to IIT wifi and the robot connected in tethering through usb-c. The phone was placed on the robot through a phone armband
It's possible, of course, but do you have issues using R1's standard router on a desk? Here is the connection diagram:
```
ROBOT ---- wifi ------ router ------ internet
                          |---------- laptop1 (no wifi)
                          |---------- laptop2 (no wifi)
                          etc....
```
> It's very weird to me that lineDetector was not able to estimate the line and after relaunching the module the estimate was good, given that the position of the robot and the line did not change and the light conditions were the same.

> Some "memory effect"? Meaning that throughout operations we run code branches that we don't visit at startup, modifying somehow the logic of the check, slightly but decisively sometimes? If so, it's not even said that increasing the resolution will be resolutive. Just speculating.
I need to investigate this. So far at higher resolution I've never seen the same issue, but it's also true that I ran fewer tests.
> Yes sometimes this also happened, and it used to happen also in the past.

> 👍🏻

> I think the latency gets worse when the battery charge starts decreasing. Before running the tests we could make sure that the button is completely charged.

> Ok, so we could monitor this behavior by trying to do the experiments always with a good charge level. We can also buy a second button, just in case. What do you think?
Definitely. I have another myStrom button (I bought it a long time ago for testing). We can configure it on the robot wifi and alternate it with the other one.
> they used to have a dedicated mobile phone connected to IIT wifi and the robot connected in tethering through usb-c. The phone was placed on the robot through a phone armband

> It's possible, of course, but do you have issues using R1's standard router on a desk? Here is the connection diagram:
> ```
> ROBOT ---- wifi ------ router ------ internet
>               |---------- laptop1 (no wifi)
>               |---------- laptop2 (no wifi)
>               etc....
> ```
thanks @randaz81, that would be another possibility
> I have another mystrom button (I bought it long time ago for testing). We can configure it on the robot wifi and alternate it with the other.
Cool! Let's do it 👍🏻
About the microphones: as you probably know, on the 5gtours project we used an external mic mounted on the head of R1. It worked OK in our experiments, provided people spoke with the correct timing and within the microphone's "field of view". We have also been testing other OEM microphones for better integration, but I think this may still need some time.
nice progress though!
Hi @mfussi66
At the end of the day, could you make a summary of the current status?
This morning we tackled the motionAnalyzer: we started off by tweaking the setup of the yarp modules to save and stabilize the bandwidth as much as possible, to make sure that the skeleton data would be reliably received. However, the step length analysis is still not reliable. We will check if changing the skeletonRetriever parameters can help, but we are having doubts about the reliability of the step length computation.
> However, the step length analysis is still not reliable. We will check if changing the skeletonRetriever parameters can help, but we are having doubts about the reliability of the step length computation.
A couple of related resources:
It may be convenient to record a data stream to recreate the skeleton on the same data offline and analyze what's happening in detail, with approaches similar to those in #178 and #279.
The effect of tuning the skeletonRetriever's parameters could be judged offline too.
For the record, these are the parameters we have used in simulation for the paper:
```
skeletonRetriever --camera::fov "(54 42)" --camera::remote /SIM_CER_ROBOT/depthCamera --depth::kernel-size 3 --depth::iterations 1 --depth::min-distance 0.5 --depth::max-distance 10.0 --filter-keypoint-order 5.0
```
These params are for simulation; don't we have analogous ones for the real TUG experiments that we conducted in the kitchen?
Be careful, as filter-keypoint-order is int32, so we'd better specify --filter-keypoint-order 5 to avoid incurring the "conversion hell".
I mentioned the parameters used in simulation because we benchmarked the metrics on that scenario. For the real TUG experiments in the kitchen I used these values, but it was with a different camera, we might need some fine-tuning.
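For reference, the camera::fov values map to focal lengths through the pinhole model, f = (size/2) / tan(fov/2). A small sketch of the conversion; the 320x240 resolution below is just an illustrative assumption, not a value taken from the experiments:

```python
import math

def focal_from_fov(width, height, hfov_deg, vfov_deg):
    # Pinhole model: f = (size / 2) / tan(fov / 2)
    fx = (width / 2.0) / math.tan(math.radians(hfov_deg) / 2.0)
    fy = (height / 2.0) / math.tan(math.radians(vfov_deg) / 2.0)
    return fx, fy

# fov "(54 42)" at an assumed 320x240 resolution:
fx, fy = focal_from_fov(320, 240, 54.0, 42.0)
```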
Here is a dataset dump that can be used to replay the yarpopenpose and depth data, so offline processing can be performed:
https://github.com/robotology/assistive-rehab-storage/tree/openpose (Edited link)
I created a branch (https://github.com/robotology/assistive-rehab/tree/feat/replayer) in which I added a docker-compose-replay, that launches only the components for post-processing, mainly yarpDataPlayer and motionAnalyzer.
Instructions for posterity on how to use docker-compose-replay:

1. `docker-compose -f docker-compose-replay.yml up`: the compose will launch a container with a `yarprun --server /replayer`, plus skeletonRetriever
2. open `yarp rpc /motionAnalyzer/cmd` and launch the commands
Nice work @mfussi66 👍🏻 My only point is that it'd be much better to post the dataset on https://github.com/robotology/assistive-rehab-storage (LFS enabled).
To recap the past days:

- motionAnalyzer is now able to work offline; the relevant PR was merged: https://github.com/robotology/assistive-rehab/pull/318
- we modified navController to make it work offline: https://github.com/robotology/assistive-rehab/commit/9b8dd3f263bae206346b97dd06eab03028bf1089. It was tested successfully offline; we will open a PR when we run a TUG demo.

On Friday, I inspected the skeletonRetriever skeleton data structure that is output towards the skeletonViewer, and noticed that the computed ankle distance is compatible with the expected value while taking a known step. E.g., if the step I took was 50 cm, skeletonRetriever would return ankle poses that result in a ~44 cm step.
Besides the 10% discrepancy, the results were consistent when taking other steps.
Therefore, today @ste93 and I took a look at the motionAnalyzer. We did not really want to touch the code, but we could not really understand why some specific operations of the step length estimation were performed.
So @ste93 simplified the approach and added a little bit of filtering to the FindPeaks function (actually there is already a prefilter that we could retune), and now the step length waveform follows the person much better 👍 I'll show a video of it ASAP.
However, what needs to be ironed out now is that the error wrt the ground truth increases as the step length increases. This happens in the skeletonRetriever too. But playing around with the parameters might be a solution.
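Not the actual FindPeaks code, but a minimal sketch of the idea (a moving-average prefilter followed by strict local-maximum detection); the function names, window size, and threshold are illustrative only:

```python
def smooth(signal, window=3):
    """Moving-average prefilter (window truncated at the edges)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def find_peaks(signal, min_height=0.0):
    """Indices of strict local maxima above min_height."""
    return [i for i in range(1, len(signal) - 1)
            if signal[i - 1] < signal[i] > signal[i + 1]
            and signal[i] > min_height]

# Toy step-length waveform with two strides, the second one larger:
raw = [0.0, 0.2, 0.6, 0.2, 0.0, 0.3, 0.9, 0.3, 0.0]
peaks = find_peaks(smooth(raw), min_height=0.2)
```

The min_height threshold is what rejects small ripples between strides; tuning it trades missed short steps against spurious peaks.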
An update regarding the past days:
Although very simple scenarios are handled successfully (single steps, perfectly placed in front of the robot), the evaluated metric gets less reliable when the person walks normally, putting one foot in front of the other. We did some tests and I extracted the positions of the ankles. The test was the following:
This is the resulting skeleton, as seen by the skeletonViewer. The grid SHOULD be spaced 0.1x0.1 m, but I need to verify that in a static scenario. The interesting part of the video starts at 00:40.
By reading the position of the ankles, I get the following plots.
Some comments:
> This is the resulting skeleton, as seen by the skeletonViewer. The grid SHOULD be spaced 0.1x0.1 m, but I need to verify that in a static scenario.
I just verified it as a sanity check: what we see in the skeletonViewer are subjects in a space expressed in meters. If I place myself statically 3 m from the robot, I can see in the virtual world that I am 3 squares away from the robot. Therefore the grid is 1m x 1m, and we can use a smaller resolution, like 0.1m x 0.1m, to further check the step distances in the skeletonViewer.
Since there might be a nonlinear mapping from the realsense and the real world, it could help to do this sanity check also at different increasing distances.
If there's a need for a look-up table to compensate for the nonlinearities, @randaz81 suggests that we could use the LIDAR to do the remap automatically online.
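A minimal sketch of such a look-up-table remap, using piecewise-linear interpolation; the calibration pairs below are made up for illustration, the real ones would come from the camera / LIDAR comparison:

```python
import bisect

# Hypothetical calibration pairs: (camera depth, LIDAR ground truth), meters.
TABLE = [(1.0, 1.02), (2.0, 2.08), (3.0, 3.20), (4.0, 4.45)]

def correct_depth(measured):
    """Remap a camera depth reading onto the LIDAR-calibrated scale
    with piecewise-linear interpolation (clamped at the table ends)."""
    xs = [m for m, _ in TABLE]
    ys = [t for _, t in TABLE]
    if measured <= xs[0]:
        return ys[0]
    if measured >= xs[-1]:
        return ys[-1]
    i = bisect.bisect_right(xs, measured)
    x0, x1 = xs[i - 1], xs[i]
    y0, y1 = ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (measured - x0) / (x1 - x0)
```

numpy.interp would do the same in one call; the plain-Python version is shown only to keep the sketch dependency-free.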
Extensive tests in the past week highlighted the necessity of using the realsense D450 mounted on top of the robot head, since the one inside has a range of only 3 meters. The main culprit of the inexact metrics was indeed the lower-range camera.
Additionally, it was useful to project the evaluated metric (for example the step length) on the skeleton planes: sagittal, coronal, and transverse. This is now possible, although we need to be careful about the orientation of the person wrt the robot.
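The projection idea can be sketched as follows: decompose the ankle-to-ankle displacement in the ground plane along the person's forward (sagittal) axis into the step length, and along the lateral (coronal) axis into the step width. The function and the sample numbers are illustrative, not motionAnalyzer code:

```python
import math

def step_components(ankle_vec, forward):
    """Split an ankle-to-ankle displacement (x, z in the ground plane)
    into step length (along the person's forward axis) and step width
    (along the lateral axis)."""
    norm = math.hypot(forward[0], forward[1])
    fx, fz = forward[0] / norm, forward[1] / norm
    lx, lz = -fz, fx  # lateral axis: forward rotated by 90 degrees
    length = abs(ankle_vec[0] * fx + ankle_vec[1] * fz)
    width = abs(ankle_vec[0] * lx + ankle_vec[1] * lz)
    return length, width

# Person walking straight along +z, ankles 0.5 m apart along the
# walking direction and 0.2 m apart laterally:
length, width = step_components((0.2, 0.5), (0.0, 1.0))
```

This makes explicit why the person's orientation wrt the robot matters: the forward axis must be estimated correctly for the split to be meaningful.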
See the video below. It seems that the more the person faces the robot, the more reliable the metrics are. This should be consistent with OpenPose, since a person standing sideways would have half of the body occluded wrt the camera.
In the scope, red is the step length and green is the step width:
As you can see when walking towards the robot, the metrics are accurate.
I think for the TUG demo we could keep the scenario of the person walking towards the robot, so that the whole pipeline tests can resume.
Latest trials revealed that it could be more robust to measure the step distance of the subject as the norm of the cartesian distance between the ankles. In this way, we could mitigate the metric's sensitivity wrt the subject's orientation. See the video here:
And the related PR: https://github.com/robotology/assistive-rehab/pull/323
Next, I will see if it's possible to change the metric by sending commands to the motionAnalyzer with selectMetricProp.
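In formula terms, this metric is just the Euclidean norm of the ankle-to-ankle vector, which does not change if the subject rotates in place. A tiny sketch with made-up ankle positions (the coordinates are purely illustrative):

```python
import math

def step_distance(ankle_l, ankle_r):
    """Step distance as the norm of the cartesian distance between
    the two ankle keypoints (3D positions, meters)."""
    return math.dist(ankle_l, ankle_r)

# Hypothetical ankle positions from the retrieved skeleton:
left = (0.10, 0.0, 0.0)
right = (-0.10, 0.0, 0.48)
d = step_distance(left, right)
```

Being rotation-invariant, the value mitigates the metric's sensitivity to the subject's orientation, at the cost of mixing step length and step width into a single number.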
Additionally, yesterday we identified the cause of the horizontal bands plaguing the realsense depth data: it was the selected 424x240 resolution, which the device nonetheless accepted. We resorted to using 640x480, which was perfectly fine.
On thursday we launched the demo successfully after implementing the many fixes in the PRs:
Though we will check again whether the event collection properly records all the agreed events.
Nice!
It seems though that when @ste93 turned around to reach back the seat the system got paused somehow.
Also, not sure about the following points:
Did you notice that too?
looks great! I also had issues understanding the plot...
> Nice!
> It seems though that when @ste93 turned around to reach back the seat the system got paused somehow.
> Also, not sure about the following points:
> 1. Is the graph on the left showing the number of steps correctly?
It shows the step length which might need a little more filtering for better highlighting of the step profiles.
> 2. It looks to me that the video on the right (bottom) is definitely off-sync with the real stream.
Yes, the video in the yarpview is very laggy, though the measured framerate is stable at 10 FPS. We will investigate it. It might be caused by OpenPose, which takes most of the video memory.
Today we ran the demo again, and successfully uploaded the json file to the online platform:
The column "Event name" is hardcoded in the python script, we shall change it to something more generic and user-specific.
A little bugfix PR was opened after the tests: https://github.com/robotology/assistive-rehab/pull/330
> It looks to me that the video on the right (bottom) is definitely off-sync with the real stream.
With @elandini84, I noticed that it was caused by dragging the window from one screen to the other. Most likely an Ubuntu or Qt issue with high-refresh-rate screens, definitely not yarp.
This video is another demo in which we reduced the distance between the robot and the subject to 1.8 m, to handle the small space of the arena. It should be clearer, and we could show something like this at the next monthly meeting.
Nice! The demo runs much more smoothly 🎉
The step detection performs better when @elandini84 goes back to the chair: we can clearly see the steps and the gait is fluid. Instead, when he reaches the finish line in the first part, his gait turns out to be a bit wobbly because R1 is moving backward slowly. As a result, the recognized steps do not look as nice as in the second stint.
Probably, this can be solved by tuning the human-robot distance slightly more and/or increasing the robot speed.
Better to close this longstanding issue, now that we have a quite solid set of components, in favor of narrower tasks.
cc @mfussi66
This issue tracks the status of the tests of the TUG demo on the R1SN003 robot.

cc @mfussi66 @pattacini @elandini84 @ste93 @lornat75