robotology / assistive-rehab

Assistive and Rehabilitative Robotics
https://robotology.github.io/assistive-rehab/doc/mkdocs/site
BSD 3-Clause "New" or "Revised" License

Setup and test the TUG demo on R1SN003 #306

Closed vvasco closed 1 year ago

vvasco commented 2 years ago

This issue tracks the status of the tests of the TUG demo on the R1SN003 robot.

cc @mfussi66 @pattacini @elandini84 @ste93 @lornat75

vvasco commented 2 years ago

Here are the instructions to run the demo using docker. The current development is on the feat/docker branch. Once merged, I will add the instructions to the official website.

To run the demo:

Note: the connections made by the compose take some time to come up. They can be checked in the yarpmanager by refreshing the application.

Current issues:

vvasco commented 2 years ago

Yesterday we tried to set up the demo on the robot with @mfussi66, @elandini84 and @ste93.

We had some issues detecting the start line using a camera resolution of 424x240px. The estimation in the 2D image was always wrong (the line and the robot were placed in approximately the same position as in the tests done in July, i.e. the line entirely visible, including the white part, and the robot's neck pitch at ~22 degrees):

When using the 640x480px resolution, the estimation was instead very good:

With this resolution I had many issues in the past with yarpOpenPose being too slow, so I would avoid it.

I tried again in the late afternoon with 424x240px and the line estimation was good, using the maximum neck pitch (~28 degrees) and placing the robot farther from the line than usual.

Maybe this was due to the combination of the camera / robot position and the changed light conditions? Maybe the color of the floor affects the detection? I don't get what changed compared to the tests done in July, though.

I did some more tests using the standalone realsense (D415) at a resolution of 424x240px, placing the camera in different positions and changing the line position to vary the light conditions. The results were always good; here are some examples:

We will do another round of tests on Monday morning on the robot and verify this again. Maybe we just need to move the robot neck further down than usual.

pattacini commented 2 years ago

I've created the label https://github.com/robotology/assistive-rehab/labels/%F0%9F%8E%93%20prj-etapas for the tasks aimed at the project.

lornat75 commented 2 years ago

About using 640x480 and the interaction with openpose: maybe you can downsize the images again when feeding them to openpose?

vvasco commented 2 years ago

About using 640x480 and the interaction with openpose: maybe you can downsize the images again when feeding them to openpose?

I think this could be a valid option, @ste93 also proposed the same. I would give a second chance to the lower resolution anyway and if this does not work go with this alternative.

mfussi66 commented 2 years ago

Following a suggestion from @ste93, we are now confident that the wifi button shows delay because the network it connects to does not have internet access; therefore the robot must be connected to the internet to lower the delay to 1-3 s (5 s max). This is most likely caused by the manufacturer's software.

vvasco commented 2 years ago

With @mfussi66 @ste93 @elandini84 we ran a few tests over the last few days. This is the list of issues we encountered and a description of how we solved or bypassed them:

We finally managed to run the demo successfully several times. This is a video showing the functionality (very bad quality, but it's just to show the current status):

https://user-images.githubusercontent.com/9716288/190361504-ad5eb2d3-b0ce-4b89-8dd1-85b805d2dd43.mp4

Note: @mfussi66 raised his hand from the beginning to have the robot focus on him rather than on iCub3 (which is also detected as a skeleton).

I'll open the PRs for fix/etapas-demo and feat/docker and update the instructions on the website.

Open points:

pattacini commented 2 years ago

Nice progress 👍🏻

A few comments below.

Sometimes we still had issues when detecting the lines. Relaunching lineDetector resulted in a good estimate without changing the relative line / robot position.

I think this still represents a problem to solve. If it somehow depends on the camera resolution (to be clarified), then we should definitely consider increasing the resolution and/or decide which camera to pick (remember that we need to account for the correct transformation if we go with the top-head camera). In this respect, this further comment holds.

Also, the choice of camera affects the quality of the data we can estimate, which is relevant to the experiment. I believe this will be tested in the upcoming days.

As per https://github.com/robotology/assistive-rehab/issues/306#issuecomment-1246971409, we can use the wifi button with an acceptable latency only if the robot is connected to the internet. We connected it through an ethernet cable.

I was told that the latency may still reach 3 seconds in some conditions, which is kind of slow.

Connecting the robot to the internet through an ethernet cable is not ideal when the robot navigates. @ste93 told us that they used to have a dedicated mobile phone connected to the IIT wifi, with the robot connected in tethering through usb-c. The phone was placed on the robot with a phone armband.

This could be a good alternative indeed.

pattacini commented 2 years ago

We also need to address the problem with the wheels' grip material: https://github.com/icub-tech-iit/tickets/issues/2355 (⚠️ private link).

mfussi66 commented 2 years ago

We also need to address the problem with the wheels' grip material: icub-tech-iit/tickets#2355 (⚠️ private link).

We arrived yesterday in the robot arena and the grip was glued back on (not sure who did it); it seemed quite sturdy and reliable.

pattacini commented 2 years ago

We arrived yesterday in the robot arena and the grip was glued back on (not sure who did it); it seemed quite sturdy and reliable.

Maybe @fbiggi?

Speaking with @maggia80, it came out that we could think of redoing the wheels with a proper gripping footprint. I don't know how long it would take, though.

vvasco commented 2 years ago

Sometimes we still had issues when detecting the lines. Relaunching lineDetector resulted in a good estimate without changing the relative line / robot position.

I think this still represents a problem to solve. If it somehow depends on the camera resolution (to be clarified), then we should definitely consider increasing the resolution and/or decide which camera to pick (remember that we need to account for the correct transformation if we go with the top-head camera). In this respect, this further comment holds.

I agree this has to be fixed. It happened with both cameras (the one on top of the head and the one inside the robot's head) at low resolution. It's very weird to me that lineDetector was not able to estimate the line and that, after relaunching the module, the estimate was good, given that the position of the robot and the line did not change and the light conditions were the same. Probably we can go for this option then.

I was told that the latency may still reach 3 seconds in some conditions, which is kind of slow.

Yes, sometimes this also happened, and it used to happen in the past as well. I think the latency gets worse when the battery charge starts decreasing. Before running the tests we could make sure that the button is completely charged.

pattacini commented 2 years ago

It's very weird to me that lineDetector was not able to estimate the line and that, after relaunching the module, the estimate was good, given that the position of the robot and the line did not change and the light conditions were the same.

Some "memory effect"? Meaning that throughout operations we run code branches that we don't visit at startup, modifying somehow the logic of the check, slightly but decisively sometimes? If so, it's not even said that increasing the resolution will be resolutive. Just speculating.

Yes, sometimes this also happened, and it used to happen in the past as well.

👍🏻

I think the latency gets worse when the battery charge starts decreasing. Before running the tests we could make sure that the button is completely charged.

Ok, so we could monitor this behavior by trying to do the experiments always with a good charge level. We can also buy a second button, just in case. What do you think?

randaz81 commented 2 years ago

they used to have a dedicated mobile phone connected to the IIT wifi, with the robot connected in tethering through usb-c. The phone was placed on the robot with a phone armband.

It's possible, of course, but do you have issues using R1's standard router on a desk? Here is the connection diagram:

ROBOT ---- wifi ------ router ------ internet
                          |---------- laptop1 (no wifi)
                          |---------- laptop2 (no wifi)
                         etc....
vvasco commented 1 year ago

It's very weird to me that lineDetector was not able to estimate the line and that, after relaunching the module, the estimate was good, given that the position of the robot and the line did not change and the light conditions were the same.

Some "memory effect"? Meaning that throughout operations we run code branches that we don't visit at startup, modifying somehow the logic of the check, slightly but decisively sometimes? If so, it's not even said that increasing the resolution will be resolutive. Just speculating.

I need to investigate this. So far I have never seen the same issue at higher resolution, but it's also true that I ran fewer tests.

Yes, sometimes this also happened, and it used to happen in the past as well.

👍🏻

I think the latency gets worse when the battery charge starts decreasing. Before running the tests we could make sure that the button is completely charged.

Ok, so we could monitor this behavior by trying to do the experiments always with a good charge level. We can also buy a second button, just in case. What do you think?

Definitely. I have another mystrom button (I bought it a long time ago for testing). We can configure it on the robot wifi and alternate it with the other one.

they used to have a dedicated mobile phone connected to the IIT wifi, with the robot connected in tethering through usb-c. The phone was placed on the robot with a phone armband.

It's possible, of course, but do you have issues using R1's standard router on a desk? Here is the connection diagram:

ROBOT ---- wifi ------ router ------ internet
                         |---------- laptop1 (no wifi)
                         |---------- laptop2 (no wifi)
                        etc....

thanks @randaz81, that would be another possibility

pattacini commented 1 year ago

I have another mystrom button (I bought it a long time ago for testing). We can configure it on the robot wifi and alternate it with the other one.

Cool! Let's do it 👍🏻

lornat75 commented 1 year ago

About the microphones: as you probably know, on the 5gtours project we used an external mic mounted on the head of R1. It worked ok in our experiments, provided people spoke with the correct timing and within the microphone's "field of view". We have also been testing other OEM microphones for better integration, but I think this may still need some time.

lornat75 commented 1 year ago

nice progress though!

pattacini commented 1 year ago

Hi @mfussi66

At the end of the day, could you make a summary of the current status?

mfussi66 commented 1 year ago

Today in the morning we tackled the motionAnalyzer: we started off by tweaking the setup of the Yarp modules to save and stabilize the bandwidth as much as possible, to make sure that the skeleton data would be reliably received. However, the step length analysis is still not reliable. We will check if changing the skeletonRetriever parameters can help, but we are having doubts about the reliability of the step length computation.

pattacini commented 1 year ago

However, the step length analysis is still not reliable. We will check if changing the skeletonRetriever parameters can help, but we are having doubts about the reliability of the step length computation.

A couple of related resources:

pattacini commented 1 year ago

It may be convenient to record a data stream to recreate the skeleton on the same data offline and analyze what's happening in detail, with approaches similar to those in #178 and #279.

The effect of tuning the skeletonRetriever's parameters could be judged offline too.

vvasco commented 1 year ago

For the record, these are the parameters we have used in simulation for the paper:

skeletonRetriever --camera::fov "(54 42)" --camera::remote /SIM_CER_ROBOT/depthCamera --depth::kernel-size 3 --depth::iterations 1 --depth::min-distance 0.5 --depth::max-distance 10.0 --filter-keypoint-order 5.0

pattacini commented 1 year ago

These params are for simulation; don't we have analogous ones for the real TUG experiments that we conducted in the kitchen?

Be careful, as filter-keypoint-order is int32, so we'd better specify --filter-keypoint-order 5 to avoid running into "conversion hell".

vvasco commented 1 year ago

I mentioned the parameters used in simulation because we benchmarked the metrics in that scenario. For the real TUG experiments in the kitchen I used these values, but that was with a different camera, so we might need some fine-tuning.

mfussi66 commented 1 year ago

Here is a dataset dump that can be used to replay the yarpOpenPose and depth data, so that offline processing can be performed:

https://github.com/robotology/assistive-rehab-storage/tree/openpose (Edited link)

I created a branch (https://github.com/robotology/assistive-rehab/tree/feat/replayer) in which I added a docker-compose-replay that launches only the components needed for post-processing, mainly the yarpdataplayer and the motionAnalyzer.

Instructions for posterity on how to use docker-compose-replay:

  1. open the application AssistiveRehab-replay in the yarpmanager
  2. launch the compose with docker-compose -f docker-compose-replay.yml up -> the compose will launch a container with a yarprun --server /replayer
  3. in the yarpmanager, launch skeletonRetriever
  4. load the dataset in the yarpdataplayer
  5. make all the connections in the yarpmanager
  6. press play on the yarpdataplayer
  7. open a terminal with yarp rpc /motionAnalyzer/cmd and launch the commands (a scripted alternative is sketched after this list):
    1. loadExercise tug
    2. selectMetric step_0
    3. selectMetricProp step_length
    4. selectSkel
    5. start false
    6. then stop at the end
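
For convenience, the same rpc sequence can also be scripted instead of typed by hand. Below is a minimal C++ sketch using YARP's RpcClient; the local port name /replay-script/rpc is made up for the example, the commands simply mirror the list above, and their exact arguments depend on motionAnalyzer's rpc interface:

```cpp
#include <cstdio>
#include <string>
#include <yarp/os/Bottle.h>
#include <yarp/os/Network.h>
#include <yarp/os/RpcClient.h>

int main()
{
    yarp::os::Network yarp;                 // initialize the YARP network
    yarp::os::RpcClient rpc;
    rpc.open("/replay-script/rpc");         // local port name made up for this example
    yarp::os::Network::connect("/replay-script/rpc", "/motionAnalyzer/cmd");

    // send one text command (same syntax as typed in yarp rpc) and print the reply
    auto send = [&rpc](const std::string& text) {
        yarp::os::Bottle cmd, reply;
        cmd.fromString(text);
        rpc.write(cmd, reply);
        std::printf("%s -> %s\n", text.c_str(), reply.toString().c_str());
    };

    send("loadExercise tug");
    send("selectMetric step_0");
    send("selectMetricProp step_length");
    send("selectSkel");
    send("start false");
    // ... let the yarpdataplayer play the dataset, then:
    send("stop");

    rpc.close();
    return 0;
}
```
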
pattacini commented 1 year ago

Nice work @mfussi66 👍🏻 My only point is that it'd be much better to post the dataset on https://github.com/robotology/assistive-rehab-storage (LFS enabled).

mfussi66 commented 1 year ago

To recap the past days:

  1. On Friday, I inspected the skeletonRetriever skeleton data structure that is output towards the skeletonViewer, and noticed that the computed ankle distance is compatible with the expected value while taking a known step. E.g., if the step I took was 50cm, the skeletonRetriever would return ankle poses resulting in a ~44cm step. Aside from the ~10% discrepancy, the results were consistent when taking other steps.

  2. Therefore, today @ste93 and I took a look at the motionAnalyzer. We did not really want to touch the code, but we could not really understand why some specific operations of the step length estimation were performed. So @ste93 simplified the approach and added a little bit of filtering to the FindPeaks function (actually there is already a prefilter that we could retune), and now the step length waveform follows the person much better 👍 I'll show a video of it ASAP (a rough sketch of this kind of smoothing and peak picking is given below).

However, what needs to be ironed out now is the error wrt the ground truth, which increases as the step length increases. This happens in the skeletonRetriever too, but playing around with the parameters might be a solution.
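
Not the actual motionAnalyzer code, just a minimal sketch of the kind of smoothing plus peak picking mentioned above; the window size and minimum step length are made-up values:

```cpp
#include <algorithm>
#include <vector>

// Smooth the inter-ankle distance signal with a moving average, then keep the
// local maxima above a minimum height: each surviving peak is one detected step,
// and the signal value at the peak is the estimated step length.
std::vector<size_t> findStepPeaks(const std::vector<double>& ankleDist,
                                  size_t halfWindow = 5, double minStep = 0.2)
{
    std::vector<double> smooth(ankleDist.size(), 0.0);
    for (size_t i = 0; i < ankleDist.size(); i++) {
        size_t lo = (i >= halfWindow) ? i - halfWindow : 0;
        size_t hi = std::min(ankleDist.size() - 1, i + halfWindow);
        double sum = 0.0;
        for (size_t j = lo; j <= hi; j++) sum += ankleDist[j];
        smooth[i] = sum / double(hi - lo + 1);
    }

    std::vector<size_t> peaks;
    for (size_t i = 1; i + 1 < smooth.size(); i++) {
        if (smooth[i] > smooth[i - 1] && smooth[i] >= smooth[i + 1] && smooth[i] > minStep)
            peaks.push_back(i);
    }
    return peaks;
}
```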

mfussi66 commented 1 year ago

An update regarding the past days:

Although very simple scenarios can be considered successful (single steps, perfectly placed in front of the robot), the evaluated metric gets less reliable when the person walks normally, putting one foot in front of the other. We did some tests and I extracted the positions of the ankles. The test was the following:

  1. 30cm steps moving from right to left (wrt the robot)
  2. 30cm steps moving from left to right
  3. repeat the two above with 60cm steps

This is the resulting skeleton, as seen by the skeletonViewer. The grid SHOULD be spaced 0.1x0.1m, but I need to verify that in a static scenario. The interesting part of the video starts at 00:40.

https://user-images.githubusercontent.com/38140169/196751918-b647f7d6-6fd0-42e7-bba4-f9acc838842c.mp4

By reading the position of the ankles, I get the following plots.

[plots of the ankle positions]

Some comments:

mfussi66 commented 1 year ago

This is the resulting skeleton, as seen by the skeletonViewer. The grid SHOULD be spaced 0.1x0.1m, but I need to verify that in a static scenario.

I just verified it as a sanity check: what we see in the skeletonViewer are subjects in a space expressed in meters. If I place myself statically 3m from the robot, I can see in the virtual world that I am 3 squares away from the robot. Therefore the grid is 1mx1m, and we can use a smaller resolution, like 0.1mx0.1m, to further check the step distances in the skeletonViewer.

pattacini commented 1 year ago

Since there might be a nonlinear mapping between the realsense and the real world, it could help to do this sanity check at different increasing distances too.

If there's a need for a look-up table to compensate for the nonlinearities, @randaz81 suggests that we could use the LIDAR to do the remapping automatically online.
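
Just to illustrate what such a compensation could look like; the calibration points in the example are made-up numbers, not measured values:

```cpp
#include <algorithm>
#include <vector>

// Piecewise-linear lookup table mapping the depth reported by the camera to a
// corrected depth, e.g. calibrated against LIDAR readings at known distances.
struct DepthLUT {
    std::vector<double> measured;   // depth reported by the realsense [m], ascending
    std::vector<double> reference;  // corresponding depth from the LIDAR [m]

    double correct(double d) const {
        if (d <= measured.front()) return reference.front();
        if (d >= measured.back())  return reference.back();
        auto it = std::upper_bound(measured.begin(), measured.end(), d);
        size_t i = size_t(it - measured.begin());
        double t = (d - measured[i - 1]) / (measured[i] - measured[i - 1]);
        return reference[i - 1] + t * (reference[i] - reference[i - 1]);
    }
};

// Example with fictional calibration points:
// DepthLUT lut{{1.0, 2.0, 3.0, 4.0}, {1.0, 2.05, 3.15, 4.30}};
// double corrected = lut.correct(2.5);   // linear interpolation between 2 m and 3 m
```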

mfussi66 commented 1 year ago

Extensive tests in the past week highlighted the necessity of using the Realsense D450 mounted on top of the robot's head, since the one inside has a range of only 3 meters. The main culprit of the inexact metrics was indeed the lower-range camera.

Additionally, it was useful to project the evaluated metric (for example the step length) onto the skeleton planes: sagittal, coronal, and transverse. This is now possible, although we need to be careful about the orientation of the person wrt the robot.
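
A rough sketch of what projecting onto the body planes means here; the axes are built from hypothetical hip keypoints and a vertical direction, and this is not the library's actual skeleton representation:

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
static double dot(const Vec3& a, const Vec3& b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
static Vec3 unit(const Vec3& v) {
    double n = std::sqrt(dot(v, v));
    return {v[0]/n, v[1]/n, v[2]/n};
}

// Build the mediolateral (coronal) axis from the hips and the anteroposterior
// (sagittal) axis as up x lateral, then split the ankle-to-ankle vector:
// its sagittal component is the step length, its coronal component the step width.
void stepComponents(const Vec3& ankleL, const Vec3& ankleR,
                    const Vec3& hipL, const Vec3& hipR, const Vec3& up,
                    double& stepLength, double& stepWidth)
{
    Vec3 lateral = unit(sub(hipL, hipR));
    Vec3 forward = unit({up[1]*lateral[2] - up[2]*lateral[1],
                         up[2]*lateral[0] - up[0]*lateral[2],
                         up[0]*lateral[1] - up[1]*lateral[0]});
    Vec3 d = sub(ankleL, ankleR);
    stepLength = std::fabs(dot(d, forward));   // along the walking direction
    stepWidth  = std::fabs(dot(d, lateral));   // along the mediolateral direction
}
```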

See the video below. It seems that the more the person faces the robot, the more reliable the metrics are. This should be consistent with openpose, since a person standing sideways has half of the body occluded wrt the camera.

In the scope red is the step length and green is the step width:

https://user-images.githubusercontent.com/38140169/200041417-85a52a56-ce42-4ede-9c45-23e426abe932.mp4

As you can see, when walking towards the robot the metrics are accurate.

I think for the TUG demo we could keep the scenario of the person walking towards the robot, so that the whole pipeline tests can resume.

mfussi66 commented 1 year ago

The latest trials revealed that it could be more robust to measure the subject's step distance as the norm of the Cartesian distance between the ankles. In this way, we could mitigate the metric's sensitivity wrt the subject's orientation. See the video here:

https://user-images.githubusercontent.com/38140169/204008489-c919c66a-26e6-4cb2-87b9-8d51cc3e6d70.mp4

And the related PR: https://github.com/robotology/assistive-rehab/pull/323
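
A minimal sketch of the orientation-independent measure described above (plain Euclidean distance between the two 3D ankle keypoints; the types are hypothetical, not the library's skeleton classes):

```cpp
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

// Step distance as the norm of the Cartesian distance between the ankles,
// which does not depend on the subject's orientation wrt the robot.
double stepDistance(const Vec3& ankleL, const Vec3& ankleR)
{
    double dx = ankleL[0] - ankleR[0];
    double dy = ankleL[1] - ankleR[1];
    double dz = ankleL[2] - ankleR[2];
    return std::sqrt(dx*dx + dy*dy + dz*dz);
}
```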

Next, I will see if it's possible to change the metric by sending commands to the motionAnalyzer with selectMetricProp.

Additionally, yesterday we identified the cause of the horizontal-bands issue plaguing the Realsense depth data: it was the selected 424x240 resolution, which the device nonetheless accepted. We resorted to using 640x480, which was perfectly fine.

mfussi66 commented 1 year ago

On Thursday we launched the demo successfully after implementing the many fixes in the PRs:

https://user-images.githubusercontent.com/38140169/205602167-4b4269af-621f-450e-b725-b9db5c0b009e.mp4

Though we will check again whether the event collection properly records all the agreed events.

pattacini commented 1 year ago

Nice!

It seems though that when @ste93 turned around to go back to the seat, the system somehow got paused.

Also, not sure about the following points:

  1. Is the graph on the left showing the number of steps correctly?
  2. It looks to me that the video on the right (bottom) is definitely off-sync with the real stream.

Did you notice that too?

lornat75 commented 1 year ago

looks great! I also had issues understanding the plot...

mfussi66 commented 1 year ago

Nice!

It seems though that when @ste93 turned around to go back to the seat, the system somehow got paused.

Also, not sure about the following points:

1. Is the graph on the left showing the number of steps correctly?

It shows the step length, which might need a little more filtering to better highlight the step profiles.

2. It looks to me that the video on the right (bottom) is definitely off-sync with the real stream.

Yes, the video in the yarpview is very laggy, though the measured framerate is stable at 10 FPS. We will investigate it. It might be caused by OpenPose, which takes up most of the video memory.

mfussi66 commented 1 year ago

Today we ran the demo again, and successfully uploaded the json file to the online platform:

[screenshot of the uploaded events on the online platform]

The column "Event name" is hardcoded in the python script; we should change it to something more generic and user-specific.

A little bugfix PR was opened after the tests: https://github.com/robotology/assistive-rehab/pull/330

  1. It looks to me that the video on the right (bottom) is definitely off-sync with the real stream.

With @elandini84 I noticed that it was caused by dragging the window from one screen to the other. It is most likely an Ubuntu or Qt issue with high-refresh-rate screens, definitely not a yarp one.

mfussi66 commented 1 year ago

This video shows another demo in which we reduced the distance between the robot and the subject to 1.8 m, to handle the small space of the arena. It should be clearer, and we could show something like this at the next monthly meeting.

https://user-images.githubusercontent.com/38140169/208148026-49e5b011-fff8-42d7-a6de-43b3b21fa8f3.mp4

pattacini commented 1 year ago

Nice! The demo runs much more smoothly 🎉

The step detection performs better when @elandini84 goes back to the chair: we can clearly see the steps and the gait is fluid. Instead, when he reaches the finish line in the first part, his gait turns out to be a bit wobbly because R1 is moving backward slowly. As a result, the recognized steps do not look as nice as in the second stint.

Probably, this can be solved by tuning the human-robot distance slightly more and/or increasing the robot speed.

pattacini commented 1 year ago

Better to close this longstanding issue now that we have a fairly solid set of components, in favor of narrower tasks.

cc @mfussi66