Introduction
This feature will allow R1 to perform and evaluate the Timed Up and Go (TUG) test, in which the patient has to get up from an armchair, walk for 3 meters (a distance marked by a line on the floor), walk back, and sit down again.
During the test, the robot will have to assess the patient in real time while navigating and interacting with them.
We expect this functionality to help clinicians obtain quantitative data that would otherwise be difficult to collect (the patient would have to be equipped with wearable sensors).
To develop this test, we foresee the need for the following features:
Perception
The robot has to recognize the finish line on the floor, which marks the end of the path. The line can be highlighted with a specific pattern (for example ArUco markers) or with colored tape.
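As an illustration, a minimal detection sketch for the ArUco variant, assuming the line ends are tagged with two markers of known IDs and using the OpenCV >= 4.7 aruco API (marker IDs and dictionary choice are placeholders):

```python
import cv2

FINISH_LINE_IDS = {7, 8}  # hypothetical IDs of the two markers taped at the ends of the line

def detect_finish_line(frame_bgr):
    """Return the pixel centers of the finish-line markers visible in a camera frame."""
    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    centers = []
    if ids is not None:
        for marker_corners, marker_id in zip(corners, ids.flatten()):
            if int(marker_id) in FINISH_LINE_IDS:
                centers.append(marker_corners.reshape(4, 2).mean(axis=0))  # marker center in pixels
    return centers  # empty list if the line is not in view
```

The colored-tape variant could be handled analogously with a simple color-segmentation step.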
The robot has to distinguish between the patient's lower limbs and any walking aid the patient may need, such as a walking stick or a walker. If skeleton detection is impaired by the walking aid, the robot should recognize this and take countermeasures, for example moving to a viewpoint from which the patient's skeleton is fully in sight.
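A minimal visibility check along these lines, assuming the pose estimator outputs a dictionary mapping joint names to (x, y, confidence) (joint names, thresholds and the repositioning policy are assumptions):

```python
LOWER_LIMB = ("left_hip", "right_hip", "left_knee", "right_knee", "left_ankle", "right_ankle")

def lower_limbs_visible(keypoints, min_confidence=0.5, min_visible=5):
    """Return False when too few lower-limb joints are confidently detected,
    e.g. because a walker occludes the legs, so the robot can decide to reposition."""
    visible = sum(1 for name in LOWER_LIMB
                  if name in keypoints and keypoints[name][2] >= min_confidence)
    return visible >= min_visible
```

For instance, if the check fails over roughly a second of consecutive frames, the robot could trigger a behavior that moves it to a viewpoint from which the legs are visible again.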
The robot should alert the therapist when the patient has fallen to the floor.
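As a rough illustration only (a deployed system would need a properly validated fall detector), a pelvis-height heuristic on the tracked skeleton, with all thresholds as placeholders:

```python
def patient_has_fallen(hip_height_m, history, window=10, fall_height_m=0.35):
    """Flag a fall when the estimated mid-hip height stays close to the floor
    for `window` consecutive frames; `history` is a rolling buffer owned by the caller."""
    history.append(hip_height_m)
    del history[:-window]  # keep only the last `window` samples
    return len(history) == window and max(history) < fall_height_m
```

When the flag is raised, the robot would notify the therapist, for example with a spoken message and a notification on the clinician's interface.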
Motion analysis
The robot has to compute the time taken to perform the test, as well as lower-limb metrics (such as range of motion, step length, and step width), in order to monitor the patient's performance during the test.
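A simplified sketch of the metric extraction, assuming ground-plane ankle trajectories (in meters, with x along the walking direction) are available from the skeleton tracker; real step segmentation (e.g. from heel strikes) would be needed for clinically meaningful values:

```python
import numpy as np

def gait_metrics(left_ankle_xy, right_ankle_xy, t_start, t_end):
    """left/right_ankle_xy: (N, 2) ankle trajectories on the ground plane (assumed input).
    Returns the TUG time plus crude spatial gait metrics as whole-trajectory statistics."""
    left, right = np.asarray(left_ankle_xy), np.asarray(right_ankle_xy)
    offsets = right - left                                  # left-to-right ankle offset per frame
    step_length = float(np.abs(offsets[:, 0]).max())        # proxy: max forward separation (x)
    step_width = float(np.median(np.abs(offsets[:, 1])))    # proxy: typical lateral separation (y)
    return {
        "tug_time_s": t_end - t_start,   # stopwatch from the "go" command to sitting back down
        "step_length_m": step_length,
        "step_width_m": step_width,
    }
```

The returned dictionary can then be serialized (e.g. to JSON) into the offline report mentioned in the next point.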
The extracted metrics should be stored in an offline report.
Ideally, the robot should also detect walking patterns that could lead to a fall and raise an alarm ahead of time.
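Purely as an illustration of the idea (real fall-risk prediction would require clinically validated features or a learned model), a variability-based check with placeholder thresholds:

```python
import numpy as np

def unstable_gait(step_widths_m, step_times_s, width_cv_max=0.35, time_cv_max=0.30):
    """Flag potentially fall-prone walking from high step-to-step variability,
    measured as the coefficient of variation of step width and step duration."""
    def cv(values):
        values = np.asarray(values, dtype=float)
        return float(values.std() / values.mean()) if len(values) >= 4 and values.mean() > 0 else 0.0

    return cv(step_widths_m) > width_cv_max or cv(step_times_s) > time_cv_max
```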
Verbal interaction
The robot has to explain the test to the patient and then give the start command. The robot needs to remain responsive to the patient at all times, since the patient might interrupt the explanation at an early stage and ask for it to be repeated.
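A minimal sketch of such an interruptible explanation, where speak_async, stop_speaking and listen stand in for R1's actual text-to-speech and speech-to-text interfaces (all of them hypothetical placeholders here):

```python
def explain_test(speak_async, stop_speaking, listen):
    """Deliver the explanation sentence by sentence so the patient can interrupt
    at any point and ask for a repetition before the start command is given."""
    script = [
        "Please sit on the armchair.",
        "When I say go, stand up, walk to the line on the floor, turn around, come back and sit down.",
        "Walk at your usual pace. Shall we start?",
    ]
    i = 0
    while i < len(script):
        speak_async(script[i])
        request = listen(timeout_s=5.0)   # hypothetical: returns None if the patient says nothing
        if request == "repeat":
            stop_speaking()
            i = 0                         # restart the explanation from the beginning
            continue
        i += 1
```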
The robot has to reply to potential questions, e.g.: "at which speed should I move?", "can I use the walking stick?", "how many times should I repeat the test?", "how good am I at doing this?".
The robot has to interact with the patient during the test, for example by saying "I'm following you from behind to observe you better".
The robot has to provide verbal feedback to the patient on the quality of their performance.
To trigger speech-to-text, we may consider a "hand raised up" gesture as the activation cue.
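A minimal geometric check for that cue on 2D skeleton keypoints (joint names and the pixel margin are assumptions; image y grows downwards):

```python
def hand_raised(keypoints, margin_px=20):
    """Return True when either wrist is clearly above the corresponding shoulder,
    which can be used as the cue to open the speech-to-text channel."""
    for wrist, shoulder in (("left_wrist", "left_shoulder"), ("right_wrist", "right_shoulder")):
        if wrist in keypoints and shoulder in keypoints:
            if keypoints[wrist][1] < keypoints[shoulder][1] - margin_px:
                return True
    return False
```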
Navigation
Before starting the test, during the explanation, the robot can move to the finish line on the floor and point at it.
The robot can walk next to the patient (or behind, if the patient uses a walker, so as to optimize skeleton visibility), along a straight path and at a safe distance.
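A sketch of the corresponding goal computation in a planar world frame, with the distance and lateral offsets as placeholders to be tuned for safety and comfort:

```python
import math

def follow_target(patient_xy, patient_heading_rad, side="left", distance_m=1.0, lateral_m=0.8):
    """Compute the robot's goal pose relative to the patient.
    side="left"/"right" keeps the robot next to the patient; side="behind" keeps it on the
    patient's path, e.g. when a walker would otherwise occlude the legs."""
    px, py = patient_xy
    fx, fy = math.cos(patient_heading_rad), math.sin(patient_heading_rad)  # patient's forward axis
    lx, ly = -fy, fx                                                       # patient's left axis
    if side == "behind":
        gx, gy = px - distance_m * fx, py - distance_m * fy
    else:
        sign = 1.0 if side == "left" else -1.0
        gx, gy = px + sign * lateral_m * lx, py + sign * lateral_m * ly
    return gx, gy, patient_heading_rad  # goal position plus desired robot heading
```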
Depending on the phase (e.g. explanation, monitoring) and/or on the patient type (e.g. needing walking aids), the robot needs to decide ahead of the test how it will move together with the patient.
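This choice can be summarized in a small decision table; the sketch below assumes just two phases and a boolean walking-aid flag, both of which are simplifications:

```python
def choose_motion_strategy(phase, uses_walking_aid):
    """Map the test phase and patient type to one of the motion behaviors above."""
    if phase == "explanation":
        return "stand_still_facing_patient"   # the robot stays put, possibly pointing at the line
    if phase == "monitoring":
        return "follow_behind" if uses_walking_aid else "follow_side"
    return "stand_still_facing_patient"       # conservative default
```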