Closed: sea-bass closed this 1 year ago
Hi @sea-bass
Thanks for the detailed review, I appreciate it. It helped me a lot to go a step further, and I am pleased with the result.
The paper is almost entirely rewritten; the main changes are:
Regarding the real-time claim:
It is absolutely true that if real-time performance had been the only goal, a Python implementation would not make sense. The goal of the library is to provide efficient tools for evaluating these metrics in an easy-to-use package. However, looking at the efficiency of the implemented algorithms, even though they are written in Python, they are in many cases good enough for interactive applications.
Musculoskeletal models in particular are usually restricted to offline use, due to their inherent dimensionality issues (large number of muscles). The ICHM algorithm implemented in this package is, however, able to evaluate their polytopes within 200-300ms, where other methods might have taken hours. So it is much closer to real-time than before, but still far from the sub-millisecond performance you would probably expect from "real-time" claims.
Just as an example, we have used this package (both for robots and humans) for real-time robot control in our lab, running the code at frequencies of 50 to 100Hz, which is perfectly sufficient for many use-cases.
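For illustration, running a polytope evaluation inside a fixed-rate control loop of this kind could be sketched as below. This is only a sketch: `evaluate_polytope` is a hypothetical placeholder for the library's actual polytope computation, not its real API.

```python
import time

RATE_HZ = 50            # lower end of the 50-100Hz range mentioned above
PERIOD = 1.0 / RATE_HZ  # seconds per control cycle

def evaluate_polytope():
    # Hypothetical placeholder for a polytope evaluation step;
    # not the library's actual API.
    sum(i * i for i in range(1000))

start = time.monotonic()
next_tick = start
for _ in range(10):  # a handful of cycles for demonstration
    evaluate_polytope()
    next_tick += PERIOD
    sleep_for = next_tick - time.monotonic()
    if sleep_for > 0:
        time.sleep(sleep_for)  # hold the loop at the target rate
elapsed = time.monotonic() - start
```

Pacing against absolute tick times (rather than sleeping a fixed amount after each iteration) keeps the average rate at the target even when individual evaluations vary in duration.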
Finally, the area of using task-space physical ability metrics (especially polytopes) in real-time applications is very small at the moment, mostly due to their computational complexity, so having the real-time keyword in the title lets the paper reach further and show the community that these metrics are not as far from real-time as they might think. :D
@askuric I have the same concern as @sea-bass on the real-time claim. I do agree with your rationale, but maybe think about replacing "Real-time" with "Fast" or "Efficient" instead.
Thanks @askuric -- took a look at your reworked paper and it looks awesome! It is a much clearer and gentler introduction for readers.
My only "real" comment is that it's probably risky for the paper to provide direct links to the performance benchmarking scripts, since these might change in the future. I'd just say "provided in the repository" and add instructions in the repo itself for reproducing the benchmarking for both the robot and musculoskeletal models.
Finally, some small editorial comments as I read through. Nothing major:

- `robotic-toolbox` -- I think they call the module `roboticstoolbox`. Either use this, or the full name "Robotics Toolbox for Python".
- `opensim` --> "biomechanics software `opensim`"
- `cddlib` and the citation, same with line 69 in `pygradientpolytope` and its citation.

Hey @sea-bass,
Thanks a lot for your updates, the paper is updated with your comments. I absolutely agree about the script links; they are not the smartest idea in the long run.
The paper is regenerated in the main issue: https://github.com/openjournals/joss-reviews/issues/5670
Let me know if there is anything else you'd like me to do.
LGTM -- thanks for making all these updates!
Hi @ShravanTata, sorry to rush you, but did you maybe have time to look into the changes? I'd be happy to hear your feedback. Thanks!
Thank you for submitting this cool toolbox! Here is my feedback on the paper.

General Feedback

- Rather than just `~5ms`, I (and readers) would be mostly interested in something like `5ms +/- 1ms`, as well as a brief description of how this data was collected (how many samples, etc.) -- see e.g., https://jhavl.github.io/mmc/ for an example.

Summary
Statement of Need

Physical Capacity Metrics

- `biorbd`
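On the timing-report point above (mean +/- standard deviation over a known number of samples), a minimal sketch of how such data could be collected and reported is shown below. `evaluate_polytope` is a hypothetical stand-in for the computation being benchmarked, not the library's actual API.

```python
import statistics
import timeit

def evaluate_polytope():
    # Hypothetical stand-in for the polytope computation being benchmarked.
    sum(i * i for i in range(1000))

# timeit.repeat returns one wall-clock time per repetition, so the mean and
# standard deviation can be reported alongside the sample count.
samples = timeit.repeat(evaluate_polytope, number=1, repeat=100)
mean_ms = statistics.mean(samples) * 1e3
std_ms = statistics.stdev(samples) * 1e3
print(f"{mean_ms:.3f}ms +/- {std_ms:.3f}ms (n={len(samples)})")
```

Reporting the sample count alongside the spread makes the benchmark reproducible and lets readers judge how stable the quoted timings are.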