keenon / AddBiomechanics

A tool to automatically process and share biomechanics data
https://addbiomechanics.org/

Dynamics fitting skipped on trials with GRFs #306

Open nickbianco opened 1 month ago

nickbianco commented 1 month ago

Currently, in the dev deployment, dynamics fitting is being skipped on many trials uploaded with ground reaction forces.

A few example subjects from the Uhlrich et al. (2022) OpenCap dataset:

Each subject has multiple trials (all with uploaded GRFs), and more than half of them have "Not run" under "Force status". I'm not seeing a way to access logs, so I'm not sure how or why AddBiomechanics is deciding to skip dynamics fitting in these trials. The equivalent versions of these subjects on prod do not have as many trials skipped (e.g., subject10).

Note that these trials were uploaded using the CLI, and I'm not sure if that adds in any complicating factors. I'm currently running subject10 with hand-uploaded data here to see if I get the same results.

nickbianco commented 1 month ago

I've been running into this same issue with the Rajagopal et al. (2015) data while re-adding Moco support. The new GRF thresholder is much more aggressive in regions where only one foot is on a force plate, resulting in many more time points being marked as "extendedToNearestPeakForce". Then, if a trial has fewer than 50 "good" GRF frames, dynamics fitting will be skipped.
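To make that rule concrete, here's a minimal sketch of the skip check as described above. The 50-frame threshold and the "extendedToNearestPeakForce" label come from this thread; the function name and the `"notMissingGRF"` label for unflagged frames are my assumptions, not the actual AddBiomechanics code.

```python
# Minimal sketch of the skip heuristic described in this thread; the
# 50-frame threshold is from the comment above, everything else
# (function name, the "notMissingGRF" label) is assumed for illustration.

MIN_GOOD_GRF_FRAMES = 50

def should_skip_dynamics(missing_grf_reasons: list[str]) -> bool:
    """Take one reason label per frame; a frame only counts as "good"
    if the GRF thresholder left it unflagged."""
    good_frames = sum(1 for reason in missing_grf_reasons
                      if reason == "notMissingGRF")
    return good_frames < MIN_GOOD_GRF_FRAMES
```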

keenon commented 4 weeks ago

Yeah, this is the unfortunate tradeoff until we get a really good automatic metric. In order to get high recall (in this case, "100% recall" being defined as removing all frames that really are missing GRFs from the dynamics fit, which is necessary to get accurate solves), we have to accept lower precision (in this case, "100% precision" would mean that we didn't accidentally remove any good frames). The shorter trials like those in Uhlrich, which really do have a lot of steps off of plates in their overground data, are particularly hard to automatically label. Our current automated labeler, when evaluated against all 3M+ manually annotated frames from the AddB dataset paper, gets 99.6% recall and 81.5% precision. That's pretty good for a set of human-understandable heuristics, but it's definitely just a placeholder for an eventual neural model.
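To make those definitions concrete, here's a quick sketch of how per-frame recall and precision could be computed against the manual annotations, where a "positive" is a frame flagged as missing GRF. This is illustrative only, not the evaluation code from the AddB dataset paper.

```python
# Per-frame recall/precision sketch: "positive" = frame flagged as
# missing GRF, ground truth = manual annotation. Illustrative only.

def frame_recall_precision(predicted_missing: list[bool],
                           truly_missing: list[bool]) -> tuple[float, float]:
    tp = sum(p and t for p, t in zip(predicted_missing, truly_missing))
    fn = sum(not p and t for p, t in zip(predicted_missing, truly_missing))
    fp = sum(p and not t for p, t in zip(predicted_missing, truly_missing))
    recall = tp / (tp + fn) if tp + fn else 1.0     # did we catch every bad frame?
    precision = tp / (tp + fp) if tp + fp else 1.0  # did we avoid flagging good ones?
    return recall, precision
```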

I think in this case, until we have a much more accurate ML model for detecting missing GRF frames, we just have to update the UI to tell users that, if they're unhappy with the results of the automated heuristics (high recall, lower precision), they can manually flag the frames with GRFs and re-run.

keenon commented 4 weeks ago

I realized that the current iteration of the code doesn't respect the manual labels, so I've changed that here:

https://github.com/keenon/AddBiomechanics/pull/311
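For anyone following along, the precedence the fix aims for is roughly the following sketch: a user-provided label on a frame overrides the automatic thresholder, and the heuristic only fills in frames the user didn't label. The names here are hypothetical; see the PR for the actual change.

```python
# Sketch of the intended precedence: manual labels win, the automatic
# thresholder fills in the rest. `manual_has_grf` uses None to mean
# "user didn't label this frame". All names here are hypothetical.

from typing import Optional

def resolve_missing_grf(auto_missing: list[bool],
                        manual_has_grf: list[Optional[bool]]) -> list[bool]:
    return [auto if manual is None else not manual
            for auto, manual in zip(auto_missing, manual_has_grf)]
```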

Then we've still gotta figure out how to make the UI a bit friendlier for labeling the present/absent GRF frames.

nickbianco commented 4 weeks ago

Thanks for the clarification, @keenon. I think it's a very reasonable tradeoff; we should just make sure users understand which trials are being filtered out with these changes. Something on the FAQ page on the website, paired with an announcement, would probably be sufficient. Updates to the UI with warnings like "hey, you might have noticed that some trials with forces are not being processed, here's why..." would be good too.