Hi there,
I currently have thousands of gait patient videos from which I would like to extract foot strike events and display them in sync with the video in Mokka. My idea is to use some kind of learning algorithm to detect these events in the video and then write them to a c3d file using btk (a rough sketch of what I have in mind is below).
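For reference, this is roughly how I imagine writing the detected events into a c3d file with btk's Python bindings. The frame rate, frame numbers, acquisition length, and file name are placeholders, and the detection step itself is assumed to happen elsewhere:

```python
import btk

video_fps = 25.0                  # assumed video frame rate (placeholder)
detected = [                      # (context, video frame) pairs from the detector
    ("Left", 120),
    ("Right", 158),
    ("Left", 197),
]

# Create an acquisition that only carries events (no markers/analog data).
acq = btk.btkAcquisition()
acq.Init(0, 300)                  # 0 points, 300 frames (placeholder length)
acq.SetPointFrequency(video_fps)

for context, frame in detected:
    ev = btk.btkEvent()
    ev.SetLabel("Foot Strike")
    ev.SetContext(context)        # "Left" or "Right"
    ev.SetFrame(frame)
    ev.SetTime(frame / video_fps)
    acq.AppendEvent(ev)

writer = btk.btkAcquisitionFileWriter()
writer.SetInput(acq)
writer.SetFilename("patient_events.c3d")
writer.Update()
```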
Do you have any good references to papers that propose using AI to detect these kinds of events in a static gait camera setup? The filmed environment is nearly always the same; only the patient changes. It would also be very useful if I could use this historical data to derive stride parameters (the time between successive foot strikes).
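For the stride parameters, I would then expect to read the events back and take the time differences between consecutive foot strikes on the same side, something along these lines (again assuming the placeholder file name from above):

```python
import btk

reader = btk.btkAcquisitionFileReader()
reader.SetFilename("patient_events.c3d")   # placeholder file name
reader.Update()
acq = reader.GetOutput()

# Collect foot strike times per side.
strikes = {"Left": [], "Right": []}
for i in range(acq.GetEventNumber()):
    ev = acq.GetEvent(i)
    if ev.GetLabel() == "Foot Strike":
        strikes[ev.GetContext()].append(ev.GetTime())

# Stride time = interval between successive strikes of the same foot.
for context, times in strikes.items():
    times.sort()
    stride_times = [t1 - t0 for t0, t1 in zip(times, times[1:])]
    print(context, stride_times)
```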