sollan / alma

Fully automated (rodent) limb motion analysis toolbox for behavioral analysis with bodypart coordinate data, building upon markerless pose estimation.
GNU General Public License v3.0

Question #32

Closed cwlee909 closed 1 year ago

cwlee909 commented 1 year ago

Dear developer,

Hi! I want to use the spontaneous walking function. I am wondering how we can label both the left and right sides at the same time, since with our setup we can only view one side at a time while the mice walk. I would also like to know your body part definitions. For example, how do you define the crest, knee, and hip? Is there any reference?

Sincerely, Gary C. Lee

sollan commented 1 year ago

Hi! Depending on your set-up, you have two options. You can label the left and right bodyparts separately (and train the DLC model to predict them), e.g. "toeL" / "toeR"; with this option the app will identify the direction of walking on its own. Alternatively, you can have the mice always walk from one side to the other and label the bodyparts as is, e.g. "toe"; there's a setting in configs.yaml to specify the walking direction for this one-directional case.
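Just to illustrate the first option (this is not ALMA's actual implementation, only a sketch of the idea), the direction of walking can be inferred from the net horizontal displacement of a tracked bodypart; here in Python, using a hypothetical "toe" x-coordinate series from the DLC output:

```python
import numpy as np

def walking_direction(toe_x: np.ndarray) -> str:
    # Net displacement over the clip: positive means the mouse
    # moved towards increasing x (e.g. left-to-right in the frame).
    return "right" if toe_x[-1] - toe_x[0] > 0 else "left"

# Example: a toe drifting from x=100 px to x=400 px across the clip
print(walking_direction(np.linspace(100, 400, 200)))  # "right"
```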

For the body part definitions, I don't have a reference at hand (the manual labelling was done by colleagues who are experts in mouse gait kinematics, and body part labelling is required even without an automated analysis pipeline, after all). You may find references in the literature on mouse gait kinematics.

Hope this helps!

cwlee909 commented 1 year ago

Dear Developer,

First of all, thanks for your quick reply! I have some further questions.

Do we need to choose continuous walking videos, or should we exclude stop-and-go videos for the spontaneous walking model (for example, by cutting out the stop-and-go clips, or by manually excluding stop-and-go videos from training)? Based on the article, we should choose continuous walking clips; however, we apply a treatment and need to observe the gait within a short time window, which makes it hard to get a truly continuous walking clip, especially at the start and end of the gait in each video. It would take a lot of time for us to clip out all the stop-and-go parts of every single video. Will this affect how DLC or ALMA works?

Sincerely, Gary C. Lee

sollan commented 1 year ago

Hello again! If I understand correctly, you mean the mouse often stops during spontaneous walking. In that case you don't have to crop the videos down to only the clips that contain walking (in fact, cutting out the "stopping" parts might lead to inaccurate results, since the analysis could then treat the stop-and-go as continuous step cycles). Cropping out useless parts (such as when the animal is not visible) can make the computation more efficient though, for both DLC and ALMA.

For DLC model training (separate from our app, which is intended for the analysis of DLC output), you simply feed in videos (which needn't be cropped either), and DLC will select relevant frames via k-means clustering for you to label.
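For reference, a minimal sketch of that standard DeepLabCut workflow (the project name, video paths, and label names here are placeholders, not from ALMA):

```python
import deeplabcut

# Create a project; uncropped videos are fine.
config_path = deeplabcut.create_new_project(
    "spontaneous-walking", "experimenter",
    ["/data/videos/mouse1.mp4"],
    copy_videos=False,
)

# Edit the project's config.yaml to list your bodyparts, e.g.
# side-specific labels such as "toeL" / "toeR" (option 1 above).

# DLC selects representative frames via k-means clustering,
# so stop-and-go videos don't need to be pre-trimmed for labelling.
deeplabcut.extract_frames(config_path, mode="automatic", algo="kmeans", userfeedback=False)
deeplabcut.label_frames(config_path)

deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)

# The resulting coordinate CSVs are what ALMA takes as input.
deeplabcut.analyze_videos(config_path, ["/data/videos/mouse1.mp4"], save_as_csv=True)
```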

One problem that might arise with videos that include a lot of stop-and-go is that certain parameters, such as step cycle duration, anything related to the stance phase, and DTW of continuous strides, would be inaccurate, since "no walking" periods could be mistaken for part of a step cycle. Step cycles with such abnormal parameters might then be considered outliers in the analysis and filtered out of the results. If you can get enough data, considering only the continuous step cycles (after you get the analysis results) would be a good idea.
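If it helps, a rough post-hoc filter along those lines, assuming the per-step-cycle parameters have been exported to a CSV (the file and column names here are hypothetical; check your actual output headers):

```python
import pandas as pd

df = pd.read_csv("gait_parameters.csv")  # hypothetical filename

# Treat step cycles with outlier durations (e.g. inflated by a "stop"
# merged into a stride) as non-continuous and drop them, using a
# simple interquartile-range rule.
q1, q3 = df["cycle duration (s)"].quantile([0.25, 0.75])
upper = q3 + 1.5 * (q3 - q1)
continuous = df[df["cycle duration (s)"] <= upper]
continuous.to_csv("gait_parameters_filtered.csv", index=False)
```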

cwlee909 commented 1 year ago

Dear Developer,

Thanks for taking the time to reply to me again! I really appreciate that you understood my question (I am not proficient in English. xdd)

I can fully see how this wonderful open-source tool will help my research!

Again, thanks for your generosity and help!

Sincerely, Gary C. Lee