YttriLab / B-SOID

Behavioral segmentation of open field in DeepLabCut, or B-SOID ("B-side"), is a pipeline that pairs unsupervised pattern recognition with supervised classification to achieve fast predictions of behaviors that are not predefined by users.
GNU General Public License v3.0

Question about bodyparts with top-down view #18

Closed shinhs0506 closed 3 years ago

shinhs0506 commented 4 years ago

Hi, great software! Thank you for making it available.

I know a similar question has been asked before, but since things might have changed, I just wanted to make sure.

I have a top-down camera view setup, so the paws are invisible most of the time. But reading the docs:

```python
BODYPARTS = {
    'Snout/Head': 0,
    'Neck': None,
    'Forepaw/Shoulder1': 1,
    'Forepaw/Shoulder2': 2,
    'Bodycenter': None,
    'Hindpaw/Hip1': 3,
    'Hindpaw/Hip2': 4,
    'Tailbase': 5,
    'Tailroot': None
}
```

it seems the shoulders and hips can be used instead of the paws? If so, how well does B-SOID classify behaviors without them? The main behaviors I want to classify are locomotion, rearing, grooming, and stationary exploration.

runninghsus commented 3 years ago

Hi @shinhs0506

Apologies for the delayed response! Yes, shoulders and hips can be used. I've also had success with just the snout, body centroid, and tail-base. I see that you are using the bsoid_py version. Might I recommend the bsoid_app, where you can select the body parts you want?
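For reference, here is a sketch (not an official example) of what that reduced snout/centroid/tail-base configuration might look like in the `BODYPARTS` dictionary format from the bsoid_py docs. The index values are hypothetical and must match the column order of your own DeepLabCut output; parts set to `None` are simply left out.

```python
# Hypothetical top-down BODYPARTS mapping using only snout, body
# centroid, and tail-base. Indices must match your DeepLabCut
# column order; None means the part is not used.
BODYPARTS = {
    'Snout/Head': 0,
    'Neck': None,
    'Forepaw/Shoulder1': None,
    'Forepaw/Shoulder2': None,
    'Bodycenter': 1,
    'Hindpaw/Hip1': None,
    'Hindpaw/Hip2': None,
    'Tailbase': 2,
    'Tailroot': None,
}

# The parts actually used are those assigned an index:
used = [name for name, idx in BODYPARTS.items() if idx is not None]
print(used)  # ['Snout/Head', 'Bodycenter', 'Tailbase']
```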

In terms of specifics, rearing and grooming have been slightly harder to differentiate without the paws when recording top-down, but it is still possible. You might be able to think of movements of a few body parts that are differentiable between those behaviors. With the app, you can utilize those body parts.

Hope this answers your question. If it does, feel free to close this issue.