sgoldenlab / simba

SimBA (Simple Behavioral Analysis), a pipeline and GUI for developing supervised behavioral classifiers
https://simba-uw-tf-dev.readthedocs.io/
GNU General Public License v3.0
273 stars 137 forks

Using simba with 4 mice, 8 bodyparts #257

Open Sere-98 opened 1 year ago

Sere-98 commented 1 year ago

Hi, not really an issue but more of a question. I am planning to use SimBA to identify behaviors (e.g., chases) in groups of 4 mice. For each mouse, 8 body-parts have been labeled (following the labeling scheme used in SimBA). However, since SimBA is optimized for 2 animals, I guess the features that will be extracted and used for the model (with 4 mice) are much more limited compared to the 492 features available for 2 mice. I was wondering how many features will be used if I use my videos and tracking files with 4 mice, and whether it would be better to do pairwise comparisons instead, across all possible pairs of mice, so that all features can be used.

sronilsson commented 1 year ago

Hey @Sere-98 ! Yes, when using a user-defined pose-config, SimBA will by default calculate (i) the distances between each animal's body-parts and the other animals' body-parts, (ii) the aggregated distance moved in rolling windows, (iii) aggregated distances between animals in rolling windows, and (iv) some counts of pose-probability scores in different buckets. It's not a lot in terms of breadth, but it is kept limited because the feature count otherwise blows up when someone comes along with tons of animals / body-parts. I'm not exactly sure how many features you'll end up with, but maybe 500-600? The default features will likely get you chasing, BUT the tricky part with this use-case is typically directionality: you probably want to know who is chasing whom, and with 4 individuals you have 12 ordered chasing permutations, so it can be too much of a chore to create 12 classifiers.
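To make the scale of this concrete, here is a minimal sketch (not SimBA's actual implementation; array shapes and the restriction to family (i) are assumptions) of how the inter-animal body-part distances alone grow with 4 mice and 8 body-parts, and where the 12 ordered chasing permutations come from:

```python
import itertools

import numpy as np

# Stand-in pose data: 100 frames x 4 animals x 8 body-parts x (x, y).
# Shapes and random values are illustrative assumptions only.
n_frames, n_animals, n_bodyparts = 100, 4, 8
rng = np.random.default_rng(0)
pose = rng.random((n_frames, n_animals, n_bodyparts, 2))


def inter_animal_distances(pose):
    """Per-frame distances between every body-part pair across animal pairs."""
    feats = []
    for a, b in itertools.combinations(range(pose.shape[1]), 2):
        # broadcast to all (body-part of a) x (body-part of b) differences
        diff = pose[:, a, :, None, :] - pose[:, b, None, :, :]
        feats.append(np.linalg.norm(diff, axis=-1).reshape(pose.shape[0], -1))
    return np.hstack(feats)


dists = inter_animal_distances(pose)
# 6 unordered animal pairs x 8 x 8 body-part pairs = 384 columns per frame
print(dists.shape)

# ordered (chaser, chased) pairs among 4 mice: 4 x 3 = 12
print(len(list(itertools.permutations(range(4), 2))))
```

So family (i) alone already contributes 384 columns before any rolling-window or probability features are added.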

Sere-98 commented 1 year ago

Hi, thank you for your reply! I'm not sure I understood the problem with directionality. You're saying that would be a problem if I use my user-defined configuration with 4 mice, right? So a solution would be to train on just 2 mice, and then pair the videos with cropped CSV files containing just 2 mice at a time, which would allow me to detect all chase events with just one classifier. At that point, is there a way to extract the information on which mouse is the chaser and which is the chased? For example, I train the classifier to recognize chases using both instances in which mouse1 is chasing mouse2 and instances in which mouse2 is chasing mouse1. When I analyze my new videos, can I know, in the instances labeled as chases, whether mouse1 or mouse2 is the chaser? Or would I need to create 2 different classifiers for that?
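The pairwise-cropping idea above could be sketched roughly as follows; the column-naming convention (`Mouse1_Nose_x`, ...) is an assumption about the tracking export, not a fixed SimBA format:

```python
import itertools

import pandas as pd

# Hypothetical 4-mouse tracking table: each mouse contributes
# <mouse>_<bodypart>_<x|y> columns. Body-part list shortened for brevity.
bodyparts = ["Nose", "Tail_base"]  # you would list all 8
mice = ["Mouse1", "Mouse2", "Mouse3", "Mouse4"]
cols = [f"{m}_{bp}_{ax}" for m in mice for bp in bodyparts for ax in ("x", "y")]
df = pd.DataFrame([[0.0] * len(cols)], columns=cols)  # stand-in for real data

# One cropped table per unordered pair of mice, keeping only that pair's
# columns; each could then be saved as its own CSV for a 2-animal project.
pairs = {}
for a, b in itertools.combinations(mice, 2):
    pair_cols = [c for c in cols if c.startswith((f"{a}_", f"{b}_"))]
    pairs[(a, b)] = df[pair_cols]

print(len(pairs))  # 6 possible pairs of mice
```

With all 8 body-parts, each of the 6 cropped tables would carry the full 2-animal column set, so the standard 2-animal feature extraction could run unchanged on each pair.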

sronilsson commented 1 year ago

Yes, the potential problem is not so much the user-defined configuration; it's more that the behavior (chase) has a direction in general: if you annotate frames as containing "chasing", the classifier will find chasing. If you annotate frames as containing "animal 1 chases animal 2", it will find "animal 1 chases animal 2". So you would need two classifiers for two mice doing the chasing. As a solution I wrote a method to "reverse" classifiers, so people would only have to annotate one direction and then reverse the classifier. However, last I heard, a couple of users encountered bugs with these methods in the GUI, and I have not had time to maintain them. But just as an FYI, it's doable in case you want to type something up yourself.
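The core of the "reverse" idea could be sketched as a column swap: take the data annotated in one direction, exchange the two animals' columns, and reuse the annotations for the opposite direction. This is a minimal illustration with assumed column names, not the method that shipped in the SimBA GUI:

```python
import pandas as pd

# Toy frame annotated as "Mouse1 chases Mouse2"; column names are
# illustrative assumptions.
df = pd.DataFrame({
    "Mouse1_Nose_x": [1.0], "Mouse1_Nose_y": [2.0],
    "Mouse2_Nose_x": [3.0], "Mouse2_Nose_y": [4.0],
    "Mouse1_chases_Mouse2": [1],
})

# Swap Mouse1 <-> Mouse2 in every column name (via a temporary token so
# the two renames don't collide).
swap = {
    c: c.replace("Mouse1", "TMP").replace("Mouse2", "Mouse1").replace("TMP", "Mouse2")
    for c in df.columns
}
reversed_df = df.rename(columns=swap)

# The same annotated frame now describes the opposite direction.
print(reversed_df.columns.tolist())
```

Concatenating the original and reversed tables would give training data for both directions from annotations of only one, which is the labor-saving point of the approach.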

Sere-98 commented 1 year ago

I see, thank you for the clarification and the quick reply! I will try looking into the "reverse classifier".