YttriLab / A-SOID

An active learning platform for expert-guided, data-efficient discovery of behavior.

Question about multi animal experiments with n > 2 #77

Closed sommerc closed 7 months ago

sommerc commented 8 months ago

Dear A-SOID devs,

I am searching for a way to analyze differences in social behavior in mice. A-SOiD looks really promising! However, our experiments typically involve 4 mice...

From the structure of the A-Soid annotation file, I assume that currently only frame-wise annotations are supported. Is this correct?

Cheers, Chris

JensBlack commented 8 months ago

Hi Chris,

thanks for reaching out. Yes, the algorithm and pipeline are built to predict frame-wise (exclusive) events as you said.

We haven't tested more than 2 animals/humans yet, but the algorithm should be able to pick up group behaviors (e.g. of swarms) just fine. I assume that you are interested in a solution that can predict sub-group or even individual behaviors at the same time. This is not what A-SOiD is built for; however, since this solution is quite fast and label-efficient, it might be worth trying a workaround:

A-SOiD extracts movement- and position-related features from pose estimation; given multiple animals, it will always calculate the inter-animal features as well.

But this can be circumvented by excluding all animals that are not of interest.

a) If you are interested in individual behavior, you could split your individual animals into separate pose files with their corresponding behavior labels. Train the algorithm with all animals (as separate files) to get a generalized classifier for individual behaviors. In the later prediction you can split all animals again - i.e., like a human-made bounding box, you focus your classifier on one animal at a time.

b) If you are interested in sub-group behaviors, identify the animals that are actively participating in that behavior (e.g., two monkeys grooming each other) and do the same as above. This might be trickier in the final prediction, but you could do a pair-wise pose file for all combinations (given my example) and then classify each separately.

c) Given enough samples, the algorithm should also be able to ignore features (of other animals behaving), which would allow you to train multiple classifiers with the same pose files but separate behavior labels. Here the biggest challenge will be that several combinations of features (of different animals or pairs of animals) will result in the same label, so the algorithm has a lot of work to do - but this is the beauty of our approach: it is very data-efficient.
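The per-animal and pair-wise splitting described in a) and b) can be sketched with a few lines of pandas. This is a minimal illustration, not A-SOiD's own API: the column naming scheme (`<animal>_<bodypart>_<axis>`) and the tiny in-memory table are hypothetical stand-ins for a flattened pose-estimation export.

```python
import pandas as pd
from itertools import combinations

# Hypothetical flat pose table, one column block per animal/keypoint,
# roughly as it might look after exporting multi-animal tracks to CSV.
pose = pd.DataFrame({
    "mouse1_nose_x": [0.1, 0.2], "mouse1_nose_y": [0.3, 0.4],
    "mouse2_nose_x": [1.1, 1.2], "mouse2_nose_y": [1.3, 1.4],
    "mouse3_nose_x": [2.1, 2.2], "mouse3_nose_y": [2.3, 2.4],
    "mouse4_nose_x": [3.1, 3.2], "mouse4_nose_y": [3.3, 3.4],
})
animals = ["mouse1", "mouse2", "mouse3", "mouse4"]

def columns_for(animal_subset):
    """Keep only the pose columns belonging to the given animals."""
    return [c for c in pose.columns if c.split("_")[0] in animal_subset]

# a) one pose file per individual animal (4 files for n = 4)
singles = {a: pose[columns_for({a})] for a in animals}

# b) one pose file per animal pair (6 combinations for n = 4)
pairs = {p: pose[columns_for(set(p))] for p in combinations(animals, 2)}

print(len(singles), len(pairs))  # 4 6
```

Each sub-table could then be written out (e.g. via `to_csv`) together with the behavior labels for that animal or pair.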

Let me know if this has helped, and if you try those solutions I'd be happy to hear about your results.

best, Jens

sommerc commented 8 months ago

Hi Jens,

thanks for your quick reply and your suggestions, very much appreciated!

We are mostly interested in the following pair-wise behaviors:

Option b) might be the way to go for us, since these behaviors are all pair-wise. However, generating the training data for scenarios a) and b) would probably require some novel, track-aware annotation tool. Do you happen to know of something I could build upon? For pose estimation we are using SLEAP.

> A-SOiD extracts movement and position related features from pose estimation, given multiple animals it will always calculate the inter-animal features as well.

This is done pair-wise, I assume, so the features grow quadratically with n?

Cheers, Chris

JensBlack commented 8 months ago

Yes, it's done pair-wise.

> However, generating the training data for scenarios a) and b) would probably require some novel, track-aware annotation tool.

Can't you specify the subject in BORIS and then export separated by subject? I haven't used this feature yet, so take my advice with a grain of salt.

If you find another annotation tool, let me know and we can integrate their output into A-SOiD.