umyelab / LabGym

Quantify user-defined behaviors.
GNU General Public License v3.0

analyzing single behavior #144

Closed: vzimmern closed this issue 5 months ago

vzimmern commented 5 months ago

Good evening,

I'm trying to train a Categorizer for only a single behavior. In other words, I want the final annotated video to identify only this single behavior if it is present, and nothing else. I don't need every mouse behavior to be identified correctly, just the unique behavior (myoclonus, i.e., a twitch or jerk).

I was able to go through all the steps until the analysis module, where the following error happens:

[Screenshot from 2024-04-18 17-39-28 showing the error message]

Any help would be appreciated.

yujiahu415 commented 5 months ago

You need at least 2 categories of behaviors to train a Categorizer and analyze behaviors. All non-target behaviors can be sorted into a 'background' behavior category. And I'm curious: it seems you had already started to analyze the behaviors. Did you train a Categorizer with only one behavior category? Training a Categorizer with only one behavior category is supposed to output an error message. Did you not see that error message?
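
If it helps, here's a minimal sketch (not LabGym's actual API) of this kind of pre-training check. It assumes your behavior examples are sorted into one subfolder per category; the path and folder names are hypothetical:

```python
# Minimal sketch (not LabGym's actual API): check that sorted behavior
# examples span at least two categories before training a Categorizer.
# Assumed layout: one subfolder per behavior category, e.g.
# examples/myoclonus/ and examples/background/ (names are hypothetical).
from pathlib import Path

examples_root = Path("examples")  # hypothetical path to sorted examples
categories = sorted(p.name for p in examples_root.iterdir() if p.is_dir())

if len(categories) < 2:
    raise ValueError(
        f"Only {len(categories)} behavior category found ({categories}); "
        "a Categorizer needs at least 2, e.g. the target behavior plus 'background'."
    )
print("Categories to train on:", categories)
```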

vzimmern commented 5 months ago

Yes, I did get an error, but I ignored it and went straight to the analysis stage, which explains the error message. I followed your advice and created a 'myoclonus' and a 'background' category. This is a superior way to go for me, so thanks for the advice.

Now I have a question about the Detector. I used Roboflow on ~300 representative images of my mice, with good contour tracing, to make a Detector, but when applying it at the analysis stage, I'm getting a lot of frames in which the mouse is not entirely contour-traced. See examples below. Any recommendations on how to improve the contouring of the mouse? Should I add more images to Roboflow? Does any of this have to do with the STD? I used an STD of 0.

[Screenshots from 2024-04-19 (15-03-58, 15-05-07, 15-05-31, 15-05-48) showing frames with incomplete mouse contours]

Thanks SO MUCH for your help.

yujiahu415 commented 5 months ago

This indicates the Detector was not trained well, and it's not related to the STD value. STD only relates to behavior categorization accuracy. 300 images should be sufficient to train a good Detector in your case, so I think the training might have been performed inappropriately. Can you let me know the settings of the Detector training? And would it be possible to share your Roboflow dataset with me so I can take a look and provide suggestions?

vzimmern commented 5 months ago

I'm not sure how best to share the Detector training, but here's the weblink: https://app.roboflow.com/labgymannotation/cstb-ko-bedding/3

I can get you the details for the detector on Monday (tomorrow).

vzimmern commented 5 months ago

The Detector inferencing framesize is the default (480), and the iteration number is the default (200).

yujiahu415 commented 5 months ago

Hi,

There are many settings in your Roboflow that can be changed to improve the training:

  1. Assign all the images as "training". You don't need "testing" or "validation" splits when training a Detector.
  2. In many of the annotated images, when the tail and the body are separated, you annotated them as two "mouse" instances. This will confuse the Detector. There is only one mouse in each image, so don't annotate the tail as a separate "mouse"; instead, annotate just the body and make sure that every image contains only ONE annotated "mouse" (see the sanity-check sketch after this list).
  3. When you turn the annotated images into a dataset, Roboflow recommends preprocessing the images, such as resizing them to 640 × 640. This can change the resolution and the width × height ratio of your images, which will harm the training. So SKIP ALL of the preprocessing steps in Roboflow.
  4. Roboflow provides augmentation of your images, which can increase the amount and diversity of your training images. So you NEED to add some augmentation methods. For example, if the mice can be in different locations in the frames, include "flipping" or "rotation"; if the illumination may vary across videos, include "brightness" or "exposure".
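
As referenced in point 2, here is a small, hedged sanity check for the one-mouse-per-image rule. It assumes a COCO-format export (Roboflow can export COCO); the annotation file name is an assumption, so adjust it to your export:

```python
# Hedged sketch: scan a COCO-format export (Roboflow offers COCO export) and
# flag images that have more or fewer than one annotation. With a single
# "mouse" class, every image should contain exactly ONE annotated mouse.
# The file name below is an assumption; adjust it to your export.
import json
from collections import Counter

with open("_annotations.coco.json") as f:  # hypothetical export file name
    coco = json.load(f)

counts = Counter(ann["image_id"] for ann in coco["annotations"])

for img in coco["images"]:
    n = counts.get(img["id"], 0)
    if n != 1:
        print(f'{img["file_name"]}: {n} annotations (expected exactly 1 "mouse")')
```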

When training the Detector, the inferencing framesize can be set to the larger of the height and width of the videos you will analyze. For example, if the framesize of your videos to analyze is 640 × 480, set the inferencing framesize to 640. And set the iteration number to 5,000.
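
For example, a minimal sketch using OpenCV (an assumption; any video reader works) to pick the inferencing framesize from a video's dimensions; the video path is hypothetical:

```python
# Minimal sketch, assuming OpenCV; the video path is hypothetical. Pick the
# inferencing framesize as the larger of the video's width and height.
import cv2

cap = cv2.VideoCapture("videos/mouse_recording.mp4")  # hypothetical path
if not cap.isOpened():
    raise IOError("Could not open video")
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()

inferencing_framesize = max(width, height)  # e.g. 640 for a 640 x 480 video
print(f"Video is {width} x {height}; set inferencing framesize to {inferencing_framesize}")
```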

After making the above changes, the accuracy of the Detector should be significantly improved. If not, please let me know.

vzimmern commented 5 months ago

I implemented all these changes and the results are fantastic!! Thank you so much!!

yujiahu415 commented 5 months ago

Glad to hear that! Thanks for the update!