umyelab / LabGym

Quantify user-defined behaviors.
GNU General Public License v3.0

NoneType error when trying to generate behavior examples #125

Closed: apenemo closed this issue 3 months ago

apenemo commented 3 months ago

Hi, I am currently trying to generate behavior examples from the same video with different parameters to compare the results.

Here is what I have tried:

1) "interactive advanced" mode, social distance: 1, duration: 25 frames, interval: 15 frames, not include background, not include body parts ==> Worked perfectly fine

2) "interactive advanced" mode, social distance: 1, duration: 25 frames, interval: 15 frames, not include background, include body parts, STD==50 ==> led me to the following error, multiple times, with different videos. I have only picked videos with a single individual for now; I am wondering if that is what is actually causing the problem?

Here is the error:

```
[03/21 16:11:06 d2.checkpoint.detection_checkpoint]: [DetectionCheckpointer] Loading from C:\Users\admin-labgym\anaconda3\envs\detectron_env\Lib\site-packages\LabGym\detectors\Big dataset 32\model_final.pth ...
Video fps: 24
The original video framesize: 1080 X 1920
The resized video framesize: 540 X 960
Preparation completed!
Generating behavior examples...
2024-03-21 16:11:06.569531
Traceback (most recent call last):
  File "C:\Users\admin-labgym\anaconda3\envs\detectron_env\lib\site-packages\LabGym\gui\training\behavior_examples.py", line 947, in generate_data
    AAD.generate_data_interact_advance(
  File "C:\Users\admin-labgym\anaconda3\envs\detectron_env\lib\site-packages\LabGym\analyzebehaviorsdetector.py", line 3839, in generate_data_interact_advance
    pattern_image = generate_patternimage_interact(
  File "C:\Users\admin-labgym\anaconda3\envs\detectron_env\lib\site-packages\LabGym\tools.py", line 927, in generate_patternimage_interact
    other_inner = functools.reduce(operator.iconcat, other_inners[n], [])
TypeError: 'NoneType' object is not iterable
```
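For context, the crash happens while flattening the contour lists of the "other" animals. Below is a minimal sketch, assuming `other_inners[n]` ends up as `None` when only one animal is detected; the `flatten_contours` helper and its guard are hypothetical illustrations, not LabGym's actual code or fix.

```python
# Minimal sketch of the failure, assuming other_inners[n] is None when only
# one animal is detected. The guard below is a hypothetical workaround.
import functools
import operator

def flatten_contours(inners):
    # inners is expected to be a list of contour lists for the "other" animals;
    # with a single detected animal there may be nothing to flatten.
    if inners is None:
        return []  # return an empty list instead of raising TypeError
    return functools.reduce(operator.iconcat, inners, [])

# Reproduces the reported error when passed None:
# functools.reduce(operator.iconcat, None, [])  ->  TypeError: 'NoneType' object is not iterable
print(flatten_contours(None))           # []
print(flatten_contours([[1, 2], [3]]))  # [1, 2, 3]
```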

yujiahu415 commented 3 months ago

This might be a bug. I guess the video that had this issue contains only one detected animal, while the behavior mode is "interactive advanced". Is that the case?

apenemo commented 3 months ago

Yes exactly.

Should I only use "interactive advanced" mode when there are multiple individuals in the video? Which leads to my next question: can I train a Categorizer with different types of behavior examples? For example:

Foraging is a non-interactive behavior; grooming is interactive advanced.

Can I train a Categorizer with behavior examples generated for foraging and for grooming, even though they do not belong to the same "kind" of interactive behavior?

Many thanks

yujiahu415 commented 3 months ago

Yes, currently the Categorizer cannot be used with a mix of different behavior modes. For example, the "interactive advanced" mode cannot be used on videos of single animals. Sorry, I hadn't thought about the possibility of training a Categorizer that can be used for both "interactive advanced" and "non-interactive" modes. This could be a feature to implement in the future, but for now, probably just exclude the videos of single animals.

If you want to train a Categorizer in "interactive advanced" mode for foraging and social grooming, you can select videos that involve multiple individuals in which foraging happens, and use those to generate the foraging behavior examples. Otherwise, you can train two different Categorizers: one in "non-interactive" mode for non-social behaviors like foraging, and the other in "interactive advanced" mode for social behaviors like social grooming.

apenemo commented 3 months ago

Hi Yujia

I cannot exclude the videos with only one individual, since they make up a major part of my dataset for one of my studied behaviors. And I would like to have a Categorizer able to detect multiple behaviors.

Nevertheless, these settings: "interactive advanced" mode, social distance: 1, duration: 25 frames, interval: 15 frames, not include background, not include body parts, worked as expected, and the generated behavior examples seemed fine to me.

I'll generate behavior examples from videos with multiple individuals and send some to you to get your opinion.

If both look good, I think I will stick to these parameters for generating behavior examples from now on.

Best,