umyelab / LabGym

Quantify user-defined behaviors.
GNU General Public License v3.0

Some questions about analysis #20

Closed. Tianyulied closed this issue 9 months ago.

Tianyulied commented 1 year ago

During the analysis (categorizing behaviors), my Spyder console reported an error: "W tensorflow/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 15335424000 exceeds 10% of free system memory.". Can you explain why this error is reported? My device properties: Processor: 12th Gen Intel(R) Core(TM) i5-12600K 3.70 GHz, installed RAM: 128 GB (128 GB usable).

Also, the analysis is very slow: it takes about half an hour to analyze a 1-minute video. How can I make it faster? Thank you!!!

yujiahu415 commented 1 year ago

Hi Tianyulied,

To better understand the cause of this issue, I need more information from you.

  1. There are several steps in analyzing a video: 'extracting background', 'estimating animal size', 'acquiring information in each frame', 'categorizing behaviors', 'quantifying behaviors', and so on. Can you let me know how long each step took? You can simply screenshot your cmd prompt / terminal after an analysis, since the time for each step is documented there.

  2. What is the fps of your video(s), and what is the frame size of the video(s) to analyze? What is the input frame / image size of the Categorizer, and what is its complexity level?

  3. The TensorFlow error message might be because the batch size for categorizing behaviors is too big. This can be easily fixed, but first I need the answers to the questions above to see whether the slow processing occurred at this step.

Thanks!

Tianyulied commented 1 year ago

Hi yujiahu415, thanks for your reply!

  1. Here are the timings for each step:

```
The user interface initialized!
Processing video... 2023-04-13 09:10:26.421280
The original video framesize: 720 X 480
Video fps: 29.97002997002997
Folder created: G:\labgym test\1min background\1min result\result\1min
Extracting the static background...
Background extraction completed!
Estimating the animal size... 2023-04-13 09:10:30.569379
Estimation completed!
Single animal size: 18551.31256952169
Video processing completed!
Acquiring information in each frame... 2023-04-13 09:10:31.988390
1000 frames analyzed... 2023-04-13 09:17:08.516705
Information acquisition completed!
Crafting data... 2023-04-13 09:24:02.974554
Completed!
Categorizing animal behaviors... 2023-04-13 09:24:02.974554
2023-04-13 09:24:05.348265: W tensorflow/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 15335424000 exceeds 10% of free system memory.
2023-04-13 09:24:06.814114: W tensorflow/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 15335424000 exceeds 10% of free system memory.
```

  2. The video parameters: length 65 seconds, frame width 720, frame height 480, frame rate 30 frames/second, file type avi.
  3. The input frame / image size of the Categorizer and its complexity level:

| classnames | dim_tconv | dim_conv | channel | time_step | network | level_tconv | level_conv | inner_code | std | background_free |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| approach | 32 | 32 | 1 | 120 | 2 | 2 | 2 | 0 | 150 | 1 |
| climb | 32 | 32 | 1 | 120 | 2 | 2 | 2 | 0 | 150 | 1 |
| locomotion | 32 | 32 | 1 | 120 | 2 | 2 | 2 | 0 | 150 | 1 |
| nosepoke | 32 | 32 | 1 | 120 | 2 | 2 | 2 | 0 | 150 | 1 |
| stay | 32 | 32 | 1 | 120 | 2 | 2 | 2 | 0 | 150 | 1 |
| uncertain | 32 | 32 | 1 | 120 | 2 | 2 | 2 | 0 | 150 | 1 |

Thank you very much!!

yujiahu415 commented 1 year ago

Thank you for the information! The issue was very likely due to the big batch size used for categorizing behaviors. The duration of the animations for your behaviors is large (120 frames), so the memory consumption was huge. In LabGym v1.8 I changed the batch size in order to trade memory for processing speed, but it seems this strategy did not work well. Anyway, I have just released a new version, v1.8.1, which reverses that change. Can you try it and let me know whether the new version fixes this issue? Thanks! You can use 'pip install --upgrade LabGym' or 'python3 -m pip install --upgrade LabGym' to get the latest version.
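
For readers hitting the same warning, here is a rough sketch of the memory arithmetic and the usual Keras-level knob, assuming the Categorizer consumes float32 animation clips shaped like the config above (120 x 32 x 32 x 1); the model-loading line and the clip count are illustrative only, not LabGym's actual code:

```python
# Rough memory arithmetic for float32 animation clips of shape (time_step, dim, dim, channel).
frames, height, width, channels = 120, 32, 32, 1
bytes_per_clip = frames * height * width * channels * 4        # 491,520 bytes (~0.49 MB) per clip
clips_in_allocation = 15_335_424_000 // bytes_per_clip         # the 15.3 GB warning corresponds to ~31,200 clips at once
print(bytes_per_clip, clips_in_allocation)

# With a Keras-style model, a smaller prediction batch trades speed for lower peak memory:
# import tensorflow as tf
# categorizer = tf.keras.models.load_model('my_categorizer')    # hypothetical path
# scores = categorizer.predict(animation_batch, batch_size=32)  # Keras default is 32; larger values need more RAM
```

Reverting to the smaller batch in v1.8.1 has the same effect inside LabGym itself, so no manual change should be needed after upgrading.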

yujiahu415 commented 1 year ago

By the way, if you want to increase the processing speed (this is independent of the TensorFlow error), you can try the following:

  1. Further downsize the frames. I guess the animal occupies a relatively big area in the field of view, because the single animal size is 18551 in a 720 X 480 frame (the total area of a frame is 345600). And the input size of the Categorizer is 32, which means the animal blob will be resized to 32 X 32 (1024 pixels, about 1/18 of 18551) before being fed into the Categorizer. So there is plenty of room for downsizing the frames. I suggest you first try downsizing the frames by half, to 360 X 240; this will significantly increase the processing speed without reducing the categorization accuracy (see the sketch after this list for one way to pre-downsize a video).
  2. If possible, reduce the duration of the animations. Currently it is 120 frames, which can make the 'acquiring information' and 'categorizing behaviors' steps slow. But this is up to you, because you define the behaviors.
  3. You may try training a Categorizer with the Pattern Recognizer only. This will significantly increase the processing speed.
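
If you prefer to downsize the videos yourself before loading them into LabGym, here is a minimal sketch using OpenCV; the function name and file paths are placeholders, and the codec choice is an assumption:

```python
# Hypothetical pre-processing helper (not part of LabGym): write a half-size
# copy of a video with OpenCV, e.g. 720 x 480 -> 360 x 240, then analyze the copy.
import cv2

def downsize_video(src_path, dst_path, scale=0.5):
    cap = cv2.VideoCapture(src_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH) * scale)
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT) * scale)
    fourcc = cv2.VideoWriter_fourcc(*'MJPG')   # codec is an assumption; pick one your system supports
    writer = cv2.VideoWriter(dst_path, fourcc, fps, (width, height))
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.resize(frame, (width, height), interpolation=cv2.INTER_AREA))
    cap.release()
    writer.release()

# Example (paths are placeholders):
# downsize_video(r'C:\videos\original.avi', r'C:\videos\original_small.avi')
```
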
Tianyulied commented 1 year ago

Thanks for your advice, and great job!!!! I have now analyzed my video, and the software reports a satisfactory result. But the console reports this error:

```
2023-04-15 13:18:35.936582
Video annotation completed!
Exporting the raster plot for this analysis batch...
2023-04-15 13:22:30.219829
The raster plot stored in: G:\labgym test\m8-20221229-social\result
Traceback (most recent call last):
  File "D:\Anaconda\envs\labgym\lib\site-packages\LabGym\gui_analyzers.py", line 693, in analyze_behaviors
    all_summary=pd.concat(all_summary,ignore_index=True)
  File "D:\Anaconda\envs\labgym\lib\site-packages\pandas\util\_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "D:\Anaconda\envs\labgym\lib\site-packages\pandas\core\reshape\concat.py", line 347, in concat
    op = _Concatenator(
  File "D:\Anaconda\envs\labgym\lib\site-packages\pandas\core\reshape\concat.py", line 404, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate
```

Can you tell me why this error happened? And does it have any influence on my result data?

yujiahu415 commented 1 year ago

Thank you for the feedback! As for the new issue, it might be because you didn't select any behavior parameter for analysis, so it couldn't generate the outputs for those parameters. This issue will not affect your analysis. But I have fixed it now. Please upgrade LabGym again to v1.8.2 using 'pip install --upgrade LabGym' or 'python3 -m pip install --upgrade LabGym', and let me know if you still encounter this error message. Thanks again!
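
For anyone curious about the error itself: pandas raises exactly this ValueError when asked to concatenate an empty collection, which matches an empty summary list when no behavior parameters were selected. A minimal, stand-alone sketch (the variable names are illustrative, not LabGym's actual code):

```python
import pandas as pd

all_summary = []   # stays empty when no behavior parameters were selected for export

try:
    pd.concat(all_summary, ignore_index=True)
except ValueError as err:
    print(err)     # prints: No objects to concatenate

# The usual guard is to skip the export when there is nothing to combine:
if all_summary:
    combined = pd.concat(all_summary, ignore_index=True)
    combined.to_csv('all_summary.csv', index=False)   # illustrative output filename
```
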