Closed Tianyulied closed 9 months ago
Hi Tianyulied,
To better understand the cause of this issue, I need more information from you.
There are several steps in analyzing a video: 'extracting background', 'estimating animal size', 'acquiring information in each frame', 'categorizing behaviors', 'quantifying behaviors' .... Can you let me know how long each step took? You can simply screenshot your cmd prompt / terminal after an analysis, since the time for each step is documented there.
What is the fps of your video(s)? And what is the frame size of the video(s) you are analyzing? What is the input frame / image size of the Categorizer, and what is its complexity level?
The TensorFlow error message might be because the batch size for categorizing behaviors is too big. This can be easily fixed, but first I need the answers to the above questions to see whether the slow processing occurred in this step.
Thanks!
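For context on why the batch size matters here, below is a minimal NumPy sketch (the `categorize` function is a placeholder standing in for the Categorizer's forward pass, not LabGym's real API): calling it on all animations at once materializes one huge array, while processing in chunks bounds the peak working set.

```python
import numpy as np

def categorize(batch):
    # Placeholder for the Categorizer's forward pass (not the real model).
    return batch.mean(axis=(1, 2, 3, 4))

def categorize_in_batches(animations, batch_size):
    # Only `batch_size` animations are held in memory per forward pass,
    # which avoids one giant allocation for the whole dataset.
    out = []
    for i in range(0, len(animations), batch_size):
        out.append(categorize(animations[i:i + batch_size]))
    return np.concatenate(out)

# 50 animations, each 12 frames of 8x8 RGB (illustrative sizes only)
animations = np.random.rand(50, 12, 8, 8, 3).astype('float32')
print(categorize_in_batches(animations, batch_size=8).shape)  # (50,)
```

The batched result is identical to the all-at-once result; only the peak memory differs.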
Hi yujiahu415, Thanks for your reply
2023-04-13 09:24:05.348265: W tensorflow/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 15335424000 exceeds 10% of free system memory.
2023-04-13 09:24:06.814114: W tensorflow/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 15335424000 exceeds 10% of free system memory.
thank you very much!!
Thank you for the information! The issue was very likely due to the big batch size for categorizing behaviors. The duration of the animations for your behaviors is long (120 frames), so the memory consumption was huge. I changed the batch size in LabGym v1.8 to trade memory for processing speed, but it seems this strategy did not work well. Anyway, I have just released a new version, v1.8.1, which reverses that change. Can you try it and let me know whether the new version fixes this issue? Thanks! You can use 'pip install --upgrade LabGym' or 'python3 -m pip install --upgrade LabGym' to get the latest version.
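As a rough illustration of the arithmetic (the batch size and frame dimensions below are assumptions for illustration, not the actual settings), a batch of float32 animations of shape (batch, duration, height, width, channels) quickly reaches the ~15.3 GB allocation shown in the warning:

```python
# Back-of-envelope estimate of a batch's memory footprint,
# assuming float32 (4 bytes) animations laid out as
# (batch, duration, height, width, channels).
def batch_bytes(batch, duration, height, width, channels=3, dtype_bytes=4):
    return batch * duration * height * width * channels * dtype_bytes

# e.g. a hypothetical batch of 100 animations, 120 frames each, at 320x320 RGB:
gb = batch_bytes(100, 120, 320, 320) / 1e9
print(f"{gb:.1f} GB")  # 14.7 GB, on the order of the logged 15,335,424,000 bytes
```

Halving the batch size halves this number, which is why shrinking the batch removes the warning at the cost of more forward passes.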
By the way, if you want to increase the processing speed (independent of the TensorFlow error), you may try the following:
Thanks for your advice and great job!!!! I have now analyzed my video, and the software reported a satisfactory result. But the console reported this error:

2023-04-15 13:18:35.936582 Video annotation completed!
Exporting the raster plot for this analysis batch...
2023-04-15 13:22:30.219829 The raster plot stored in: G:\labgym test\m8-20221229-social\result
Traceback (most recent call last):
  File "D:\Anaconda\envs\labgym\lib\site-packages\LabGym\gui_analyzers.py", line 693, in analyze_behaviors
    all_summary=pd.concat(all_summary,ignore_index=True)
  File "D:\Anaconda\envs\labgym\lib\site-packages\pandas\util\_decorators.py", line 311, in wrapper
    return func(*args, **kwargs)
  File "D:\Anaconda\envs\labgym\lib\site-packages\pandas\core\reshape\concat.py", line 347, in concat
    op = _Concatenator(
  File "D:\Anaconda\envs\labgym\lib\site-packages\pandas\core\reshape\concat.py", line 404, in __init__
    raise ValueError("No objects to concatenate")
ValueError: No objects to concatenate
Can you tell me why this error happened? And does it have any influence on my result data?
Thank you for the feedback! As for the new issue, it might be because you didn't select any behavior parameters for analysis, so it couldn't produce the outputs for those parameters. This issue will not affect your analysis. But I have fixed it now. Please upgrade LabGym again to v1.8.2 using 'pip install --upgrade LabGym' or 'python3 -m pip install --upgrade LabGym', and let me know if you still encounter this error message. Thanks again!
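For the record, the error is easy to reproduce: pandas raises ValueError when pd.concat is given an empty list, which is exactly what happens when no per-parameter DataFrames were collected. Below is a minimal reproduction plus one possible defensive guard (a sketch of the idea, not LabGym's actual fix):

```python
import pandas as pd

# No behavior parameters selected -> no DataFrames collected.
all_summary = []

# pd.concat([]) raises ValueError("No objects to concatenate").
try:
    pd.concat(all_summary, ignore_index=True)
except ValueError as e:
    print(e)

# One possible guard: fall back to an empty DataFrame instead of raising.
if all_summary:
    summary = pd.concat(all_summary, ignore_index=True)
else:
    summary = pd.DataFrame()
print(summary.empty)  # True
```

Since the error occurs after the raster plot is exported, it only affects the summary spreadsheet step, consistent with the analysis results being intact.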
During the analysis (categorizing behaviors), my Spyder console reported an error: "W tensorflow/tsl/framework/cpu_allocator_impl.cc:83] Allocation of 15335424000 exceeds 10% of free system memory." Can you explain why it reports this error? My device properties: Processor: 12th Gen Intel(R) Core(TM) i5-12600K 3.70 GHz; Installed RAM: 128 GB (128 GB usable)
Also, during the analysis, progress is very slow: it takes about half an hour to analyze a 1 min video. How can I make it faster? Thank you!!!