niciBume / Cat_Prey_Analyzer

Cat Prey Image-Classification with deep learning
MIT License

OutOfMemoryError #31

Open MIG1989 opened 1 month ago

MIG1989 commented 1 month ago

Hey,

unfortunately I have been getting the following error for a few days now (it worked fine before). Are you familiar with this problem, and do you know what I can do about it?

```
Haar_time: 0.31
No Face Found...
Total Runtime: 1.4570460319519043
Runtime: 1.4571459293365479
Timestamp at Done Runtime: 2024_09_30_07-00-48.825246
Overhead: 32.533883
CUMULUS: 0
Working the Queque with len: 44
2024_09_30_07-00-17.222103
Traceback (most recent call last):
  File "cascade.py", line 919, in <module>
    sq_cascade.queque_handler()
  File "cascade.py", line 357, in queque_handler
    self.queque_worker()
  File "cascade.py", line 246, in queque_worker
    cascade_obj = self.feed(target_img=self.main_deque[self.fps_offset][1], img_name=self.main_deque[self.fps_offset][0])[1]
  File "cascade.py", line 437, in feed
    single_cascade = self.base_cascade.do_single_cascade(event_img_object=target_event_obj)
  File "cascade.py", line 496, in do_single_cascade
    dk_bool, cat_bool, bbs_target_img, pred_cc_bb_full, cc_inference_time = self.do_cc_mobile_stage(cc_target_img=cc_target_img)
  File "cascade.py", line 644, in do_cc_mobile_stage
    pred_cc_bb_full, cat_bool, inference_time = self.cc_mobile_stage.do_cc(target_img=cc_target_img)
  File "/home/pi/CatPreyAnalyzer/model_stages.py", line 98, in do_cc
    preprocessed_img = self.resize_img(input_img=target_img)
  File "/home/pi/CatPreyAnalyzer/model_stages.py", line 93, in resize_img
    img = cv2.cvtColor(input_img, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(4.1.0) /home/pi/opencv-python/opencv/modules/core/src/alloc.cpp:55: error: (-4:Insufficient memory) Failed to allocate 15116544 bytes in function 'OutOfMemoryError'
```
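For context, the failed allocation of 15,116,544 bytes is exactly 2592 × 1944 × 3, i.e. one full-resolution frame of the 5 MP Raspberry Pi camera with three colour channels, so `cv2.cvtColor` appears to be allocating a complete full-resolution RGB copy at a moment when free memory is already tight. A minimal, hypothetical sketch of reducing that peak allocation by downscaling the frame before the colour conversion (the `shrink_frame` helper and the target width are not part of the project):

```python
import cv2

def shrink_frame(frame_bgr, max_width=1280):
    """Downscale a BGR frame so cvtColor allocates a much smaller buffer.

    A full 2592x1944x3 frame needs ~15 MB per copy; at 1280 px width the
    copy is roughly 4x smaller, which lowers the cascade's peak memory use.
    """
    h, w = frame_bgr.shape[:2]
    if w <= max_width:
        return frame_bgr
    scale = max_width / float(w)
    new_size = (max_width, int(round(h * scale)))
    return cv2.resize(frame_bgr, new_size, interpolation=cv2.INTER_AREA)

# e.g. inside resize_img(), before the conversion:
# input_img = shrink_frame(input_img)
# img = cv2.cvtColor(input_img, cv2.COLOR_BGR2RGB)
```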

niciBume commented 1 month ago

I guess you ran out of memory? Does rebooting solve the problem?

MIG1989 commented 1 month ago

this isn't a disk-space problem (I have a 256 GB SD card) and only 4% is in use. Here is the `df -h` output:

```
Filesystem      Size  Used Avail Use% Mounted on
/dev/root       235G  7.3G  218G   4% /
devtmpfs        3.7G     0  3.7G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  8.6M  3.9G   1% /run
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
/dev/mmcblk0p1  253M   49M  204M  20% /boot
tmpfs           785M  4.0K  785M   1% /run/user/1000
```

Is this a RAM problem?

niciBume commented 1 month ago

Could be. Does rebooting solve the problem?

MIG1989 commented 1 month ago

Restarting the script (catCam_starter.sh) solves the problem, but I get this error every 4-5 cat detections and have to fix it manually, which is a problem when I'm not at home.

Any ideas what I can do?
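Until the root cause is found, one possible stop-gap (a minimal sketch, not part of the repo; the script path, restart delay, and logging are assumptions) is a small supervisor loop that relaunches catCam_starter.sh whenever the process dies, so no manual intervention is needed:

```python
import subprocess
import time

# Hypothetical supervisor for the starter script; adjust the path to your setup.
SCRIPT = "/home/pi/CatPreyAnalyzer/catCam_starter.sh"
RESTART_DELAY_S = 10  # assumed cool-down between restarts

while True:
    print("Starting " + SCRIPT, flush=True)
    result = subprocess.run(["bash", SCRIPT])
    print("Script exited with code {} - restarting".format(result.returncode), flush=True)
    time.sleep(RESTART_DELAY_S)
```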

MIG1989 commented 1 month ago

I had another problem with the crontab: the script is not executed after startup. If I start catCam_starter.sh manually, the following error is shown.

```
Executing CatPreyAnalyzer
2024-10-01 10:36:13.877922: E tensorflow/core/platform/hadoop/hadoop_file_system.cc:132] HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "cascade.py", line 20, in <module>
    from CatPreyAnalyzer.model_stages import PC_Stage, FF_Stage, Eye_Stage, Haar_Stage, CC_MobileNet_Stage
  File "/home/pi/CatPreyAnalyzer/model_stages.py", line 6, in <module>
    from object_detection.utils import label_map_util
ModuleNotFoundError: No module named 'object_detection'
```
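This ModuleNotFoundError usually means the TensorFlow object detection API (the `object_detection` package from the tensorflow/models repo) is not on PYTHONPATH in the environment the script runs in; cron in particular does not load the interactive shell profile where such exports typically live. A hedged sketch of making the import independent of the shell environment (the path below is an assumption; adjust it to wherever tensorflow/models/research actually lives on your Pi):

```python
import sys

# Hypothetical path: wherever the tensorflow/models repo was cloned.
OBJECT_DETECTION_ROOT = "/home/pi/tensorflow/models/research"

# Prepend it so "from object_detection.utils import label_map_util" resolves
# even when the script is started by cron without the usual shell profile.
if OBJECT_DETECTION_ROOT not in sys.path:
    sys.path.insert(0, OBJECT_DETECTION_ROOT)

from object_detection.utils import label_map_util
```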

niciBume commented 1 month ago

Are you doing something different than usual with the crontab, since it was working before?

As for the original error appearing every 4-5 detections: monitor the RAM usage with htop. Is a different process hogging it? Was there a significant OS update?
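If an interactive htop session is not practical, a minimal sketch of logging free RAM over time (reading /proc/meminfo, so it needs no extra packages; the log path and interval are assumptions) can show whether memory leaks slowly or is taken by another process:

```python
import time
from datetime import datetime

LOG_PATH = "/home/pi/mem_usage.log"   # assumed log location
INTERVAL_S = 60                       # assumed sampling interval

def available_mb():
    """Return MemAvailable from /proc/meminfo in megabytes."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) // 1024  # value is reported in kB
    return -1

while True:
    with open(LOG_PATH, "a") as log:
        log.write("{} MemAvailable={} MB\n".format(datetime.now().isoformat(), available_mb()))
    time.sleep(INTERVAL_S)
```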

MIG1989 commented 3 weeks ago

I lowered the resolution of the picture and now it works fine.
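For reference, if frames are captured with the picamera library (as is typical on a Raspberry Pi), the resolution can be lowered at the source, which shrinks every downstream buffer in the cascade. A minimal sketch; the chosen resolution and framerate are assumptions, and where the camera is actually configured in this project may differ:

```python
import picamera

# Hypothetical settings: capture at 720p instead of the full 2592x1944 sensor
# resolution, so every frame handed to the cascade is roughly 5-6x smaller.
camera = picamera.PiCamera()
camera.resolution = (1280, 720)   # assumed target resolution
camera.framerate = 10             # assumed frame rate
```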

The crontab was never working; I didn't have the time to look into it. I also tried all the other solutions for starting a script after booting, but nothing worked. Always the same problem.