Closed arpgh closed 9 years ago
The out-of-memory issue was caused by adding a large number of processes to the Pool
while parallelizing dump patches in this commit:
The following changes were made so that dump patches would be fast without causing out-of-memory issues:
CurationManager.py
VideoReader.py
objects to save frames in parallel and extract patches from them.

CurationManager.aggregateCurationPatches() also needs to be parallelized
so that curation is parallelized throughout.
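The bounded-Pool approach described above can be sketched as follows. This is a minimal illustration, not the actual CurationManager/VideoReader code: `extract_patches_from_frame`, `dump_patches`, and the frame IDs are hypothetical stand-ins. The idea is to cap the worker count at a fixed number and stream tasks through the pool, rather than spawning one process per frame:

```python
import multiprocessing


def extract_patches_from_frame(frame_id):
    # Hypothetical stand-in for the real per-frame patch extraction.
    return [(frame_id, patch_idx) for patch_idx in range(3)]


def dump_patches(frame_ids, num_workers=4):
    # Cap the pool at a fixed number of workers instead of one process
    # per task, and stream results with imap_unordered so only a bounded
    # amount of work is in flight at any time.
    results = []
    with multiprocessing.Pool(processes=num_workers) as pool:
        for patches in pool.imap_unordered(extract_patches_from_frame,
                                           frame_ids, chunksize=8):
            results.extend(patches)
    return results


if __name__ == "__main__":
    patches = dump_patches(range(10))
    print(len(patches))  # 3 patches per frame, 10 frames -> 30
```

Using `imap_unordered` with a `chunksize` keeps memory bounded because completed results are consumed as they arrive instead of all being buffered at once.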
Parallelized. Merged to development.
Note: Previously, we re-wrote the self.curationBboxes[classId] data structure after aggregateCurationPatches. We no longer do that because of the parallelization. While that data structure is only used by aggregateCurationPatches, and only used once, in the future care should be taken if modifying that global data structure. In such a case, instead of relying on global data, pass it in through function calls.
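The reason for passing data through function calls rather than globals is that Pool workers are separate processes: each worker gets a copy of the parent's state, so mutations to a global on either side are not visible to the other. A minimal sketch of the pass-through style, with hypothetical names (`aggregate_class_patches`, `aggregate_all`) standing in for the real aggregateCurationPatches logic:

```python
import multiprocessing


def aggregate_class_patches(args):
    # Receive the bboxes for one class explicitly as an argument
    # instead of reading a global like self.curationBboxes; a child
    # process only sees a copy of parent state, so relying on (or
    # mutating) a global would silently go wrong.
    class_id, bboxes = args
    return class_id, len(bboxes)


def aggregate_all(curation_bboxes, num_workers=2):
    # Each (class_id, bboxes) pair is pickled and shipped to a worker.
    work = list(curation_bboxes.items())
    with multiprocessing.Pool(processes=num_workers) as pool:
        return dict(pool.map(aggregate_class_patches, work))


if __name__ == "__main__":
    counts = aggregate_all({0: [(1, 2, 3, 4)],
                            1: [(5, 6, 7, 8), (9, 10, 11, 12)]})
    print(counts)  # {0: 1, 1: 2}
```

Everything passed to a worker this way must be picklable, which is another reason to keep the per-call payload explicit and small.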
Branch: Development
Video: wc14-NetCos-xtraTm.mkv
Path: gpu2:/mnt/data/wc14cls/training/0seed-cleaned/vdo-test/wc14-NetCos-xtraTm/
Log with error: runlog-dump_patches_for_curation.txt
Json folder used: json
Error: