Closed: mzur closed this issue 7 years ago
Once the issue is resolved, restart the failed jobs in the queue.
The `delphiGather` script works fine now, but `delphiApply` crashes with a core dump. We have to check `delphiApply` with output from the new `delphiGather` manually.
Note: A core dump on Solaris corresponds to a `MemoryError` on a typical OS.
Since the volume where the LP detection failed has a very large number of manually set laser points, the distance matrix of the `delphiApply` script gets huge. We have to make this more memory-efficient.
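The usual culprit here is materializing the full pairwise distance matrix at once. As a hedged sketch only (the actual `delphiApply` code is not shown in this thread, and `min_distances_chunked` is a hypothetical helper), distances can be computed in chunks so peak memory scales with the chunk size rather than with the full number of points:

```python
import numpy as np

def min_distances_chunked(points, refs, chunk=1024):
    """Nearest-reference distance for each point, without an N x M matrix.

    Processes `points` in chunks, so peak memory is roughly
    chunk * len(refs) floats instead of len(points) * len(refs).
    """
    points = np.asarray(points, dtype=np.float32)
    refs = np.asarray(refs, dtype=np.float32)
    out = np.empty(len(points), dtype=np.float32)
    for start in range(0, len(points), chunk):
        block = points[start:start + chunk]
        # (chunk, M) distance block; freed again after each iteration
        d = np.linalg.norm(block[:, None, :] - refs[None, :, :], axis=-1)
        out[start:start + chunk] = d.min(axis=1)
    return out
```

Using `float32` instead of the NumPy default `float64` additionally halves the footprint of each block.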
Continued in #15 as this is a separate issue.
We've got a bug in the `delphiGather` script, probably a memory issue because there are too many images with too many manually annotated laser points. The error says:

Our best guess is that `lps` and `lpnegativ` get too large. Memory for the images read into `lpimg` is freed as it should be.

Here is an example input file that can reproduce the issue. If the `delphiGather` script is called with this file, the memory consumption increases and increases until it is exhausted and the script crashes. Command on the cluster:
```
python -i delphiGather.py input.txt
```
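To test the guess that `lps` and `lpnegativ` grow without bound, their actual footprint can be measured rather than inferred from `top`. A minimal sketch, assuming (hypothetically) that they are Python lists of NumPy arrays; `list_footprint_mb` and the patch shape are illustrative, not taken from the script:

```python
import sys
import numpy as np

def list_footprint_mb(arrays):
    """Approximate memory held by a list of NumPy arrays, in MiB.

    sys.getsizeof covers the list object itself; each array's data
    buffer is counted via its nbytes attribute.
    """
    total = sys.getsizeof(arrays) + sum(a.nbytes for a in arrays)
    return total / 2**20

# Hypothetical illustration: 1000 patches of 64x64 float64 pixels.
# Each patch holds 64 * 64 * 8 bytes = 32 KiB, so about 31 MiB in total.
lps = [np.zeros((64, 64)) for _ in range(1000)]
```

Logging this value once per processed image would show directly whether the lists are responsible for the growth.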
Note: The script didn't actually crash in my tests due to memory exhaustion because the cluster has so much memory. I killed it once it consumed ~15 GB of memory. `top` on `wayne` with 528 GB of RAM:
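Rather than watching `top` by hand, the script could log its own peak resident set size. A sketch using the standard-library `resource` module (Unix only; note that Linux reports `ru_maxrss` in kilobytes while macOS reports bytes, so the unit should be checked on the cluster's OS as well):

```python
import resource
import sys

def peak_rss_mb():
    """Peak resident set size of this process, in MiB."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # Linux reports kilobytes, macOS reports bytes
    if sys.platform == 'darwin':
        peak /= 1024
    return peak / 1024

print('peak RSS: %.1f MiB' % peak_rss_mb())
```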