Closed louisquilley closed 2 years ago
It could be a bug. Unfortunately, Alejandro and Marc are on vacation right now.
What tile-memory limit do you specify (tile-memory-limit=??)? You could try increasing it. I rather doubt that the number of pixels alone is the problem.
Feel free to put together a test dataset and I can give it a try, or Alejandro or Marc can later.
As for the terminal, the command "reset" should restore it. Also, if you set "progress-bar-disable=1" there won't be a progress bar and the exit should be cleaner.
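The two options mentioned above can go straight into a sourcextractor++ configuration file. A minimal sketch (the option names come from this thread; the value 2048 is purely illustrative, taken from the follow-up comment below):

```
# sourcextractor++ configuration fragment (sketch)
tile-memory-limit=2048    # tile cache size in MB; raise it if runs abort
progress-bar-disable=1    # no progress bar, cleaner terminal on exit
```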
Thank you for your answer. Indeed, increasing the tile-memory-limit from the default value of 512 to 2048 made things work. I am still a bit puzzled about the memory usage of some images: five images of size 8192x8192 are fine with a tile-memory-limit of 512, while for the two remaining ones I tried 1024, it still failed, and I had to increase it to 2048. Is there a way to know how much memory will be needed?
Well, SE++ should run with only a small amount of RAM, though slowly, since it is then limited by I/O. In this sense @marcschefer will certainly be interested in this case (please keep the problematic dataset and configuration around).
It is difficult to say how much RAM is necessary for the tile-memory-limit; it really depends on the data, especially the amount of imaging data (detection and measurement) you run on. If the memory is available, I would always give it 4-8 GB, and for runs with really large pixel counts (~30 GB, run on a server) I give it 30-40 GB if available.
Keep in mind that the tile-memory-limit needs to hold not only the input data but also the many auxiliary images (convolved detection, thresholded detection, converted weight images, etc.).
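As a rough illustration of why the default 512 MB can fall short for an 8192x8192 frame, here is a back-of-envelope estimate. This is a sketch only: the number of auxiliary copies and the bytes per pixel are assumptions for illustration, not values taken from SE++ internals.

```python
def estimate_tile_mb(width, height, n_aux_images=5, bytes_per_pixel=4):
    """Rough MB estimate: the input frame plus n_aux_images auxiliary copies.

    n_aux_images and bytes_per_pixel are illustrative assumptions,
    not numbers taken from SE++ itself.
    """
    total_bytes = width * height * bytes_per_pixel * (1 + n_aux_images)
    return total_bytes / (1024 * 1024)

# A single 8192x8192 float32 frame with five auxiliary copies already
# needs ~1536 MB, well above the default tile-memory-limit of 512.
print(estimate_tile_mb(8192, 8192))  # -> 1536.0
```

Under these assumptions, even one large frame overflows a 512 MB cache once its auxiliary images are counted, which is consistent with the fix reported earlier in this thread.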
Hi,
Thanks for the report. So, if I understand correctly, a low value for the tile cache causes a crash (likely an abort or a segfault), but increasing the cache size makes it work, right?
Sorry for the late answer (I was on vacation), but you understood the issue correctly.
Hello, I am reopening this issue because I am still having memory problems with my largest images. This time, when sourcextractor++ fails it prints a message, so it is clearer than last time, but I still don't know how to solve it. I am facing two cases:
Thank you for your help. Best regards, Louis
Hi,
I think this should have been fixed by #394. Can you install the latest build from develop and give it a try?
The core dump part should be fixed with #394.
@louisquilley please note that fmf_iterations = 0 means the minimizer could not do its job, and there is an error code giving the reason why it stopped. Technically this is not an error in SE++.
What you can do then is try the gsl minimizer. I have had cases where gsl could fit objects that levmar could not.
Looks like this was solved.
Hello, I am currently having an issue when running isophotal measurements on a large number of galaxy images. It only happens with some images of size 8192px x 8192px, not all of them; the same process works fine for all my images smaller than 8192px x 8192px. Sourcextractor++ starts the segmentation and it progresses well, as seen from the green bar filling up at the bottom of the screen, but after 1 or 2 minutes it usually stops and drops me back to the terminal. I have attached a screenshot of how it looks at the end. Note that the terminal is then broken: I can't scroll back up and the blinking text cursor disappears.
The output files (partition, segmentation, and catalogue) are created, but the partition and catalogue are empty, and the segmentation looks like the examples below: you can see the line where it stopped. No filtered/background/thresholded output files are created.
Thank you in advance for your help. I remain available for any further information about the configuration I used. Louis Quilley