sct-pipeline / csa-atrophy

Evaluate the sensitivity of atrophy detection with SCT
https://csa-atrophy.readthedocs.io/
MIT License

Possible transfo file corruption due to parallel writing? #50

Closed: PaulBautin closed this issue 4 years ago

PaulBautin commented 4 years ago

I am not able to reproduce issue #49. I have implemented the suggestion from issue #45.

DONE:

Fix #49, Fix #45

jcohenadad commented 4 years ago

@PaulBautin I'm launching another processing run on Compute Canada from your branch to see if it solves the issue.

PaulBautin commented 4 years ago

OK, I am looking into https://github.com/neuropoly/spinalcordtoolbox/issues/2859.

jcohenadad commented 4 years ago

@PaulBautin Processing finished; here is the zipped log, I'll let you look into it: log_results_csa_t1_20200822.zip

jcohenadad commented 4 years ago

@PaulBautin It seems like the one-transfo-per-subject strategy solved the problem. Can you confirm by looking at the log files? Where do the multiple err.* files come from?
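
For context, a minimal sketch of what a one-transfo-per-subject write strategy could look like in Python (function and file names are hypothetical, not the actual csa-atrophy code): each parallel subject job writes its own small CSV, so no two processes ever touch the same file and concurrent writes cannot corrupt each other's rows.

```python
# Hypothetical sketch of a one-transfo-per-subject write (not the actual csa-atrophy code).
import csv
from pathlib import Path

def write_subject_transfo(results_dir, subject, rescalings):
    """Write one transfo file for `subject`; `rescalings` maps rescale factor -> transfo value."""
    out = Path(results_dir) / f"transfo_{subject}.csv"
    out.parent.mkdir(parents=True, exist_ok=True)
    with out.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["rescale", "transfo"])
        for rescale, transfo in sorted(rescalings.items()):
            writer.writerow([rescale, transfo])
    return out

# The per-subject files can then be concatenated serially once all parallel jobs are done.
```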

PaulBautin commented 4 years ago

For 31 out of 85 subjects, the error is due to https://github.com/neuropoly/spinalcordtoolbox/issues/2859. The other subjects show an unidentified error in their log.

A surprising fact: in all non-err* files I have looked at, the transfo values were already present for each rescaling, whereas in the files with an unidentified err*, the transfo values were often missing.

@jcohenadad did you re-run the process on an already existing results directory? Maybe the output results file was not overwritten?
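
To illustrate the overwrite question above (a hedged sketch with a placeholder file name, not the real csa-atrophy output): a results file opened in append mode keeps rows from a previous run, whereas opening it in write mode truncates it, so stale transfo values would only survive a re-run in the first case.

```python
# Hypothetical illustration of how stale values could persist across re-runs
# (placeholder file name, not the real csa-atrophy output).
from pathlib import Path

results_file = Path("results/csa_transfo.csv")
results_file.parent.mkdir(parents=True, exist_ok=True)

# Append mode: rows written by a previous run stay in the file across re-runs.
with results_file.open("a") as f:
    f.write("sub-01,1.00,0.98\n")

# Write mode: truncates on open, so every run starts from a clean file.
with results_file.open("w") as f:
    f.write("subject,rescale,transfo\n")
```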

PaulBautin commented 4 years ago

@jcohenadad, I think we can merge this PR; it seems like the one-transfo-per-subject strategy solved the problem. I also think we should relaunch the Compute Canada processing from scratch (with a new results directory), since reusing an already created results directory is not faster.

jcohenadad commented 4 years ago

@PaulBautin conflicts ^