Open bacaron opened 2 years ago
I think the issue is that memory usage depends entirely on the size of the input tract. Even if the classification structure or individual tract you're concerned with is very small, the function still loads the entire tractogram at some point (the new trx format may be able to get around this). In any case, this could cause issues with large tractograms.
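One way around the "load the whole tractogram" problem, regardless of file format, is to process streamlines as a stream rather than materializing them all. This is just a sketch of the idea, not the app's actual code; `fake_tractogram` is a hypothetical stand-in for a lazy reader (e.g., nibabel's streamlines loader with `lazy_load=True`):

```python
from typing import Iterable, Iterator, List, Tuple

Point = Tuple[float, float, float]

def stream_lengths(streamlines: Iterable[List[Point]]) -> Iterator[float]:
    """Yield the length of each streamline without keeping them all in memory."""
    for sl in streamlines:
        total = 0.0
        for (x0, y0, z0), (x1, y1, z1) in zip(sl, sl[1:]):
            total += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
        yield total

def fake_tractogram(n: int) -> Iterator[List[Point]]:
    """Hypothetical lazy reader: yields one streamline at a time, so peak
    memory is one streamline rather than the full tractogram."""
    for i in range(n):
        yield [(0.0, 0.0, 0.0), (float(i + 1), 0.0, 0.0)]

# Only one streamline is resident at a time, however large n gets.
lengths = list(stream_lengths(fake_tractogram(3)))  # → [1.0, 2.0, 3.0]
```

With this pattern the memory request wouldn't need to scale with tractogram size at all, which sidesteps the dynamic-allocation question below.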
In a previous issue, I discussed the possibility of dynamically scaling memory/node requests, but I believe that was considered unworkable at the time.
So in short, it's up to whoever wants to make the change, but it could affect particular users and use-cases.
Hey,
I'm noticing that the ppn=8 request is causing queue times to be quite long for this app. Is that number of threads actually necessary? Is anything within the app taking advantage of it?
If so, please feel free to close this issue. Otherwise, could we cut it down from 8 to 4?
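For reference, if the app requests resources through a standard PBS/Torque directive in its submit script (I'm assuming a typical setup here, not quoting this app's actual file), the change would be a one-line edit along these lines:

```
#!/bin/bash
# was ppn=8; requesting fewer cores per node generally shortens queue waits
#PBS -l nodes=1:ppn=4
```

The trade-off is only worthwhile if the app isn't genuinely using all 8 threads, hence the question above.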
Thanks, Brad