@DavidStirling noted that cancelling the raw2ometiff step in NGFF-Converter (by calling `interrupt()` on the thread that runs raw2ometiff) doesn't actually stop the thread until `PyramidFromDirectoryWriter.initialize()` finishes. For large HCS datasets, this can take a while. I was able to reproduce this behavior.
With this change included in a build of NGFF-Converter, cancelling the raw2ometiff step during initialization should now take effect much more quickly. The logs should include a line indicating that `InterruptedException` was thrown.
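The general pattern here can be sketched as follows. This is a hypothetical, simplified example (the class name, loop body, and message are illustrative, not taken from the raw2ometiff source): a long-running loop checks the thread's interrupt flag on each iteration and throws `InterruptedException`, so that a caller's `interrupt()` is honored promptly instead of only after the whole loop completes.

```java
// Hypothetical sketch of cooperative cancellation during a long
// initialization loop. Checking Thread.isInterrupted() on each
// iteration lets interrupt() stop the work promptly.
public class InterruptibleInit {

    static void initialize(int tileCount) throws InterruptedException {
        for (int i = 0; i < tileCount; i++) {
            // Check the interrupt flag before each unit of work;
            // throwing InterruptedException lets the caller abort early.
            if (Thread.currentThread().isInterrupted()) {
                throw new InterruptedException("initialization cancelled");
            }
            // ... per-tile initialization work would go here ...
        }
    }

    public static void main(String[] args) throws Exception {
        Thread worker = new Thread(() -> {
            try {
                // A very large count stands in for a slow initialize().
                initialize(Integer.MAX_VALUE);
                System.out.println("finished");
            } catch (InterruptedException e) {
                System.out.println("interrupted");
            }
        });
        worker.start();
        worker.interrupt();  // request cancellation right away
        worker.join();       // returns quickly because the loop checks the flag
    }
}
```

Without the flag check inside the loop, `interrupt()` would merely set the flag and the loop would run to completion, which is the behavior @DavidStirling observed.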
This feels like a not-great solution and like I'm missing something obvious, although this approach is at least similar to what `com.glencoesoftware.pyramid.LimitedQueue` already does. I don't have any better ideas at the moment, but happy to hear other thoughts (@sbesson / @chris-allan / @kkoz?)