smistad / FAST

A framework for high-performance medical image processing, neural network inference and visualization
https://fast.eriksmistad.no
BSD 2-Clause "Simplified" License

batch generation pipeline support #158

Closed · andreped closed this 2 years ago

andreped commented 2 years ago

Tested with the runPipeline method, using a patch-wise classifier in FastPathology on Windows 10.

It seems to do what it should. I observe that patches are rendered in sets, after each batch has finished processing. Increasing the max-batch-size makes this more apparent.

However, I did not observe GPU memory usage increasing much, if at all. Is inference performed per patch or per batch when the output of the ImageToBatchGenerator PO is passed to the NeuralNetwork PO?
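For reference, this is roughly how I am wiring it up (a minimal sketch assuming FAST's usual New()/setInputConnection pattern; setMaxBatchSize is my guess for the setter behind the m_maxBatchSize member, and the header paths, file names, and patch/batch sizes are illustrative):

```cpp
#include <FAST/Importers/WholeSlideImageImporter.hpp>
#include <FAST/Algorithms/ImagePatch/PatchGenerator.hpp>
#include <FAST/Algorithms/ImagePatch/ImageToBatchGenerator.hpp>
#include <FAST/Algorithms/NeuralNetwork/NeuralNetwork.hpp>

using namespace fast;

int main() {
    // Import a whole-slide image (filename is illustrative)
    auto importer = WholeSlideImageImporter::New();
    importer->setFilename("WSI.ndpi");

    // Generate patches from the image pyramid
    auto patchGenerator = PatchGenerator::New();
    patchGenerator->setPatchSize(256, 256);
    patchGenerator->setInputConnection(importer->getOutputPort());

    // Collect patches into batches before inference
    auto batchGenerator = ImageToBatchGenerator::New();
    batchGenerator->setMaxBatchSize(16); // assumed setter for m_maxBatchSize
    batchGenerator->setInputConnection(patchGenerator->getOutputPort());

    // Run the patch-wise classifier on each batch (model path is illustrative)
    auto network = NeuralNetwork::New();
    network->load("classifier.onnx");
    network->setInputConnection(batchGenerator->getOutputPort());

    // ... attach renderers or pull results from network->getOutputPort() here
}
```

My expectation was that batch-level inference would show up as a larger GPU memory footprint than patch-level inference, which is why the flat memory usage surprised me.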

Also note my comment below. I had to remove the mInputConnections.clear() call to make the PO work in a pipeline (see here). Is this correct? Why was it there in the first place?

Also note that something was wrong in the condition here: you probably want to throw an exception if m_maxBatchSize == -1, not 1. Supporting m_maxBatchSize = 1 also means this PO can be used with a batch size of 1, so I can "always" (?) use it together with PatchGenerator, which was not the case before when using TensorRT, AFAIK.
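Concretely, this is the check I would expect (a sketch; the exact exception message and surrounding code are illustrative):

```cpp
void ImageToBatchGenerator::execute() {
    // Throw only when the batch size was never set (default -1);
    // an explicit max batch size of 1 should remain valid.
    if(m_maxBatchSize == -1)
        throw Exception("Max batch size must be set on ImageToBatchGenerator");

    // ... otherwise, gather up to m_maxBatchSize patches into a batch
}
```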

andreped commented 2 years ago

Is this unit test run as part of the build? If so, that would be great, as I could then easily check whether this works cross-platform.

However, I believe the FAST build might fail for me, as I do not have access to some of the precompiled binaries that FAST depends on. Or maybe you have fixed this by now?