Currently, we only scale down inputs in the abdominal organs pipeline. There it is required, because the NN crashes with larger inputs. So far, the assumption has been that the other pipelines can accept inputs of any size.
However, this causes performance issues. For example, the bone segmentation takes much longer with large inputs; in a test run, the marching cubes step of that pipeline (compNumpy2Obj) took 45 sec.
Therefore, the downscaling component should be included in all pipelines, so that inputs are always scaled down to a maximum size. For the concrete limits, we should try different values and find a sweet spot between performance and quality.
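One possible shape for such a shared component is sketched below, assuming the pipelines pass NumPy volumes. The function name `downscale_to_max` and the default `max_size` of 256 are illustrative placeholders, not the actual values we should ship; the real limit is exactly what we need to tune per pipeline.

```python
import numpy as np
from scipy.ndimage import zoom


def downscale_to_max(volume: np.ndarray, max_size: int = 256) -> np.ndarray:
    """Scale the volume so that no axis exceeds max_size.

    Inputs already within the limit are returned unchanged, so small
    scans keep their full resolution.
    """
    largest = max(volume.shape)
    if largest <= max_size:
        return volume
    factor = max_size / largest
    # order=1 (trilinear interpolation) is a reasonable starting trade-off
    # between speed and quality; this is one of the knobs to experiment with.
    return zoom(volume, factor, order=1)
```

Placed at the front of each pipeline, this would cap the cost of downstream steps such as the marching cubes stage regardless of the scanner's native resolution.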