Shouldn't the patch size extracted from the volume be 16×16×16? The default in init.py is 256×256×16. When I tried to train my own model to segment the airway from CT images, the loss stayed consistently above 0.9. I tried adjusting the patch size, but the loss still did not decrease. Are there any other parameters I need to modify?
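For reference, this is roughly how I am overriding the patch size before training. The names here (Config, patch_size, etc.) are just my own placeholders, not necessarily the actual variables in the repo:

```python
# Minimal sketch of the settings I am changing; field names are hypothetical
# and may not match the project's real config in init.py.
from dataclasses import dataclass

@dataclass
class Config:
    patch_size: tuple = (16, 16, 16)   # changed from the 256x256x16 default
    batch_size: int = 2
    learning_rate: float = 1e-4
    num_epochs: int = 200

config = Config()
print(config)  # sanity-check that the overridden patch size is picked up
```

Even with the patch size set this way, the loss does not move, so I suspect another parameter also needs to change.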