Closed che85 closed 7 years ago
@fedorov I tested automatic preop prostate segmentation, which took around 5 minutes on my machine. I am not sure why it takes that long; in my opinion that's far too long when I compare it to the intraop segmentation. I am already using the "fastest" parameter of DeepInfer:
Setting a specific domain speeds up the segmentation by 1-2 minutes, but then I would have to implement an additional step that first displays the preop image to the user and asks whether an endorectal coil was used.
@mehrtash the output label does not occupy the same physical space as the input image, which causes problems when running the bias correction afterwards in SliceTracker. We are currently using the prostate segmentation label as a mask.
N4ITK MRI Bias correction standard error:
itk::ExceptionObject (0x104702d88)
Location: "unknown"
File: /Users/kitware/Dashboards/Nightly/Slicer-0-build/ITKv4/Modules/Core/Common/include/itkImageToImageFilter.hxx
Line: 241
Description: itk::ERROR: N4BiasFieldCorrectionImageFilter(0x104500c70): Inputs do not occupy the same physical space!
InputImage Origin: [-7.7010222e+01, -4.1919011e+01, -2.1569269e+01], InputImage_1 Origin: [-7.7005454e+01, -4.1224100e+01, -2.6082526e+01]
Tolerance: 1.0936000e-06
InputImage Spacing: [1.0936000e+00, 1.0936000e+00, 1.3999852e+01], InputImage_1 Spacing: [1.0936000e+00, 1.0936000e+00, 1.0936000e+00]
Tolerance: 1.0936000e-06
I might just resample the label map after running the automatic segmentation. But 5 minutes still sounds pretty long to me.
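For reference, the geometry check that makes N4 fail can be mimicked in a few lines. This is a minimal sketch using the origins and tolerance from the error message above; `same_physical_space` is an illustrative name, not ITK's actual API:

```python
# Origins and tolerance copied from the N4 error message above.
input_origin = (-77.010222, -41.919011, -21.569269)
label_origin = (-77.005454, -41.224100, -26.082526)
tolerance = 1.0936e-06

def same_physical_space(origin_a, origin_b, tol):
    """True if every origin component agrees within the given tolerance."""
    return all(abs(a - b) <= tol for a, b in zip(origin_a, origin_b))

# The label origin is off by millimeters, far beyond the micrometer-scale
# tolerance, hence the exception. Resampling the label onto the input
# image's grid would make the two geometries agree.
print(same_physical_space(input_origin, label_origin, tolerance))  # False
```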
I agree, 5 minutes is too long. We can probably do it manually in under 2 minutes.
Asking the user to confirm whether an e-coil was used is OK if it helps.
@che85 @fedorov For me, fast segmentation of the preop takes around 20 seconds on a 2015 MacBook Pro. There might be something wrong with your setup. If you want the same spacing, you should check off the smoothing option.
@che85 Also for domain select with_erc instead of automatic.
@mehrtash What setup could possibly be wrong on my Mac? You are right: when using the DeepInfer module directly, it works pretty fast. Can you spot anything wrong in the following code?
# Imports assumed to be available in the surrounding Slicer scripted module.
import json
import logging
import os
from collections import OrderedDict

import slicer
import DeepInfer


def _runDocker(self):
    logic = DeepInfer.DeepInferLogic()
    parameters = DeepInfer.ModelParameters()
    # Load the locally installed model description.
    with open(os.path.join(DeepInfer.JSON_LOCAL_DIR, "ProstateSegmenter.json"), "r") as fp:
        j = json.load(fp, object_pairs_hook=OrderedDict)
    iodict = parameters.create_iodict(j)
    dockerName, modelName, dataPath = parameters.create_model_info(j)
    logging.debug(iodict)
    inputs = {
        'InputVolume': self.inputVolume
    }
    # Create the output label map node that will receive the segmentation.
    outputLabel = slicer.vtkMRMLLabelMapVolumeNode()
    outputLabel.SetName(self.inputVolume.GetName() + "-label")
    slicer.mrmlScene.AddNode(outputLabel)
    outputs = {'OutputLabel': outputLabel}
    params = dict()
    params['Domain'] = 'Automatic'
    params['OutputSmoothing'] = 1
    params['ProcessingType'] = 'Fast'
    params['InferenceType'] = 'Single'
    params['verbose'] = 1
    logic.executeDocker(dockerName, modelName, dataPath, iodict, inputs, params)
    if logic.abort:
        return None
    logic.updateOutput(iodict, outputs)
    if self.colorNode:
        displayNode = outputLabel.GetDisplayNode()
        displayNode.SetAndObserveColorNodeID(self.colorNode.GetID())
    return outputLabel
@che85 Could you please update your local prostate-segmenter json file and the docker image? I made some changes last week and updated the model, and I suspect this may be due to the older version of the model/docker image. I can help you set it up on your computer if that helps. I tried calling the segmenter through the Python interactor (using the gist script I sent you earlier), which is the same as your code, and it again processed very fast.
But why would it run as expected in DeepInfer itself?
Also, what's that status column for? I have never seen it display anything. It would be nice if it showed a status like "outdated: your version of this model is out of date" or something similar.
@che85 You are right! It is a missing feature. Unfortunately, you have to do it manually right now.
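Until such a status check exists, the comparison could be done by hand. A minimal sketch, assuming the local and registry copies of the model json both carry the `docker`/`digest` field shown in the json posted in this thread; `model_up_to_date` is a hypothetical helper, not part of DeepInfer:

```python
import json

def model_up_to_date(local_json_text, registry_json_text):
    """Compare the docker image digests of two model description files."""
    local = json.loads(local_json_text)
    registry = json.loads(registry_json_text)
    return local["docker"]["digest"] == registry["docker"]["digest"]

# Toy inputs with placeholder digests.
local = '{"docker": {"digest": "sha256:aaa"}}'
registry = '{"docker": {"digest": "sha256:bbb"}}'
print(model_up_to_date(local, registry))  # False: local model is out of date
```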
@che85 I have no idea why it runs OK in the module. Have you compared the command-line prints? Is it the same if you assign the domain to BWH with ERC instead of Automatic?
I am downloading the new docker image now.
Please download deepinfer/prostate instead of deepinfer/deepinfer. You can remove deepinfer/deepinfer from your machine.
@mehrtash If the daemon is not running, only a warning is shown in the Slicer error console, with no feedback to the user. This text is not even visible from the Python console.
Warning: failed to get default registry endpoint from daemon (Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?). Using system default: https://index.docker.io/v1/
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
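A pre-flight check could surface this to the user before attempting the run. A minimal sketch: probing with `docker info` is a common way to test daemon reachability, and the function name is illustrative, not DeepInfer's API:

```python
import subprocess

def docker_daemon_running(docker_path="docker"):
    """Return True if `docker info` succeeds, i.e. the daemon is reachable.

    `docker_path` is an assumption for this sketch; DeepInfer resolves the
    full executable path itself.
    """
    try:
        result = subprocess.run([docker_path, "info"],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        return result.returncode == 0
    except OSError:
        # Executable not found or not runnable.
        return False

# Example: warn the user up front instead of burying the message in the log.
if not docker_daemon_running():
    print("Docker daemon is not running; please start Docker first.")
```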
@mehrtash Is there any automation that downloads the prostate segmenter docker image once DeepInfer is installed from the Extension Manager? I am asking because it would be complicated for the user to first have to go to DeepInfer to download the "right" docker image.
@mehrtash I am going to create some issues on DeepInfer whenever I notice something
@che85 yes, please create issues on deepinfer
SliceTracker needs 3-5 minutes, DeepInfer 20-30 seconds.
@Kmacneil0102 is also testing on Windows and it takes about the same on SliceTracker
docker run command:
----------------------------------------------------------------------------------------------------
['/usr/local/bin/docker', 'run', '-t', '-v', u'/Users/christian/.deepinfer/.tmp:/data', u'deepinfer/prostate', u'--InputVolume', u'/data/InputVolume.nrrd', u'--OutputLabel', u'/data/OutputLabel.nrrd', '--ModelName', u'prostate-segmenter', u'--InferenceType', 'Single', u'--Domain', 'BWH_WITH_ERC', u'--OutputSmoothing', u'--ProcessingType', 'Fast']
model: prostate-segmenter
Using TensorFlow backend.
starting deployment and inference...
Model type BWH WITH ERC (PREOP) selected.
2017-08-23 16:13:40.656829: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-23 16:13:40.656872: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-23 16:13:40.656880: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-23 16:13:40.656884: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-23 16:13:40.656888: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
1 vote(s) from 1 models required to consider as prostate label.
onApply
prerun...
docker run command:
----------------------------------------------------------------------------------------------------
[u'/usr/local/bin/docker', 'run', '-t', '-v', u'/Users/christian/.deepinfer/.tmp:/data', u'deepinfer/prostate', u'--InputVolume', u'/data/InputVolume.nrrd', u'--OutputLabel', u'/data/OutputLabel.nrrd', '--ModelName', u'prostate-segmenter', u'--InferenceType', u'Single', u'--Domain', u'BWH_WITH_ERC', u'--OutputSmoothing', u'--ProcessingType', u'Fast']
model: prostate-segmenter
Using TensorFlow backend.
starting deployment and inference...
Model type BWH WITH ERC (PREOP) selected.
2017-08-23 16:21:22.286422: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-23 16:21:22.286463: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-23 16:21:22.286471: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-23 16:21:22.286475: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-23 16:21:22.286479: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
1 vote(s) from 1 models required to consider as prostate label.
@mehrtash Do you have any idea what could cause that problem?
One difference I see is prerun... in the log. I just checked: during prerun, DeepInfer only iterates over a list of prerun_callbacks, which is empty, so effectively nothing happens while prerun is executed.
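In other words, the prerun step reduces to iterating an empty list. A trivial sketch of the pattern, where `prerun_callbacks` matches the name mentioned above and `run_prerun` is an illustrative stand-in:

```python
prerun_callbacks = []  # empty, as observed in DeepInfer

def run_prerun(callbacks):
    # Iterating an empty callback list is effectively a no-op,
    # so "prerun..." cannot account for the extra minutes.
    for callback in callbacks:
        callback()
    return len(callbacks)

print(run_prerun(prerun_callbacks))  # 0 callbacks executed
```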
@che85 Have you updated your docker image and json files?
Yes I did. See below
{
  "name": "Prostate Gland Segmenter",
  "number_of_inputs": 1,
  "task": "Segmentation.",
  "organ": "Prostate",
  "modality": "MRI",
  "train_test_data_details": "The model is trained on pelvic MRI scans of a 3T machine without use of an Endorectal Coil.",
  "briefdescription": "Whole-gland prostate segmentation in pelvic Axial T2-W MRI scans.",
  "detaileddescription": "",
  "website": "",
  "citation": "",
  "version": "0.1",
  "docker": {
    "dockerhub_repository": "deepinfer/prostate",
    "digest": "sha256:74177bd528e4dfe3e4dd5d5ca18f3580715245fe6e0c79b47d7b9fc15c227fef",
    "size": "6.77 GB"
  },
  "model_name": "prostate-segmenter",
  "data_path": "/data",
  "members": [
    {
      "name": "Domain",
      "type": "enum",
      "enum": ["BWH_WITH_ERC", "BWH_WITHOUT_ERC", "PROMISE12"],
      "detaileddescriptionSet": "Select the model that will be used for inference.\n",
      "iotype": "parameter"
    },
    {
      "name": "ProcessingType",
      "type": "enum",
      "enum": ["Fast", "Accurate"],
      "detaileddescriptionSet": "Accurate model uses more models to compute the segmentation while the fast model only uses one model.\n",
      "iotype": "parameter"
    },
    {
      "name": "InferenceType",
      "type": "enum",
      "enum": ["Single", "Ensemble"],
      "detaileddescriptionSet": "Accurate model uses more models to compute the segmentation while the fast model only uses one model.\n",
      "iotype": "parameter"
    },
    {
      "name": "InputVolume",
      "type": "volume",
      "iotype": "input",
      "voltype": "ScalarVolume",
      "detaileddescriptionSet": "Axial T2-W Prostate MRI.\n",
      "default": "std::vector<unsigned int>(3, 64)",
      "itk_type": "typename FilterType::SizeType"
    },
    {
      "name": "OutputLabel",
      "type": "volume",
      "iotype": "output",
      "voltype": "LabelMap",
      "detaileddescriptionSet": "Output labelmap for the segmentation results.\n",
      "default": "std::vector<unsigned int>(3, 64)",
      "itk_type": "typename FilterType::SizeType"
    },
    {
      "name": "OutputSmoothing",
      "type": "bool",
      "default": "false",
      "iotype": "parameter"
    }
  ]
}
I added a new type of message box that takes as parameters the name of the slice widget to mimic and the text to display.
I implemented this because once a regular message box is open, the user has no chance to scroll through the slices to see whether an endorectal coil was used or not.
@mehrtash Could you please get back to us regarding this issue? I have no idea why the preop segmentation is that slow.
@che85 Unfortunately, I cannot reproduce the problem. I tried it on both my desktop and laptop. Slicer-DeepInfer just creates the docker run command and executes it using subprocess, nothing more, so if the two commands are identical I cannot understand why one takes more time. Very strange! I added a verbose option to the prostate-segmenter json file so we can see what is going on. You don't have to update your docker image: just go to the DeepInfer module, connect to the cloud model registry, select the prostate gland segmenter, download it, and select yes in the prompt popup (it will only update your local json). Then restart Slicer. Make sure to include params['verbose'] = 1 in your script, and select the checkbox if you're running it through the GUI. Please run both experiments (GUI and SliceTracker) and give me the output of the Python interactor.
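One way to localize the slowdown would be to time the subprocess call itself: if the docker run takes ~20 seconds but the overall SliceTracker step takes minutes, the extra time is spent outside the container. A minimal sketch, where the command is a harmless stand-in rather than the real docker invocation:

```python
import subprocess
import sys
import time

# Stand-in command; in DeepInfer this would be the full docker run list.
cmd = [sys.executable, "-c", "pass"]

start = time.time()
subprocess.run(cmd, check=True)
elapsed = time.time() - start
print("subprocess took %.2f seconds" % elapsed)
```

Comparing this number between the GUI and the SliceTracker code path would show whether the difference is inside or outside the subprocess call.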
docker run command:
----------------------------------------------------------------------------------------------------
['/usr/local/bin/docker', 'run', '-t', '-v', u'/Users/christian/.deepinfer/.tmp:/data', u'deepinfer/prostate', u'--InputVolume', u'/data/InputVolume.nrrd', u'--OutputLabel', u'/data/OutputLabel.nrrd', '--ModelName', u'prostate-segmenter', u'--InferenceType', 'Single', u'--Domain', 'BWH_WITH_ERC', u'--OutputSmoothing', u'--verbose', u'--ProcessingType', 'Fast']
model: prostate-segmenter
Using TensorFlow backend.
starting deployment and inference...
-------------------------------------- INPUT AND OUTPUT PATHS --------------------------------------
input volume path: /data/InputVolume.nrrd
output label path: /data/OutputLabel.nrrd
----------------------------------------- INPUT IMAGE INFO -----------------------------------------
image spacing: (0.2734, 0.27339999999999987, 3.499963045120239)
image size: (512, 512, 28)
image pixel type: 16-bit signed integer
image normal fit: mu : 459.09, sigma: 360.51
Model type BWH WITH ERC (PREOP) selected.
Domain "BWH PREOP" selected. I will proceed with the specific model for this domain.
now running model with uid: 2017_08_13_14_36_46
creating segmenter ...
starting segmentation...
2017-08-25 19:17:04.623177: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 19:17:04.623242: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 19:17:04.623252: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 19:17:04.623261: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 19:17:04.623269: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
ensembling...
1 vote(s) from 1 models required to consider as prostate label.
--------------------------------------- SMOOTHING THE OUTPUT ---------------------------------------
downloading: https://raw.githubusercontent.com/DeepInfer/Model-Registry/master/CTBoneSegmenter.json...
downloading: https://raw.githubusercontent.com/DeepInfer/Model-Registry/master/NeedleFinder-cpu.json...
downloading: https://raw.githubusercontent.com/DeepInfer/Model-Registry/master/NeedleFinder-gpu.json...
downloading: https://raw.githubusercontent.com/DeepInfer/Model-Registry/master/ProstateNeedleFinder.json...
downloading: https://raw.githubusercontent.com/DeepInfer/Model-Registry/master/ProstateSegmenter.json...
downloading: https://raw.githubusercontent.com/DeepInfer/Model-Registry/master/ProstateSegmenterLegacy.json...
onApply
prerun...
docker run command:
----------------------------------------------------------------------------------------------------
['/usr/local/bin/docker', 'run', '-t', '-v', u'/Users/christian/.deepinfer/.tmp:/data', u'deepinfer/prostate', u'--InputVolume', u'/data/InputVolume.nrrd', u'--OutputLabel', u'/data/OutputLabel.nrrd', '--ModelName', u'prostate-segmenter', u'--InferenceType', u'Single', u'--Domain', u'BWH_WITH_ERC', u'--OutputSmoothing', u'--verbose', u'--ProcessingType', u'Fast']
model: prostate-segmenter
Using TensorFlow backend.
starting deployment and inference...
-------------------------------------- INPUT AND OUTPUT PATHS --------------------------------------
input volume path: /data/InputVolume.nrrd
output label path: /data/OutputLabel.nrrd
----------------------------------------- INPUT IMAGE INFO -----------------------------------------
image spacing: (0.2734, 0.27339999999999987, 3.499963045120239)
image size: (512, 512, 28)
image pixel type: 16-bit signed integer
image normal fit: mu : 459.09, sigma: 360.51
Model type BWH WITH ERC (PREOP) selected.
Domain "BWH PREOP" selected. I will proceed with the specific model for this domain.
now running model with uid: 2017_08_13_14_36_46
creating segmenter ...
starting segmentation...
2017-08-25 21:52:01.185323: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 21:52:01.185967: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 21:52:01.185990: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 21:52:01.186002: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-08-25 21:52:01.186012: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
ensembling...
1 vote(s) from 1 models required to consider as prostate label.
--------------------------------------- SMOOTHING THE OUTPUT ---------------------------------------
@che85 Thank you for running the experiment. If there is still a difference in execution time, I have no idea what is going on. Looking at the verbose output, I can say that nothing is wrong on the docker side (both runs call the same model with uid 2017_08_13_14_36_46 for inference, which is the lightweight fast model). So the problem is on the Slicer side, but I don't know how to catch it since I cannot reproduce it.
I suggest you two sit together to look into it (I can join too) after @che85 is back from vacation in 2 weeks or so.
I merged this PR since there is nothing wrong with the way DeepInfer is used. We need to investigate this later.
fixes #311
The following logic is executed:
Iterate through the series map and read the XML tag 'seriesDescription', matching 'AX' and 'T2'; that should identify the right image volume, given that the segmentation should be saved for the same image volume.
If no 'Segmentations' directory is available, but the image volume and targets were loaded successfully AND deep learning is activated in the SliceTracker settings, then notify the user and ask whether automatic segmentation should be executed.
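The series-selection step could look roughly like this. A hypothetical sketch: `find_axial_t2`, the series map shape, and the sample descriptions are illustrative, not SliceTracker's actual API:

```python
def find_axial_t2(series_map):
    """Return the key of the first series whose description contains
    both 'AX' and 'T2', or None if no such series exists."""
    for series_number, description in series_map.items():
        if "AX" in description and "T2" in description:
            return series_number
    return None

# Toy series map: {seriesNumber: seriesDescription}.
series_map = {
    "4": "AX T2 PROSTATE",
    "5": "SAG T2",
    "6": "AX DWI",
}
print(find_axial_t2(series_map))  # "4"
```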
depends on https://github.com/QIICR/SlicerDevelopmentToolbox/pull/20