Looks like your data is not getting read properly. Make sure the file path is valid. To better debug, you can try printing the stack trace inside the `except` block.
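For example (a minimal sketch; `file_path` and the reading call are placeholders for whatever train.py actually does inside its try block):

```python
import traceback
import SimpleITK as sitk

file_path = '../train-data/Case00.mhd'   # hypothetical path, adjust to your data location

try:
    img = sitk.ReadImage(file_path)       # placeholder for the reading code in train.py
except Exception:
    traceback.print_exc()                 # print the full stack trace instead of swallowing the error
    raise
```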
Hi, thank you for your reply. I solved this issue, but I have another one about the number of slices, as shown in the error below:
```
Traceback (most recent call last):
  File "train.py", line 361, in

Errors may have originated from an input operation.
Input Source operations connected to node IteratorGetNext:
 IteratorV2 (defined at train.py:49)

Original stack trace for 'IteratorGetNext':
  File "train.py", line 53, in
```
Each file has its own number of slices, so how can I reshape all the files to the same shape? Thank you.
Please use resizing.py to pre-process your data.
Thank you for your reply. I used the script resizing.py to resize the input data, but it had no effect: the script ran successfully, but the files' shapes don't change and I get the same error when I run train.py. The code of the resizing file is below:
```python
''' The code for resizing has been taken from
https://gist.github.com/zivy/79d7ee0490faee1156c1277a78e4a4c4 '''
import os
import glob
import numpy as np
import SimpleITK as sitk

def resample(img, new_size, interpolator):
    dimension = img.GetDimension()

    # Physical image size corresponds to the largest physical size in the training set, or any other arbitrary size.
    reference_physical_size = np.zeros(dimension)
    reference_physical_size[:] = [(sz - 1) * spc if sz * spc > mx else mx
                                  for sz, spc, mx in zip(img.GetSize(), img.GetSpacing(), reference_physical_size)]

    # Create the reference image with a zero origin, identity direction cosine matrix and dimension
    reference_origin = np.zeros(dimension)
    reference_direction = np.identity(dimension).flatten()
    reference_size = new_size
    reference_spacing = [phys_sz / (sz - 1) for sz, phys_sz in zip(reference_size, reference_physical_size)]

    reference_image = sitk.Image(reference_size, img.GetPixelIDValue())
    reference_image.SetOrigin(reference_origin)
    reference_image.SetSpacing(reference_spacing)
    reference_image.SetDirection(reference_direction)

    # Always use the TransformContinuousIndexToPhysicalPoint to compute an indexed point's physical coordinates as
    # this takes into account size, spacing and direction cosines. For the vast majority of images the direction
    # cosines are the identity matrix, but when this isn't the case simply multiplying the central index by the
    # spacing will not yield the correct coordinates resulting in a long debugging session.
    reference_center = np.array(
        reference_image.TransformContinuousIndexToPhysicalPoint(np.array(reference_image.GetSize()) / 2.0))

    # Transform which maps from the reference_image to the current img with the translation mapping the image
    # origins to each other.
    transform = sitk.AffineTransform(dimension)
    transform.SetMatrix(img.GetDirection())
    transform.SetTranslation(np.array(img.GetOrigin()) - reference_origin)

    # Modify the transformation to align the centers of the original and reference image instead of their origins.
    centering_transform = sitk.TranslationTransform(dimension)
    img_center = np.array(img.TransformContinuousIndexToPhysicalPoint(np.array(img.GetSize()) / 2.0))
    centering_transform.SetOffset(np.array(transform.GetInverse().TransformPoint(img_center) - reference_center))
    centered_transform = sitk.Transform(transform)
    centered_transform.AddTransform(centering_transform)

    # Using the linear interpolator as these are intensity images, if there is a need to resample a ground truth
    # segmentation then the segmentation image should be resampled using the NearestNeighbor interpolator so that
    # no new labels are introduced.
    return sitk.Resample(img, reference_image, centered_transform, interpolator, 0.0)

new_size = [144, 144, 50]
interp = sitk.sitkNearestNeighbor  # for labels
interp = sitk.sitkLinear           # for input features

for file in sorted(glob.glob('../train-data/Case*_segmentation.mhd')):
    file = file.replace('_segmentation', '')
    img = sitk.ReadImage(file)
    reshaped = resample(img, new_size, interp)
    sitk.WriteImage(reshaped, file)
    print(file, end='\r')
```
Please let me know. Thanks in advance.
Hello, I have trained and saved the parameters, but I want to know whether the final result of your prediction program is an array. Why not a distorted image? And why is my prediction result an empty array? I hope to get your reply.
> …resize the input data, but it had no effect: the script ran successfully, but the files' shapes don't change and I get the same error…
Please check whether the path you are using is correct or not. Try to debug the code. Also, keep in mind to use `interp = sitk.sitkLinear` together with `file = file.replace('_segmentation', '')` for the input images, and `interp = sitk.sitkNearestNeighbor` for the labels (as mentioned in the comments in the file).
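In other words, the loop at the bottom of resizing.py has to run twice, once per interpolator. A rough sketch of what that could look like, reusing the `resample` function and `new_size` defined in the script above:

```python
import glob
import SimpleITK as sitk
# reuses resample() and new_size from resizing.py above

# Pass 1: resample the segmentation labels with nearest-neighbour so no new label values appear.
for file in sorted(glob.glob('../train-data/Case*_segmentation.mhd')):
    img = sitk.ReadImage(file)
    sitk.WriteImage(resample(img, new_size, sitk.sitkNearestNeighbor), file)

# Pass 2: resample the intensity images with linear interpolation.
for file in sorted(glob.glob('../train-data/Case*_segmentation.mhd')):
    file = file.replace('_segmentation', '')  # switch from the label file to the matching image file
    img = sitk.ReadImage(file)
    sitk.WriteImage(resample(img, new_size, sitk.sitkLinear), file)
```

This keeps the label maps free of interpolated, non-integer label values while the intensity images still get a smooth resampling.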
> Hello, I have trained and saved the parameters, but I want to know whether the final result of your prediction program is an array. Why not a distorted image? And why is my prediction result an empty array? I hope to get your reply.
The output of the file predict.py is a list of arrays (the predictions). The list might be empty if the file path is incorrect, i.e. no image is getting read. Also, what do you mean by a distorted image?
Hi @amanbasu,
thank you for your previous replies.
I trained the model and saved the weights, but I don't know whether it finished successfully or not. Now I am trying to run the predict file and I got the following error:

```
Traceback (most recent call last):
  File "predict.py", line 266, in
```
If you have any idea about this error, please let me know. Otherwise, I want to ask you about the testing stage, because I want to measure the Dice coefficient on the test dataset. I am looking forward to hearing from you.
For the error, you can try restarting the TensorFlow session. Perhaps that might help.
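In TF 1.x-style code, "restarting the session" usually amounts to resetting the default graph and opening a fresh session before rebuilding the model; a minimal sketch (not specific to this repo):

```python
import tensorflow.compat.v1 as tf

tf.reset_default_graph()   # discard the old graph and any stale ops/iterators
sess = tf.Session()        # open a fresh session before rebuilding the model and loading weights
```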
Hi @amanbasu, thank you for your previous replies. I resolved the previous error by changing the TF version (I upgraded it), but now I have a compatibility problem. I made the necessary modifications, such as:

```python
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
```

In predict.py, I changed the line

```python
iterator = tf.data.Iterator.from_structure(valDataset.output_types, valDataset.output_shapes)
```

to

```python
iterator = tf.data.Iterator.from_structure(tf.compat.v1.data.get_output_types(valDataset), tf.compat.v1.data.get_output_types(valDataset))
```

but I get this error:

```
TypeError: Dimension value must be integer or None or have an index method, got value 'tf.float32' with type '<class 'tensorflow.python.framework.dtypes.DType'>'
```

I couldn't resolve it, please let me know.
```python
iterator = tf.data.Iterator.from_structure(tf.compat.v1.data.get_output_types(valDataset), tf.compat.v1.data.get_output_types(valDataset))
```
You are using `data.get_output_types` for both arguments. Perhaps that is the issue.
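The second argument should come from `get_output_shapes`, not `get_output_types` again. Roughly (a sketch, assuming `valDataset` is built the same way as in predict.py):

```python
iterator = tf.data.Iterator.from_structure(
    tf.compat.v1.data.get_output_types(valDataset),    # element dtypes
    tf.compat.v1.data.get_output_shapes(valDataset))   # element shapes (this was the wrong call before)
```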
Hi @amanbasu, thank you for your previous replies. The training is done, but for the testing I did not get any results; my goal is to get segmented images and calculate the Dice coefficient. Sorry if I'm bothering you with my questions, but I am new to this domain. Looking forward to hearing from you.
Kindly,
Try to look into the predict.py file. On line 265, the predictions are returned as a list. You need to save the results here.
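For example, something along these lines could save each prediction and compute the Dice coefficient against the ground truth; `predictions` and `labels` are placeholders for however you collect the model outputs and the reference masks in your setup:

```python
import numpy as np

# predictions: list of per-case probability arrays returned by predict.py (line 265)
# labels:      matching list of ground-truth segmentation arrays
for i, (pred, label) in enumerate(zip(predictions, labels)):
    pred_bin = (pred > 0.5).astype(np.uint8)           # threshold probabilities into a binary mask
    np.save('prediction_{}.npy'.format(i), pred_bin)   # save the segmentation to disk

    # Dice coefficient: 2*|A ∩ B| / (|A| + |B|)
    intersection = np.sum(pred_bin * label)
    dice = 2.0 * intersection / (np.sum(pred_bin) + np.sum(label) + 1e-7)
    print('case {}: dice = {:.4f}'.format(i, dice))
```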
Hi, I got an error at the line `sum(train_loss)/len(train_loss), sum(val_loss)/len(val_loss), time.time()-start_time))`, even though I put 40 samples in train-data and 10 samples in val-data. I think that len(train_loss) and len(val_loss) are zero. Please let me know if you have any idea.