Closed zjtggssg closed 1 year ago
Hi, does your script stop because of this StopIteration error? If not, you are fine. If it does, you need to adapt your data loader so that it does not stop once it has loaded all images of the training set.
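One common way to do that (used, for example, in OpenAI's guided-diffusion training code) is to wrap the DataLoader in a generator that restarts it every time it is exhausted, so `next()` never raises StopIteration. A minimal sketch; the name `cycle` is my own, not from this repository:

```python
def cycle(loader):
    """Yield batches forever by restarting the iterable each time it is exhausted."""
    while True:
        for batch in loader:
            yield batch

# Works with any iterable, including a torch DataLoader:
data_iter = cycle([1, 2, 3])
print([next(data_iter) for _ in range(5)])  # -> [1, 2, 3, 1, 2]
```

With this wrapper, `batch, cond = next(data_iter)` in `run_loop` can be called indefinitely, and the training loop is bounded by its step count rather than by the dataset length.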
Thank you for your reply.
I was debugging this morning and the problem seems to be `{AttributeError} '_SingleProcessDataLoaderIter' object has no attribute 'seqtypes_set'` here. Is this because there is an error in data loading?
Yes, you need to adapt the file bratsloader.py to your dataset, or store the data as indicated in the folder /data.
But I am using the ./data/training folder that you provided.
Could you give me the detailed error message? Where does this error occur?
This is the running error message:

```
Traceback (most recent call last):
  File "/home/ws/Desktop/Diffusion-based-Segmentation-main/guided_diffusion/train_util.py", line 179, in run_loop
    batch, cond = next(data_iter)
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 556, in _next_data
    index = self._next_index()  # may raise StopIteration
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 508, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ws/Desktop/Diffusion-based-Segmentation-main/scripts/segmentation_train.py", line 90, in
```

I then set a breakpoint at `data = self._next_data()`, and it raised this error:

```
{AttributeError} '_SingleProcessDataLoaderIter' object has no attribute 'seqtypes_set'
```
Can you print the length of your BRATS dataset in the script segmentation_train.py?
Hi, I am printing out a length of 0, which means it is not finding the files in the dataset.
OK, did you set all paths to the dataset correctly? The error happens because your data loader cannot find any files in the dataset.
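A quick way to verify the path is to walk it the same way bratsloader.py does: the loader only builds datapoints in leaf folders (directories with no subdirectories) that contain files. A small sketch, assuming the `./data/training` path from this thread; the helper name is mine:

```python
import os

def count_leaf_folders(directory):
    """Count leaf folders (no subdirectories) that contain files --
    the only folders the loader's os.walk loop actually reads."""
    n = 0
    for root, dirs, files in os.walk(directory):
        if not dirs and files:
            n += 1
    return n

# Path assumed from the thread; adjust to your setup.
if count_leaf_folders("./data/training") == 0:
    print("os.walk found nothing -- dataset length will be 0; check the path")
```

Note that `os.walk` silently yields nothing for a nonexistent or wrongly spelled path, which is exactly how a dataset of length 0 arises without any error message.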
I tried setting the absolute path, but I still can't read the data. I don't seem to get into the loop that reads the files:

```python
for root, dirs, files in os.walk(self.directory):
    # print("root: ", root)
    print(1)
    if not dirs:
        files.sort()
        datapoint = dict()
        # extract all files as channels
        for f in files:
            seqtype = f.split('_')[3]
            datapoint[seqtype] = os.path.join(root, f)
        assert set(datapoint.keys()) == self.seqtypes_set, \
            f'datapoint is incomplete, keys are {datapoint.keys()}'
        self.database.append(datapoint)
```
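For what it's worth, the loop above only builds datapoints in leaf directories (`if not dirs:`) and takes the sequence type from the fourth underscore-separated token of each filename. A quick sketch of that parsing with hypothetical filenames, not taken from the actual dataset:

```python
import os

# Hypothetical filenames; the real files must likewise carry the
# sequence type as the fourth underscore-separated token.
files = ["brats_train_001_t1_080_w.nii.gz",
         "brats_train_001_t2_080_w.nii.gz"]

datapoint = {}
for f in files:
    seqtype = f.split('_')[3]          # -> "t1", "t2", ...
    datapoint[seqtype] = os.path.join("/some/root", f)

print(sorted(datapoint))               # -> ['t1', 't2']
```

If your filenames have fewer underscores, the `[3]` indexing raises an IndexError, and if the extracted tokens don't match `self.seqtypes_set`, the assert in the loader fires.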
Thanks for your reply, I solved it.
I'm running the source code and I hit this problem; I haven't found a good solution. I hope you can clear up my doubts, thanks.

```
Traceback (most recent call last):
  File "/home/ws/Desktop/Diffusion-based-Segmentation-main/guided_diffusion/train_util.py", line 179, in run_loop
    batch, cond = next(data_iter)
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 556, in _next_data
    index = self._next_index()  # may raise StopIteration
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 508, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
StopIteration
```