JuliaWolleb / Diffusion-based-Segmentation

This is the official Pytorch implementation of the paper "Diffusion Models for Implicit Image Segmentation Ensembles".
MIT License

Hello, why does this error occur? #5

Closed zjtggssg closed 1 year ago

zjtggssg commented 1 year ago

I'm running the source code and I ran into the following problem. I haven't found a good solution and hope you can clear up my doubts, thanks.

```
Traceback (most recent call last):
  File "/home/ws/Desktop/Diffusion-based-Segmentation-main/guided_diffusion/train_util.py", line 179, in run_loop
    batch, cond = next(data_iter)
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 556, in _next_data
    index = self._next_index()  # may raise StopIteration
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 508, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
StopIteration
```

JuliaWolleb commented 1 year ago

Hi, does your script stop because of this StopIteration error? If not, you are fine. If it does, you need to adapt your data loader so that it does not stop once it has loaded all images of the training set.
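One common way to get a data loader that never raises `StopIteration` is to wrap it in an endlessly cycling generator. Below is a minimal sketch; `infinite_loader` is a hypothetical helper name (not part of this repository), and it works for any iterable, including a `torch.utils.data.DataLoader`:

```python
def infinite_loader(loader):
    """Yield batches forever by restarting the underlying loader.

    Re-creates the iterator each time the loader is exhausted, so
    `next()` on the returned generator never raises StopIteration.
    """
    while True:
        for batch in loader:
            yield batch


# Usage sketch with a plain list standing in for a DataLoader:
it = infinite_loader([1, 2, 3])
print([next(it) for _ in range(7)])  # [1, 2, 3, 1, 2, 3, 1]
```

With a real `DataLoader`, each pass through the inner `for` loop corresponds to one epoch; shuffling (if enabled) is re-applied every time the iterator restarts.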

zjtggssg commented 1 year ago

Thank you for your reply. I was debugging this morning and found that the problem seems to be here: `AttributeError: '_SingleProcessDataLoaderIter' object has no attribute 'seqtypes_set'`. Is this because there is an error in data loading?

JuliaWolleb commented 1 year ago

Yes, you need to adapt the file `bratsloader.py` to your dataset, or store the data as indicated in the folder `/data`.

zjtggssg commented 1 year ago

But I am using the `./data/training` folder that you provided.

JuliaWolleb commented 1 year ago

Could you give me the detailed error message? Where does this error occur?

zjtggssg commented 1 year ago

```
Traceback (most recent call last):
  File "/home/ws/Desktop/Diffusion-based-Segmentation-main/guided_diffusion/train_util.py", line 179, in run_loop
    batch, cond = next(data_iter)
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 556, in _next_data
    index = self._next_index()  # may raise StopIteration
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 508, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/ws/Desktop/Diffusion-based-Segmentation-main/scripts/segmentation_train.py", line 90, in <module>
    main()
  File "/home/ws/Desktop/Diffusion-based-Segmentation-main/scripts/segmentation_train.py", line 63, in main
    lr_anneal_steps=args.lr_anneal_steps,
  File "/home/ws/Desktop/Diffusion-based-Segmentation-main/guided_diffusion/train_util.py", line 184, in run_loop
    batch, cond = next(data_iter)
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 517, in __next__
    data = self._next_data()
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 556, in _next_data
    index = self._next_index()  # may raise StopIteration
  File "/home/ws/.conda/envs/unet/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 508, in _next_index
    return next(self._sampler_iter)  # may raise StopIteration
StopIteration
```

This is the running error message. I then set a breakpoint at `data = self._next_data()`, and it came up with this error: `AttributeError: '_SingleProcessDataLoaderIter' object has no attribute 'seqtypes_set'`

JuliaWolleb commented 1 year ago

Can you print the length of your BRATS dataset in the script `segmentation_train.py`?

zjtggssg commented 1 year ago

Hi, the printed length is 0, which means I am not getting the files from the dataset.

JuliaWolleb commented 1 year ago

OK, did you set all paths to the dataset correctly? The error message happens because your data loader cannot find any files in the dataset.
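A quick way to verify the path before training is to walk the directory the same way the BRATS loader does and count what it would find. This is just a diagnostic sketch (the helper name `count_datapoint_files` is made up for illustration); note that `os.walk` over a nonexistent path silently yields nothing, which is exactly how a wrong path produces a dataset of length 0 without any error:

```python
import os

def count_datapoint_files(directory):
    """Count files in leaf directories (directories with no subdirectories),
    mirroring how the BRATS loader collects datapoints.
    Returns 0 if the path does not exist or contains no leaf files."""
    n = 0
    for root, dirs, files in os.walk(directory):
        if not dirs:  # leaf directory: these files form one datapoint
            n += len(files)
    return n

# prints 0 if the path is wrong or the folder is empty
print(count_datapoint_files('./data/training'))
```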

zjtggssg commented 1 year ago

I tried setting an absolute path, but I still can't read the data I want.

zjtggssg commented 1 year ago

I don't seem to be able to get into the loop that reads the files:

```python
for root, dirs, files in os.walk(self.directory):
    # if there are no subdirs, we have data
    # print("root: ", root)
    print(1)
    if not dirs:
        files.sort()
        datapoint = dict()
        # extract all files as channels
        for f in files:
            seqtype = f.split('_')[3]
            datapoint[seqtype] = os.path.join(root, f)
        assert set(datapoint.keys()) == self.seqtypes_set, \
            f'datapoint is incomplete, keys are {datapoint.keys()}'
        self.database.append(datapoint)
```

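As a side note, the `f.split('_')[3]` indexing in that loop assumes filenames with at least four underscore-separated fields, as in the BraTS naming convention. The filename below is a hypothetical example of that pattern, not a file from the repository:

```python
# BraTS-style name: <dataset>_<split>_<case-id>_<sequence>
fname = 'BraTS20_Training_001_flair.nii.gz'
seqtype = fname.split('_')[3]
print(seqtype)  # flair.nii.gz

# A filename with fewer underscore-separated fields would instead
# raise an IndexError at the same line:
try:
    'image.nii'.split('_')[3]
except IndexError:
    print('filename does not match the expected pattern')
```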
zjtggssg commented 1 year ago

Thanks for your reply, I solved it.