HRNet / HRNet-Semantic-Segmentation

The OCR approach is rephrased as Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is the official implementation of semantic segmentation with HRNet. https://arxiv.org/abs/1908.07919

AttributeError #62

Open jwangjie opened 5 years ago

jwangjie commented 5 years ago

Hello,

When running the command

python tools/test.py --cfg experiments/cityscapes/seg_hrnet_w48_train_512x1024_sgd_lr1e-2_wd5e-4_bs_12_epoch484.yaml \
                     TEST.MODEL_FILE hrnet_w48_cityscapes_cls19_1024x2048_trainset.pth \
                     TEST.SCALE_LIST 0.5,0.75,1.0,1.25,1.5,1.75 \
                     TEST.FLIP_TEST True

I got the following AttributeError:

...
...
=> loading stage4.0.fuse_layers.1.2.1.running_var from pretrained model
=> loading stage3.0.fuse_layers.2.0.1.0.weight from pretrained model
=> loading stage4.1.fuse_layers.1.2.1.running_var from pretrained model
=> loading stage3.1.branches.2.1.bn2.running_mean from pretrained model
=> loading stage3.1.branches.2.0.bn2.running_var from pretrained model
=> loading stage3.1.branches.1.0.bn2.running_mean from pretrained model
=> loading transition1.0.0.weight from pretrained model
=> loading stage3.1.branches.1.2.conv1.weight from pretrained model
=> loading stage4.1.branches.0.1.bn2.running_var from pretrained model
=> loading stage4.2.branches.1.0.bn1.bias from pretrained model
=> loading stage4.1.branches.3.2.bn1.running_var from pretrained model
=> loading stage3.1.branches.1.0.bn2.running_var from pretrained model
  0%|                                                                                                                                                           | 0/500 [00:00<?, ?it/s]Traceback (most recent call last):
  File "tools/test.py", line 142, in <module>
    main()
  File "tools/test.py", line 122, in main
    model)
  File "/home/aml/HRNet-Semantic-Segmentation/tools/../lib/core/function.py", line 145, in testval
    for index, batch in enumerate(tqdm(testloader)):
  File "/usr/local/lib/python3.5/dist-packages/tqdm/_tqdm.py", line 979, in __iter__
    for obj in iterable:
  File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 819, in __next__
    return self._process_data(data)
  File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/dataloader.py", line 846, in _process_data
    data.reraise()
  File "/usr/local/lib/python3.5/dist-packages/torch/_utils.py", line 385, in reraise
    raise self.exc_type(msg)
AttributeError: Caught AttributeError in DataLoader worker process 0.
Original Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/usr/local/lib/python3.5/dist-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/aml/HRNet-Semantic-Segmentation/tools/../lib/datasets/cityscapes.py", line 107, in __getitem__
    size = image.shape
AttributeError: 'NoneType' object has no attribute 'shape'

I wonder what the problem is. Thank you.

Regards,

sunke123 commented 4 years ago

Please check the directory of data.
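To follow up on this advice: the traceback shows `cv2.imread` returning `None`, which means some image path from the list file does not resolve on disk. A minimal sketch (not part of the repo) that walks a Cityscapes-style list file and reports unresolvable image paths; `DATA_ROOT`, `LIST_PATH`, and `find_missing_images` are hypothetical names, and the sketch assumes the first whitespace-separated column of each line is the image path relative to the dataset root:

```python
import os

# Hypothetical paths; adjust to your own setup.
DATA_ROOT = "data/cityscapes"
LIST_PATH = "data/list/cityscapes/val.lst"

def find_missing_images(root, list_path):
    """Return list-file entries whose image path does not exist under `root`."""
    missing = []
    with open(list_path) as f:
        for line in f:
            if not line.strip():
                continue
            img_rel = line.split()[0]  # first column: relative image path
            if not os.path.exists(os.path.join(root, img_rel)):
                missing.append(img_rel)
    return missing

if __name__ == "__main__":
    for path in find_missing_images(DATA_ROOT, LIST_PATH):
        print("missing:", path)
```

Running this before `tools/test.py` should pinpoint exactly which files the DataLoader worker cannot read.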

jwangjie commented 4 years ago

@sunke123 Thank you for your reply. I wonder, can I save the dataset on an external hard drive? The storage space of my local SSD is limited.

sunskyhsh commented 4 years ago

> @sunke123 Thank you for your reply. I wonder, can I save the dataset on an external hard drive? The storage space of my local SSD is limited.

You need to modify the DATASET attribute in the yaml file and the __getitem__ function in lib/datasets/cityscapes.py. Below is my modification:

# the yaml file
DATASET:
  DATASET: cityscapes
  ROOT: '/media/sunsky/HDD/DATASETS/cityScape/'
  TEST_SET: 'data/list/cityscapes/val.lst'
  TRAIN_SET: 'list/cityscapes/train.lst'
  NUM_CLASSES: 19
# lib/datasets/cityscapes.py
    def __getitem__(self, index):
        item = self.files[index]
        name = item["name"]
        image = cv2.imread(os.path.join(self.root,item["img"]),
                           cv2.IMREAD_COLOR)
        size = image.shape

        if 'test' in self.list_path:
            image = self.input_transform(image)
            image = image.transpose((2, 0, 1))

            return image.copy(), np.array(size), name

        label = cv2.imread(os.path.join(self.root,item["label"]),
                           cv2.IMREAD_GRAYSCALE)
        label = self.convert_label(label)

        image, label = self.gen_sample(image, label, 
                                self.multi_scale, self.flip, 
                                self.center_crop_test)

        return image.copy(), label.copy(), np.array(size), name
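Wherever the data lives, a small guard right after the `cv2.imread` calls turns the opaque `'NoneType' object has no attribute 'shape'` into an error that names the offending file. This is a sketch, not the repo's code; `check_loaded` is a hypothetical helper you would call on the result of `cv2.imread`:

```python
import os

def check_loaded(image, path):
    """Raise a clear error if an OpenCV-style loader returned None for `path`.

    cv2.imread signals a missing or unreadable file by returning None rather
    than raising, so this converts that silent failure into a loud one.
    """
    if image is None:
        raise FileNotFoundError(
            f"image loader returned None for '{path}' "
            f"(exists on disk: {os.path.exists(path)})"
        )
    return image
```

In `__getitem__` this would be used as, e.g., `image = check_loaded(cv2.imread(full_path, cv2.IMREAD_COLOR), full_path)`, so a bad dataset root fails with the full path instead of deep inside a DataLoader worker.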
Alexyitx commented 4 years ago

> @sunke123 Thank you for your reply. I wonder, can I save the dataset on an external hard drive? The storage space of my local SSD is limited.

Hi, have you solved the problem? I have now met the same problem as you, and I don't know how to solve it. #68

linlinge commented 3 years ago

According to my test, this problem is caused by a wrong image path. When we use Cityscapes to train the model, the default image path is not consistent with the path in the code. For example:
/hrnet/HRNet-Semantic-Segmentation/data/cityscapes/leftImg8bit/train/stuttgart/ (path in code)
/hrnet/HRNet-Semantic-Segmentation/data/cityscapes/leftImg8Bit/train/stuttgart/ (default path in cityscapes)

So, I changed "leftImg8Bit" to "leftImg8bit", and then it works!
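On case-sensitive Linux filesystems, a case-only mismatch like `leftImg8Bit` vs `leftImg8bit` makes the path unresolvable even though the directory "looks" present. A hedged sketch that resolves a relative path component by component, ignoring case, to reveal such mismatches; `case_insensitive_resolve` is a hypothetical helper, not part of the repo:

```python
import os

def case_insensitive_resolve(root, rel_path):
    """Walk `rel_path` under `root`, matching each component case-insensitively.

    Returns the actual on-disk path if every component has a case-insensitive
    match, or None if some component is truly absent.
    """
    current = root
    for part in rel_path.split("/"):
        if not part:
            continue
        entries = os.listdir(current)
        match = next((e for e in entries if e.lower() == part.lower()), None)
        if match is None:
            return None
        current = os.path.join(current, match)
    return current
```

If `case_insensitive_resolve(root, "leftImg8bit/train")` returns a path containing `leftImg8Bit`, the fix is exactly the rename described above.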

liwei1101 commented 2 years ago

...I had been debugging for two weeks! I did not find this spelling error because I copied the dataset from someone else. Thanks for your solution! Thanks♪(・ω・)ノ