VITA-Group / FasterSeg

[ICLR 2020] "FasterSeg: Searching for Faster Real-time Semantic Segmentation" by Wuyang Chen, Xinyu Gong, Xianming Liu, Qian Zhang, Yuan Li, Zhangyang Wang
MIT License

ValueError: not enough values to unpack (expected 2, got 0) #37

Closed kukby closed 4 years ago

kukby commented 4 years ago

Hello, thanks for your work! When I run your code I hit the following error. How can I solve it?

```
05 21:04:37 params = 2.579183MB, FLOPs = 71.419396GB
architect initialized!
using downsampling: 2  Found 1487 images
using downsampling: 2  Found 1488 images
using downsampling: 2  Found 500 images
05 21:04:38 True
05 21:04:38 search-pretrain-256x512_F12.L16_batch3-20200605-210423
05 21:04:38 lr: 0.02
05 21:04:38 update arch: False
[Epoch 1/20][train...]:   0%|          | 0/20 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "train_search.py", line 303, in <module>
    main(pretrain=config.pretrain)
  File "train_search.py", line 134, in main
    train(pretrain, train_loader_model, train_loader_arch, model, architect, ohem_criterion, optimizer, lr_policy, logger, epoch, update_arch=update_arch)
  File "train_search.py", line 223, in train
    minibatch = dataloader_model.next()
  File "/home/kukby/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 345, in __next__
    data = self._next_data()
  File "/home/kukby/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 856, in _next_data
    return self._process_data(data)
  File "/home/kukby/.local/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 881, in _process_data
    data.reraise()
  File "/home/kukby/.local/lib/python3.6/site-packages/torch/_utils.py", line 394, in reraise
    raise self.exc_type(msg)
ValueError: Caught ValueError in DataLoader worker process 0.

Original Traceback (most recent call last):
  File "/home/kukby/.local/lib/python3.6/site-packages/torch/utils/data/_utils/worker.py", line 178, in _worker_loop
    data = fetcher.fetch(index)
  File "/home/kukby/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in fetch
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/kukby/.local/lib/python3.6/site-packages/torch/utils/data/_utils/fetch.py", line 44, in <listcomp>
    data = [self.dataset[idx] for idx in possibly_batched_index]
  File "/home/kukby/FasterSeg/tools/datasets/BaseDataset.py", line 42, in __getitem__
    img, gt = self._fetch_data(img_path, gt_path)
  File "/home/kukby/FasterSeg/tools/datasets/BaseDataset.py", line 67, in _fetch_data
    gt = self._open_image(gt_path, cv2.IMREAD_GRAYSCALE, dtype=dtype, down_sampling=self._down_sampling)
  File "/home/kukby/FasterSeg/tools/datasets/BaseDataset.py", line 130, in _open_image
    H, W = img.shape[:2]
ValueError: not enough values to unpack (expected 2, got 0)
```
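For what it's worth, the last line of the traceback can be reproduced without the dataset at all. This is only a sketch of what seems to happen, assuming `_open_image` wraps the result of `cv2.imread` in `np.array`: when `cv2.imread` cannot read a file it returns `None`, and `np.array(None)` is a 0-dimensional array whose shape is `()`, so the unpack fails exactly as in the log:

```python
import numpy as np

# cv2.imread returns None when a path is wrong or a file is unreadable.
# If that None is wrapped in np.array (as BaseDataset._open_image appears
# to do), the result is a 0-dimensional object array:
img = np.array(None)
print(img.shape)  # ()

# BaseDataset then runs `H, W = img.shape[:2]`, which raises:
try:
    H, W = img.shape[:2]
except ValueError as e:
    print(e)  # not enough values to unpack (expected 2, got 0)
```

So the error almost certainly means a ground-truth image path resolved to a file that could not be read, not a bug in the training loop itself.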

kobe41999 commented 4 years ago

Hey, I have met the same problem. Did you solve it?

chenwydj commented 4 years ago

Hi @kobe41999 and @kukby!

Thank you for your interest in our work!

Please check that your dataset is in the proper format:

1. Check that you can load your RGB images properly (e.g. manually load the images and verify that `H, W = img.shape[:2]` works).
2. Check that you have prepared the dataset as mentioned here.