pkuCactus / BDCN

The code for the CVPR2019 paper Bi-Directional Cascade Network for Perceptual Edge Detection
MIT License

test error #15

Closed: justinner closed this issue 5 years ago

justinner commented 5 years ago

Hi @pkuCactus, I just want to test on the BSDS500 dataset, so after installing the requirements I modified the path of the test dataset and changed 'voc.txt' to 'test.lst', but it raised an error like this:

    Exception: KeyError: Traceback (most recent call last):
      File "/home/jia/anaconda3/envs/bdcn/lib/python2.7/site-packages/torch/utils/data/_utils/worker.py", line 99, in _worker_loop
        samples = collate_fn([dataset[i] for i in batch_indices])
      File "/home/jia/pycode/BDCN-master/datasets/dataset.py", line 52, in __getitem__
        img = load_image_with_cache(img_file, self.cache)
      File "/home/jia/pycode/BDCN-master/datasets/dataset.py", line 18, in load_image_with_cache
        return Image.open(StringIO(cache[path]))
    KeyError: '/home/jia/pycode/BDCN-master/data/test/100007.jpg'

Could you give me some advice? Hoping for your reply, thanks very much.
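A quick way to narrow this down is to check whether every path in the modified test.lst actually resolves on disk, since the KeyError is raised when load_image_with_cache looks up the image path in its in-memory cache. A minimal sketch of such a check (check_lst.py is a hypothetical helper, not part of the BDCN repo):

    # check_lst.py -- hypothetical helper: verify that every image path listed
    # in a .lst file exists under the given data root.
    import os
    import sys

    root, lst = sys.argv[1], sys.argv[2]  # e.g. data/ and data/test.lst
    with open(lst) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            img_path = os.path.join(root, parts[0])
            if not os.path.exists(img_path):
                print('missing: ' + img_path)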

xavysp commented 5 years ago

Hi @justinner, maybe you can use OpenCV instead of Image (load_image_with_cache), like this:

    # module-level imports needed by this snippet (the two methods below
    # replace the corresponding ones in the dataset class):
    import os
    import random
    import numpy as np
    import cv2
    import torch

    def __getitem__(self, index):
        data_file = self.files[index]
        # load the input image
        img_file = self.root + data_file[0]
        # print(img_file)
        if not os.path.exists(img_file):
            img_file = img_file.replace('jpg', 'png')
        # img = Image.open(img_file)
        # img = load_image_with_cache(img_file, self.cache)
        img = cv2.imread(img_file)  # BGR, uint8
        # load the ground-truth edge map
        gt_file = self.root + data_file[1]
        # gt = Image.open(gt_file)
        gt = cv2.imread(gt_file, cv2.IMREAD_GRAYSCALE)
        if self.is_train:
            img = cv2.resize(img, dsize=(400, 400))
            gt = cv2.resize(gt, dsize=(400, 400))
        return self.transform(img, gt)

    def transform(self, img, gt):
        gt = np.array(gt, dtype=np.float32)
        if len(gt.shape) == 3:
            gt = gt[:, :, 0]
        gt /= 255.
        if self.yita is not None:
            gt[gt >= self.yita] = 1
        gt = torch.from_numpy(np.array([gt])).float()
        img = np.array(img, dtype=np.float32)
        if self.rgb:
            img = img[:, :, ::-1]  # channel flip; note cv2.imread already returns BGR
        img -= self.mean_bgr
        data = []
        if self.scale is not None and self.is_train:
            for scl in self.scale:
                img_scale = cv2.resize(img, None, fx=scl, fy=scl, interpolation=cv2.INTER_LINEAR)
                data.append(torch.from_numpy(img_scale.transpose((2, 0, 1))).float())
            return data, gt
        img = img.transpose((2, 0, 1))
        img = torch.from_numpy(img.copy()).float()
        if self.crop_size:
            _, h, w = gt.size()
            assert(self.crop_size < h and self.crop_size < w)
            i = random.randint(0, h - self.crop_size)
            j = random.randint(0, w - self.crop_size)
            img = img[:, i:i + self.crop_size, j:j + self.crop_size]
            gt = gt[:, i:i + self.crop_size, j:j + self.crop_size]
        return img, gt
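
One caveat with the OpenCV route: unlike Image.open, cv2.imread does not raise on a missing or unreadable file; it silently returns None, which only fails later inside transform. A small guard right after the reads (a sketch, reusing the img_file/gt_file names from the snippet above) makes bad entries in test.lst fail loudly:

    img = cv2.imread(img_file)
    if img is None:
        raise IOError('could not read image: ' + img_file)
    gt = cv2.imread(gt_file, cv2.IMREAD_GRAYSCALE)
    if gt is None:
        raise IOError('could not read ground truth: ' + gt_file)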
justinner commented 5 years ago

Hi @xavysp, thanks for your help. Sorry for the late reply; I will try it immediately.