pkuCactus / BDCN

The code for the CVPR2019 paper Bi-Directional Cascade Network for Perceptual Edge Detection
MIT License

multi-scale input #39

Open ForawardStar opened 3 years ago

ForawardStar commented 3 years ago

How do I get the multi-scale input?

DREAMXFAR commented 3 years ago

@ForawardStar You can refer to test_ms.py:

test_img = Data(test_root, test_lst, mean_bgr=mean_bgr, scale=[0.5, 1, 1.5])
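For anyone else landing here, below is a minimal sketch of what multi-scale testing typically looks like: resize the input to each scale, run the network, resize every edge map back to the original resolution, and average. It assumes the model's last output is the fused edge map (as in the repo's test scripts); it is an illustration, not the exact code in test_ms.py.

```python
import cv2
import numpy as np
import torch

def multi_scale_edges(model, img_bgr, mean_bgr, scales=(0.5, 1.0, 1.5)):
    """img_bgr: HxWx3 image; mean_bgr: per-channel BGR mean to subtract.
    Returns the edge map averaged over the given scales, at the original size."""
    h, w = img_bgr.shape[:2]
    fused = np.zeros((h, w), dtype=np.float32)
    for s in scales:
        # rescale the input image
        im = cv2.resize(img_bgr, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
        # subtract the dataset mean and convert to a 1xCxHxW tensor
        im = (im.astype(np.float32) - mean_bgr).astype(np.float32).transpose(2, 0, 1)
        inp = torch.from_numpy(im).unsqueeze(0)
        with torch.no_grad():
            # assumption: the model returns a list of side outputs, last one fused
            out = model(inp)[-1]
            edge = torch.sigmoid(out).squeeze().cpu().numpy()
        # resize the prediction back to the original resolution and accumulate
        fused += cv2.resize(edge, (w, h), interpolation=cv2.INTER_LINEAR)
    return fused / len(scales)
```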

ForawardStar commented 3 years ago

@DREAMXFAR Thanks for your answer! It solved my problem. I notice that BDCN is trained on the augmented BSDS500 and PASCAL VOC Context datasets. Could you please provide your training data to help us reproduce your result (ODS-F: 0.820)? By the way, the paper says the threshold used for loss computation is set to 0.3 for BSDS500, but the 'yita' value in cfg.py is 0.5. Is this a mistake? Looking forward to your reply.
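For context on the yita question, here is a hedged sketch of how such a threshold usually enters the class-balanced edge loss described in the paper: the consensus ground truth lies in [0, 1]; pixels above yita count as edges, pixels equal to 0 count as non-edges, and ambiguous pixels in between are ignored via zero weight. This is an illustration of the idea, not the repo's exact cross_entropy_loss2d, and the function name and lam default here are made up for the example.

```python
import torch
import torch.nn.functional as F

def balanced_edge_loss(pred, gt, yita=0.3, lam=1.1):
    """pred: raw logits (N,1,H,W); gt: annotator consensus in [0,1] (N,1,H,W)."""
    pos = (gt > yita).float()   # confident edge pixels
    neg = (gt == 0).float()     # confident non-edge pixels
    num_pos, num_neg = pos.sum(), neg.sum()
    # class-balancing weights; pixels with 0 < gt <= yita get weight 0 (ignored)
    weight = pos * (num_neg / (num_pos + num_neg)) \
           + neg * (lam * num_pos / (num_pos + num_neg))
    return F.binary_cross_entropy_with_logits(pred, pos, weight=weight, reduction='sum')
```

Under this reading, a larger yita only changes which annotated pixels are treated as positives versus ignored, so a mismatch between the paper (0.3) and cfg.py (0.5) could noticeably affect training.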