Open · Winnie202 opened this issue 1 year ago
In training stage 1 and stage 2, why does `ContrasValDataset` generate the ref images from the (low-resolution) input images, by resizing them and applying `image_pair_generation`, instead of reading the ref images directly?
```python
img_path = self.paths[index]['in_path']
img_bytes = self.file_client.get(img_path, 'in')
img_in = mmcv.imfrombytes(img_bytes).astype(np.float32) / 255.

gt_h, gt_w = self.opt['gt_size'], self.opt['gt_size']
# in case that some images may not have the same shape as gt_size
img_in = mmcv.imresize(img_in, (gt_w, gt_h), interpolation='bicubic')

# augmentation: flip, rotation
img_in = augment([img_in], self.opt['use_flip'], self.opt['use_rot'])

# image pair generation
img_in_transformed, H, H_inverse = image_pair_generation(img_in, (0, 10), 160)

return {
    'img_in': img_in,
    'img_in_up': img_in_up,
    'img_ref': img_in_transformed,
    'img_ref_up': img_in_transformed_up,
    'transformed_coordinate': transformed_coordinate
}
```
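For comparison, this is a minimal sketch of what "reading the ref images directly" could look like. It assumes a hypothetical `ref_path` key in the path dict and reuses the existing `'in'` file-client backend; neither is part of the repo's actual `ContrasValDataset`, which only reads `in_path` in the excerpt above.

```python
import mmcv
import numpy as np


def load_ref_directly(file_client, paths, index, gt_size):
    """Hypothetical alternative: read a real reference image from disk
    instead of synthesizing one from the input via a random homography.

    'ref_path' is an assumed key, not something the current dataset provides.
    """
    ref_path = paths[index]['ref_path']                 # assumed key
    ref_bytes = file_client.get(ref_path, 'in')         # reuse the 'in' backend for this sketch
    img_ref = mmcv.imfrombytes(ref_bytes).astype(np.float32) / 255.

    # Resize so the shape matches the synthesized pair produced by
    # image_pair_generation in the original code.
    img_ref = mmcv.imresize(img_ref, (gt_size, gt_size), interpolation='bicubic')
    return img_ref
```

One possible reason for not doing this (just a guess): the synthesized pair comes with a known homography `H`, so ground-truth correspondences are available for training, which a real ref image read from disk would not provide.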