Closed Alive59 closed 1 year ago
Thanks for the great code. I'm just wondering how to make the outputs the same dimensions as the input images. At this stage the model resizes all inputs to 256×256.
python main_test.py --dataset SRD --datasetpath [path_to_SRD dataset] --use_original_name True --im_suf_A .jpg
Thanks for the reply, but the model still behaves the same... Maybe I haven't described the question well. What I meant is how to keep the output at the same dimensions as the input, for example 640×480, rather than resizing everything to 256×256.
In [main_test.py]:

parser.add_argument('--img_h', type=int, default=480, help='The org size of image')
parser.add_argument('--img_w', type=int, default=640, help='The org size of image')

In [DCShadowNet_test.py]:

self.img_size = args.img_size
self.img_h = args.img_h
self.img_w = args.img_w
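To illustrate the idea behind these arguments: record the original (width, height) before the network downsamples to 256×256, then resize the network output back to that size afterwards. In practice you would do this with `torchvision.transforms.Resize` or `PIL.Image.resize`; the pure-Python nearest-neighbour helper below (`resize_nearest` is a hypothetical name, not part of the repo) is just a minimal sketch of the resize-back step:

```python
def resize_nearest(img, out_h, out_w):
    """Nearest-neighbour resize of a 2-D list-of-lists image."""
    in_h, in_w = len(img), len(img[0])
    return [
        [img[min(in_h - 1, y * in_h // out_h)][min(in_w - 1, x * in_w // out_w)]
         for x in range(out_w)]
        for y in range(out_h)
    ]

# Example: a 2x2 "network output" upscaled back to an original 4x4 grid.
small = [[1, 2],
         [3, 4]]
restored = resize_nearest(small, 4, 4)
# restored == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

Bilinear or bicubic interpolation would give smoother results than nearest-neighbour; the point here is only the record-then-restore pattern.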
I just found out that splitting the original images into smaller tiles leads to better results, so this is no longer a problem. Anyway, thanks for the help!
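For anyone reading later, the tiling workflow mentioned above can be sketched as: cut the image into fixed-size blocks, run the model on each block, then stitch the outputs back together. The helpers below (`split_into_tiles` / `merge_tiles` are hypothetical names, and a real pipeline would use NumPy or torch tensors, possibly with overlapping tiles to hide seams) show only the split/merge bookkeeping, assuming dimensions divisible by the tile size:

```python
def split_into_tiles(img, tile):
    """Split a 2-D image (list of rows) into non-overlapping tile x tile blocks.
    Assumes height and width are divisible by `tile`."""
    h, w = len(img), len(img[0])
    return [
        [[row[x:x + tile] for row in img[y:y + tile]]
         for x in range(0, w, tile)]
        for y in range(0, h, tile)
    ]

def merge_tiles(tiles):
    """Inverse of split_into_tiles: stitch a grid of tiles back into one image."""
    rows = []
    for tile_row in tiles:
        for r in range(len(tile_row[0])):
            rows.append([px for t in tile_row for px in t[r]])
    return rows

img = [[y * 4 + x for x in range(4)] for y in range(4)]
tiles = split_into_tiles(img, 2)       # 2x2 grid of 2x2 tiles
# each tile would be fed to the model here before merging
assert merge_tiles(tiles) == img
```

Processing per tile also keeps every tile at the model's native resolution, which is likely why it gave better results than resizing the whole image.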