[Open] Morayabb opened this issue 8 months ago
```
export CUDA_VISIBLE_DEVICES=0
OrderedDict([('optimizer', 'AdamW'), ('name', 'Train_Dataset'), ('mode', 'LQGT'), ('dataroot_GT', 'D:/desk/EDiffSR-main/traindata/AID'), ('use_shuffle', True), ('n_workers', 4), ('batch_size', 2), ('GT_size', 256), ('LR_size', 64), ('use_flip', True), ('use_rot', True), ('color', 'RGB')])
OrderedDict([('name', 'Val_Dataset'), ('mode', 'LQGT'), ('dataroot_GT', 'D:/desk/EDiffSR-main/testdata/WHU-RS19/GT')])
Disabled distributed training.
Path already exists. Rename it to [D:\experiments\sisr\ediffsr_archived_240328-200622]
cosine schedule
D:\Anaconda\envs\CDCR\lib\site-packages\torch\nn\functional.py:3458: UserWarning: Default upsampling behavior when mode=bicubic is changed to align_corners=False since 0.4.0. Please specify align_corners=True if the old behavior is desired. See the documentation of nn.Upsample for details.
  "See the documentation of nn.Upsample for details.".format(mode)
```
Thank you very much for your excellent work. When I train on my own training and test sets, the program stalls at the output above and never proceeds. Do you know how to solve this?
If you set 'mode' to 'LQGT', you need to give the LQ path ('dataroot_LQ'); otherwise, set 'mode' to 'GT' and 'dataroot_LQ' to '~'.
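For reference, the two valid combinations in the dataset section of the options file might look roughly like this (a sketch assuming the repository's YAML option-file layout; the paths are illustrative placeholders, not taken from this thread):

```yaml
# Variant 1: paired data - 'LQGT' mode needs both roots
datasets:
  train:
    name: Train_Dataset
    mode: LQGT
    dataroot_GT: /path/to/traindata/GT
    dataroot_LQ: /path/to/traindata/LQ   # must be set in LQGT mode

# Variant 2: GT only - LQ images are produced from GT on the fly
#  train:
#    mode: GT
#    dataroot_GT: /path/to/traindata/GT
#    dataroot_LQ: ~                      # '~' is YAML's null
```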
Thank you very much for your reply. After keeping 'mode' as 'LQGT' and adding 'dataroot_LQ', the problem above still occurs. Could you guide me toward further solutions?
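One quick sanity check when training stalls with no error: verify that the GT and LQ folders actually exist and contain matching image filenames. An empty or mismatched dataset can make the dataloader yield nothing, so the run appears to hang. A minimal sketch (the helper names here are mine, not part of EDiffSR):

```python
import os

IMG_EXTS = {".png", ".jpg", ".jpeg", ".bmp", ".tif", ".tiff"}

def list_images(folder):
    """Return sorted image filenames in a folder (non-recursive)."""
    return sorted(f for f in os.listdir(folder)
                  if os.path.splitext(f)[1].lower() in IMG_EXTS)

def check_pairs(gt_dir, lq_dir):
    """Verify both folders exist and hold matching image filenames.

    Returns the common filenames; raises if a folder is missing or
    empty, either of which would leave the dataloader with zero
    samples and make training appear to hang.
    """
    for d in (gt_dir, lq_dir):
        if not os.path.isdir(d):
            raise FileNotFoundError(f"dataset folder not found: {d}")
    gt, lq = list_images(gt_dir), list_images(lq_dir)
    if not gt or not lq:
        raise RuntimeError("one of the folders contains no images")
    common = sorted(set(gt) & set(lq))
    if not common:
        raise RuntimeError("GT and LQ filenames do not match")
    return common
```

Running `check_pairs` on the paths from your options file before launching training will surface a wrong path or an empty folder immediately instead of a silent stall.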