zhengchen1999 / DAT

PyTorch code for our ICCV 2023 paper "Dual Aggregation Transformer for Image Super-Resolution"
Apache License 2.0
350 stars · 27 forks

How to train on my own dataset #15

Open xingjunhong opened 8 months ago

xingjunhong commented 8 months ago

How can I train on my own dataset?

zhengchen1999 commented 8 months ago

You can modify the training configs in the .yml files, e.g., train_DAT_light_x4.yml. Set dataroot_gt and dataroot_lq to the paths of your own HR and LR data:

    dataroot_gt: datasets/DF2K/HR
    dataroot_lq: datasets/DF2K/LR_bicubic/X4
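For context, here is a minimal sketch of how those fields sit inside the dataset section of a BasicSR-style training yml. The field names (`type: PairedImageDataset`, `io_backend`, etc.) follow BasicSR conventions; the dataset name and paths are placeholders for your own data:

```yaml
datasets:
  train:
    name: MyDataset              # any label for your dataset
    type: PairedImageDataset     # paired HR/LR images on disk
    dataroot_gt: datasets/MyData/HR
    dataroot_lq: datasets/MyData/LR_bicubic/X4
    io_backend:
      type: disk
```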

PS: Our code is based on BasicSR, which provides detailed command tutorials in its docs; you may want to refer to them.

xingjunhong commented 8 months ago

How much data does it take to train a demo model?

zhengchen1999 commented 8 months ago

Our training dataset is DF2K, which contains 3450 images. Our models are trained with batch size = 32, patch size = 64×64, and 500,000 training iterations.

xingjunhong commented 8 months ago

Do we train with high-resolution images, or with low-resolution ones?

zhengchen1999 commented 8 months ago

Training requires pairs of high-resolution and low-resolution images. This is a supervised task, where the high-resolution images serve as ground truth.

xingjunhong commented 8 months ago

So it takes low-resolution images as input and then computes the loss against the high-resolution images?

zhengchen1999 commented 8 months ago

Yes.
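To make the supervised setup concrete, here is a minimal sketch of one training step in PyTorch, assuming a model that maps an LR batch to an SR batch at 4x scale. The tiny conv + PixelShuffle network below is an illustrative stand-in, not the DAT architecture:

```python
import torch
import torch.nn as nn

scale = 4

# Stand-in SR network: expand channels, then PixelShuffle rearranges
# them into a spatially 4x larger image.
net = nn.Sequential(
    nn.Conv2d(3, 3 * scale**2, kernel_size=3, padding=1),
    nn.PixelShuffle(scale),
)
opt = torch.optim.Adam(net.parameters(), lr=2e-4)
l1 = nn.L1Loss()

lr_batch = torch.rand(1, 3, 16, 16)   # low-resolution input
hr_batch = torch.rand(1, 3, 64, 64)   # high-resolution ground truth

sr = net(lr_batch)                    # forward pass on the LR image
loss = l1(sr, hr_batch)               # loss against the HR ground truth
opt.zero_grad()
loss.backward()
opt.step()
```

The model only ever sees the LR image; the HR image appears solely in the loss, which is what the exchange above describes.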

xingjunhong commented 8 months ago

So the original images in the dataset we collect are high-resolution? And then we use a script to turn them into low-resolution images and train with those pairs?

zhengchen1999 commented 8 months ago

Yes. In our approach, we generate the low-resolution images by bicubic downsampling of the high-resolution images (x2, x3, x4). Of course, you can also use other methods, such as the degradation model in Real-ESRGAN, which is suited to real-world SR.
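A simple way to generate the bicubic LR images yourself is with Pillow. This is a hedged sketch, not the repo's own script: the directory layout mirrors the yml example above, and the edge crop (so dimensions divide evenly by the scale) is a common convention you may adjust:

```python
from pathlib import Path

from PIL import Image


def make_lr_images(hr_dir: str, lr_dir: str, scale: int = 4) -> None:
    """Bicubic-downsample every PNG in hr_dir by `scale` into lr_dir."""
    out = Path(lr_dir)
    out.mkdir(parents=True, exist_ok=True)
    for hr_path in sorted(Path(hr_dir).glob("*.png")):
        hr = Image.open(hr_path)
        # Crop so width and height divide evenly by the scale factor.
        w, h = hr.size
        hr = hr.crop((0, 0, w - w % scale, h - h % scale))
        lr = hr.resize((hr.width // scale, hr.height // scale), Image.BICUBIC)
        lr.save(out / hr_path.name)
```

For example, `make_lr_images("datasets/MyData/HR", "datasets/MyData/LR_bicubic/X4", scale=4)` would populate the LR folder referenced by `dataroot_lq`.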