Open nil0330 opened 1 year ago
I had the same problem. I think 'sr' means super-resolution and 'hr' means high-resolution in this program. However, I'm wondering why the folder named 'water_val_16_256' has these two subfolders.
What is the difference between sr and hr?
I have found the initial code reference: Image-Super-Resolution-via-Iterative-Refinement.
Yes, and this issue has still not been addressed. Moreover, the data examples provided by the author in the repository are extremely confusing. It appears that "hr_256" holds the source images from which "sr_18_256" is derived, yet both of these folders are indispensable during inference, even though only "sr_18_256" seems to be processed in the end. This is truly perplexing!
After briefly reviewing the author's paper and code, I would like to propose the following speculations. If the author does not reply, I will assume my understanding is correct by default; unless the author explicitly points out where I am wrong, I will not know what to correct. Apologies in advance.
Well... My speculations are as follows:
- "prepare_data.py" can output data in the format of the author's test files, such as cropped and resized 128×128 images, as well as a set of low-resolution 128×128 images upsampled again from the previously obtained 16×16 images using the above methods. However, after checking the corresponding paper, it seems that the input data should only require a square image (128×128, 256×256, 512×512, etc.).
- It seems that the only images processed by the model are those in "sr_18_256". Therefore, it is not clear why the author chose to pre-process the data with this script: it actually makes the images blurrier, and the hr images do not seem to be utilized.
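If my reading is right, the down-then-up resampling could be sketched like this (a minimal, speculative sketch using Pillow; the function name and default sizes are for illustration only, not the author's actual code):

```python
from PIL import Image

def make_pair(path, low=16, high=128):
    """Speculative sketch of how prepare_data.py might build an hr/sr pair."""
    img = Image.open(path).convert("RGB")
    # "hr": resize the source to a high-resolution square
    hr = img.resize((high, high), Image.BICUBIC)
    # "sr": shrink to the low resolution, then upsample back --
    # this round trip is what makes the sr images blurry
    lr = hr.resize((low, low), Image.BICUBIC)
    sr = lr.resize((high, high), Image.BICUBIC)
    return hr, sr
```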
Therefore, if you want to use your own data, I speculate that the steps should be as follows:
- Run "prepare_data.py", for example with this command: `python prepare_data.py --path /your/image/directory --out /output/directory --size 64,512 --n_worker 1 --resample bicubic`
- Run "infer.py", then find your results in the "experiment_model" directory.
However, this pre-processing makes the results quite blurry. Therefore, I just use the resized images directly and resize them back after processing. I think this way minimizes the loss of information.
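That workaround could be sketched as follows (a hedged sketch; `run_model` is a placeholder for the actual inference call, which is not shown here):

```python
from PIL import Image

def enhance(path, run_model, size=256):
    """Resize to the model's square input, run inference, resize back."""
    img = Image.open(path).convert("RGB")
    orig_size = img.size  # remember the original (width, height)
    square = img.resize((size, size), Image.BICUBIC)
    out = run_model(square)  # placeholder for the model's inference step
    # restore the original resolution / aspect ratio
    return out.resize(orig_size, Image.BICUBIC)
```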
It would be nice if the author is open to correcting my perspective.
For the custom dataset preparation, you only need to split the dataset into train, val and test. For example:
* dataset
  * train
    * hr_256 (put the input images here)
    * sr_16_256 (put the annotated images here)
  * val
  * test
And then, you should modify the 'datasets' dict in config/underwater.json, lines 16-38. For example: `"datasets": { "train": { "dataroot": "datasets/train" } }`. After that, the code works well.
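For reference, the relevant fragment might look like this when expanded (a sketch only; the 'val' entry and any field names beyond 'dataroot' are assumptions, so check config/underwater.json lines 16-38 for the real schema):

```json
{
  "datasets": {
    "train": { "dataroot": "datasets/train" },
    "val":   { "dataroot": "datasets/val" }
  }
}
```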
Hi, I have some queries. Your help is needed to train this model.
Thank you for your assistance. I have successfully loaded the weights and dataset and run the program, but there seems to be no enhancement in the results. Is this the case for you as well?
I would like to replace the dataset used by your model with my own dataset to see the generated results, but I'm not sure what 'hr_256' and 'sr_16_256' mean, or how to prepare my own dataset in this format.