cherryMo closed this issue 5 years ago
Hi. No pre-processing is required. My guess is that you used the weights trained without compression included; is that the case? The image probably contains compression artifacts, which require the weights trained with such artifacts included (the correct weights can be downloaded here).
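For anyone unfamiliar with what "compression artifacts" means here: JPEG encodes an image in 8x8 blocks and discards high-frequency detail per block, which produces the blocky degradation the compression-trained weights are meant to tolerate. Below is a toy numpy sketch of that kind of block-DCT degradation; it truncates coefficients instead of quantizing them like real JPEG does, and is only an illustration, not anything the repo itself uses:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II matrix, so C @ C.T == identity.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    c = np.sqrt(2.0 / n) * np.cos(np.pi * (i + 0.5) * k / n)
    c[0, :] /= np.sqrt(2.0)
    return c

def jpeg_like(img, block=8, keep=3):
    """Toy JPEG-style degradation: per 8x8 block, apply a 2-D DCT and
    discard all but the lowest keep x keep frequency coefficients.
    Real JPEG quantizes rather than truncates, but the resulting
    blocky artifacts are of the same kind."""
    c = dct_matrix(block)
    mask = np.zeros((block, block))
    mask[:keep, :keep] = 1.0
    h, w = img.shape
    out = np.empty_like(img)
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = img[y:y + block, x:x + block]
            coeff = c @ b @ c.T                       # forward 2-D DCT
            out[y:y + block, x:x + block] = c.T @ (coeff * mask) @ c
    return out

# Demo: degrade a random grayscale image (values in [0, 1]).
img = np.random.default_rng(0).random((32, 32))
degraded = jpeg_like(img, keep=2)
```

Keeping all coefficients (`keep=8`) reconstructs the image exactly, since the DCT basis is orthonormal; small `keep` values produce the visible block artifacts.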
Thank you for your reply, but the problem has not been resolved. When I process my own image, the result is not very good (I am using the correct weights).
Are you sure about the weights? I tried using your image above, and get the exact same result when using the weights that do not include compression (hdrcnn_params.npz). Using the weights that are trained with compression (hdrcnn_params_compr.npz), the result looks like this:
The command for running the reconstruction:
$ python hdrcnn_predict.py --params hdrcnn_params_compr.npz --im_dir input.jpg --width 1280 --height 852
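If you are unsure which weights file you actually loaded, one quick check is to inspect the `.npz` archive with numpy. The snippet below writes a small stand-in archive so it is self-contained; in practice you would point `np.load` at `hdrcnn_params.npz` or `hdrcnn_params_compr.npz`, and the parameter name `conv1_w` here is made up for the demo, not the model's real layer name:

```python
import os
import tempfile
import numpy as np

# Stand-in archive so the snippet runs on its own; replace `path`
# with hdrcnn_params.npz or hdrcnn_params_compr.npz in practice.
path = os.path.join(tempfile.mkdtemp(), "demo_params.npz")
np.savez(path, conv1_w=np.zeros((3, 3, 3, 64), dtype=np.float32))

with np.load(path) as params:
    names = sorted(params.files)                      # stored parameter names
    shapes = {n: params[n].shape for n in names}      # one shape per parameter
```

Comparing the listed parameter names and shapes between the two archives is a simple way to confirm the files really differ and that the intended one is being passed to `--params`.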
Thank you for your reply. Could you explain the difference between input_image, _in.png, _out.png, and out_hdr in detail?
The input image is your input low dynamic range image for which you want to reconstruct HDR information. For the output images, please see the explanation in a previous issue comment.
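As background on the output files: a displayable preview of an HDR result is typically produced by tone mapping the linear radiance values down to 8 bits. A minimal sketch of that idea follows, assuming the HDR data is a linear-radiance float numpy array; the Reinhard-style curve, exposure, and gamma values are illustrative choices, not necessarily what the repository uses for its `_out.png` files:

```python
import numpy as np

def tonemap_preview(hdr, exposure=1.0, gamma=2.2):
    """Map a linear-radiance HDR array to an 8-bit preview image.
    Simple exposure scaling, Reinhard-style range compression, and a
    display gamma; the repository's own tone mapper may differ."""
    ldr = np.clip(hdr * exposure, 0.0, None)   # apply exposure, clamp negatives
    ldr = ldr / (1.0 + ldr)                    # compress [0, inf) into [0, 1)
    ldr = ldr ** (1.0 / gamma)                 # encode for display
    return (np.clip(ldr, 0.0, 1.0) * 255.0 + 0.5).astype(np.uint8)

# Demo: radiance values spanning a wide range map to increasing pixel values.
preview = tonemap_preview(np.array([[0.0, 1.0, 10.0]]))
```

The key property is monotonicity: brighter radiance always maps to a brighter (or equal) display value, while very large radiances are compressed instead of clipped outright.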
Do the images in the data folder need to be preprocessed?
Why are the results not good when processing images downloaded from the internet? Here is the image from the internet:
_in.png
_out.png
![000001_out](https://user-images.githubusercontent.com/13359274/42561609-2183ba82-852c-11e8-8918-37ac0d917d95.png)