OrianHindi closed this issue 3 years ago.
You need to run test_rgb.py first to obtain the rendered RGB, which is assumed to be saved in out_path. Then you can set the same out_path in test_raw.py to get the predicted RAW.
Please note that simulating the RGB with our framework is necessary, since RAW reconstruction relies on our invertible ISP rather than the in-camera ISP, which is lossy.
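In case it helps to see that handoff concretely, here is a minimal Python sketch of the intended flow. The out_path value is only a placeholder, while the pred*jpg pattern is the one test_raw.py globs for:

```python
# Sketch of the two-stage handoff between test_rgb.py and test_raw.py.
# The out_path below is a hypothetical example; use whatever folder you
# configured test_rgb.py to write into.
import os
from glob import glob

out_path = "./exps/demo/"  # assumption: folder where test_rgb.py saved its renders

# Stage 1: test_rgb.py writes rendered RGBs here (files matching pred*jpg).
rendered = sorted(glob(os.path.join(out_path, "pred*jpg")))
if not rendered:
    raise RuntimeError("No pred*jpg files found; run test_rgb.py first and check out_path")

# Stage 2: set out_path in test_raw.py to this same folder, then run it to
# recover the predicted RAW from these rendered RGBs.
print(f"{len(rendered)} rendered RGB files ready for test_raw.py")
```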
@yzxing87 Hi, sorry for bothering you on a closed issue. I have a similar use case to this topic: I'd like to get RAW images from a certain camera and use them to train another deep learning model (e.g., denoising in the RAW domain), avoiding massive data collection (or collecting only a little for fine-tuning and evaluation). Since you have to use the RGB rendered from RAW, which differs from the lossy in-camera output, does this mean we can't use your method to inverse-ISP arbitrary JPG images? Thank you for any advice you can provide.
Hi, for your task, I think it is worth trying to reconstruct RAW from camera-rendered RGB after training an invertible ISP. Although the reconstruction accuracy may drop, it can still be reasonable, since our rendered RGB is quite similar to the camera RGB.
Thank you for the advice! I'll try it out :)
Hi, we tried to run the test.sh script on test_raw.py, and at the line

input_RGBs = sorted(glob(out_path+"pred*jpg"))

input_RGBs comes back as an empty list. We looked at out_path and want to know whether we need to put the images in this folder ourselves, or whether that is supposed to happen during data processing. We ran the test with your pretrained weights. We are trying to convert an RGB image to RAW with your model. Can you please give us some guidelines or tips on how to do so?
Thanks
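For anyone hitting the same empty input_RGBs list, a short diagnostic sketch (with a placeholder out_path) that checks the same glob pattern test_raw.py uses:

```python
# Quick diagnostic for an empty input_RGBs list. The out_path below is a
# hypothetical example; use the folder your test_rgb.py run wrote to.
import os
from glob import glob

out_path = "./exps/demo/"  # must end with a separator, since test_raw.py
                           # concatenates: glob(out_path + "pred*jpg")

print("out_path exists:", os.path.isdir(out_path))
print("contents:", os.listdir(out_path) if os.path.isdir(out_path) else "n/a")
print("matches for pred*jpg:", sorted(glob(out_path + "pred*jpg")))
# An empty match list usually means test_rgb.py has not been run yet,
# out_path points at a different folder, or the trailing "/" is missing.
```

If the folder exists but contains no pred*jpg files, that points back to the answer above: test_rgb.py has to be run first so its rendered outputs land in out_path before test_raw.py can pick them up.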