fnzhan / EMLight

[AAAI 2021] EMLight: Lighting Estimation via Spherical Distribution Approximation, [TIP] GMLight: Lighting Estimation via Geometric Distribution Approximation

requirements to run the code #4

Open weberhen opened 3 years ago

weberhen commented 3 years ago

Hi again!

Could you please share your running environment so I can try to run the code with little to no changes? I'm facing some silly problems, like conversions from NumPy to torch, which I suspect happen because you use a different PyTorch version than mine; otherwise you would hit the same errors as me. One example of an error I'm getting when running:

$ Illumination-Estimation/RegressionNetwork/train.py
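(For anyone hitting the same class of error: the actual traceback wasn't posted, but a NumPy-to-torch conversion that behaves consistently across PyTorch versions is sketched below; the array shape is just an example.)

```python
import numpy as np
import torch

arr = np.random.rand(3, 128, 256)  # e.g. an HDR crop in CHW layout
# from_numpy shares memory and preserves dtype; cast explicitly so the
# result is float32 regardless of the NumPy default (float64)
tensor = torch.from_numpy(np.ascontiguousarray(arr)).float()
print(tensor.shape, tensor.dtype)
```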

Thanks!

goddice commented 3 years ago

How did you preprocess the raw Laval HDR dataset to make the training runnable? Thanks!

weberhen commented 3 years ago

Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py

Please let me know if you find any bugs :)

goddice commented 3 years ago

> Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py
>
> Please let me know if you find any bugs :)

Thank you for your reply. Yes, I have access to the depth and HDR data. So where is the script that calls the depth_for_anchors_calc function? Or could you kindly tell me which scripts need to be run in order to train the model from scratch, supposing I only have the raw depth and HDR data? Thanks!

weberhen commented 3 years ago

The code I used to preprocess the dataset is in this old commit. I ended up erasing it by mistake, but you can still use it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py

This code creates the pkl files needed during training: it takes the GT panorama and the depth map and writes a pickle file with that information.
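(As a rough illustration of what that preprocessing step produces: a pickle bundling the GT panorama and depth. The key names below are hypothetical; the real anchor computation lives in distribution_representation.py.)

```python
import pickle
import numpy as np

def save_training_sample(pano_hdr: np.ndarray, depth: np.ndarray, out_path: str) -> None:
    # Bundle one GT panorama and its depth map into a single pickle;
    # the field names here are illustrative, not the script's actual schema
    sample = {
        "panorama": pano_hdr.astype(np.float32),  # H x W x 3 HDR envmap
        "depth": depth.astype(np.float32),        # H x W depth map
    }
    with open(out_path, "wb") as f:
        pickle.dump(sample, f)
```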

Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder of my forked code: https://github.com/weberhen/Illumination-Estimation

But to be honest I'm not sure it's working: it's on epoch 38 and it has been outputting the same prediction since epoch 3.

LeoDarcy commented 3 years ago

> The code I used to preprocess the dataset is in this old commit. I ended up erasing it by mistake, but you can still use it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py
>
> This code creates the pkl files needed during training: it takes the GT panorama and the depth map and writes a pickle file with that information.
>
> Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder of my forked code: https://github.com/weberhen/Illumination-Estimation
>
> But to be honest I'm not sure it's working: it's on epoch 38 and it has been outputting the same prediction since epoch 3.

EMLight needs the image warping operation from Gardner's work to warp the raw HDR panoramas and get the input images, but I am not sure whether the code for that warping operation has been released here. How do you sample images and warp the panoramas? Thanks in advance.
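(For reference, the warp in question is a perspective extraction from the equirectangular panorama. This is not the authors' released code, just a minimal self-contained sketch of that operation, with nearest-neighbour sampling for brevity.)

```python
import numpy as np

def extract_crop(pano, azimuth, elevation, vfov_deg, out_h, out_w):
    """Warp a lat-long (equirectangular) panorama into a perspective crop
    centred on (azimuth, elevation), both given in radians."""
    f = (out_h / 2.0) / np.tan(np.deg2rad(vfov_deg) / 2.0)  # focal length, pixels
    u, v = np.meshgrid(np.arange(out_w) - out_w / 2.0 + 0.5,
                       np.arange(out_h) - out_h / 2.0 + 0.5)
    # Unit view rays in camera space (x right, y down, z forward)
    d = np.stack([u, v, np.full_like(u, f)], axis=-1)
    d /= np.linalg.norm(d, axis=-1, keepdims=True)
    x, y, z = d[..., 0], d[..., 1], d[..., 2]
    # Pitch (elevation) about the x-axis, then yaw (azimuth) about the y-axis
    ce, se = np.cos(elevation), np.sin(elevation)
    y, z = y * ce - z * se, y * se + z * ce
    ca, sa = np.cos(azimuth), np.sin(azimuth)
    x, z = x * ca + z * sa, -x * sa + z * ca
    # Ray direction -> lat-long pixel coordinates
    lon = np.arctan2(x, z)                   # [-pi, pi], 0 = panorama centre
    lat = np.arcsin(np.clip(y, -1.0, 1.0))   # [-pi/2, pi/2], negative = up
    H, W = pano.shape[:2]
    col = ((lon / (2 * np.pi) + 0.5) * W).astype(int) % W
    row = np.clip(((lat / np.pi + 0.5) * H).astype(int), 0, H - 1)
    return pano[row, col]  # nearest-neighbour lookup
```

For example, `extract_crop(pano, azimuth=0.5, elevation=-0.2, vfov_deg=60, out_h=192, out_w=256)` extracts a 60° vertical-FOV view.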

fnzhan commented 3 years ago

There is a trick to the training: I first overfit the model on a small subset, then train it on the full dataset. Due to the intellectual-property terms of the Laval dataset, I am not sure the trained model can be released.
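(A minimal sketch of the two-stage schedule described above: overfit on a small subset first, then continue on the full dataset. The subset size, batch size, learning rate, and epoch counts are placeholders, not the author's settings; the dataset is assumed to yield (input, target) pairs.)

```python
import torch
from torch.utils.data import DataLoader, Subset

def two_stage_train(model, dataset, loss_fn, warmup_epochs=20, full_epochs=100):
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    # Stage 1: a tiny fixed split to verify the model can overfit at all
    small = Subset(dataset, range(min(64, len(dataset))))
    stages = [(DataLoader(small, batch_size=8, shuffle=True), warmup_epochs),
              (DataLoader(dataset, batch_size=8, shuffle=True), full_epochs)]
    for loader, epochs in stages:  # Stage 2 reuses the same optimizer state
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
```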

weberhen commented 3 years ago

Hello @fnzhan ,

I work with Prof. Jean-François Lalonde, one of the creators of the dataset. You can share the trained model; the only condition is that it be used for research purposes only :)

cyjouc commented 3 years ago

> The code I used to preprocess the dataset is in this old commit. I ended up erasing it by mistake, but you can still use it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py
>
> This code creates the pkl files needed during training: it takes the GT panorama and the depth map and writes a pickle file with that information.
>
> Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder of my forked code: https://github.com/weberhen/Illumination-Estimation
>
> But to be honest I'm not sure it's working: it's on epoch 38 and it has been outputting the same prediction since epoch 3.

Hi, would you mind sharing the code that generates the pkl files, or the dataset preprocessing code? Looking forward to your reply!

cyjouc commented 3 years ago

> Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py Please let me know if you find any bugs :)

> Thank you for your reply. Yes, I have access to the depth and HDR data. So where is the script that calls the depth_for_anchors_calc function? Or could you kindly tell me which scripts need to be run in order to train the model from scratch, supposing I only have the raw depth and HDR data? Thanks!

Hi, would you mind sharing how you processed the depth and HDR data from the Laval HDR dataset? Looking forward to your reply!

xjsxjs commented 2 years ago

> The code I used to preprocess the dataset is in this old commit. I ended up erasing it by mistake, but you can still use it here: https://github.com/weberhen/Illumination-Estimation/blob/5960738cdd7184c3cd897a47840db6b647d013ac/RegressionNetwork/representation/distribution_representation.py
>
> This code creates the pkl files needed during training: it takes the GT panorama and the depth map and writes a pickle file with that information.
>
> Once you have the pkl files, all you need to do is run train.py from the RegressionNetwork folder of my forked code: https://github.com/weberhen/Illumination-Estimation
>
> But to be honest I'm not sure it's working: it's on epoch 38 and it has been outputting the same prediction since epoch 3.

Hello sir, this link no longer seems to open. How should the raw Laval dataset be processed? If you can tell me, I will be very grateful!

weberhen commented 2 years ago

Hi!

You can try my fork: https://github.com/weberhen/Illumination-Estimation-1

jxl0131 commented 1 year ago

> Hi!
>
> You can try my fork: https://github.com/weberhen/Illumination-Estimation-1

Hello!

I am trying to follow your code to crop FoVs from the HDR panoramas using gen_hdr_crops.py, but I found that your code uses some modules like envmap and ezexr which I am not familiar with, and I wonder how to pip install them. After googling, I guess these modules come from skylibs (https://github.com/soravux/skylibs), but I still get some errors after pip install --upgrade skylibs. So could you share your requirements.txt with me?

Thanks!

weberhen commented 1 year ago

Hi @jxl0131 !

It's indeed skylibs. I just tested pip install --upgrade skylibs and it worked. I suggest you ask the original developer if you cannot install it, since that will be the easiest way to get gen_hdr_crops.py working.

Good luck!
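(If it helps, a quick way to confirm the install actually exposes the modules gen_hdr_crops.py needs; just a suggestion, not something from the repo.)

```python
# Both modules ship with skylibs (pip install --upgrade skylibs)
from envmap import EnvironmentMap  # equirectangular panorama handling
import ezexr                       # OpenEXR image I/O

print("skylibs imports OK:", EnvironmentMap.__name__, ezexr.__name__)
```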

jxl0131 commented 1 year ago

> Hi @jxl0131 !
>
> It's indeed skylibs. I just tested pip install --upgrade skylibs and it worked. I suggest you ask the original developer if you cannot install it, since that will be the easiest way to get gen_hdr_crops.py working.
>
> Good luck!

Thanks!

My environment is OK now. I am following your edited GenProjector code, and I found that in your Illumination-Estimation-1/GenProjector/data.py file, in the __getitem__ function, you try to load envmap_exr from a '/crop' directory. I can't understand this; I think it should be another directory containing the full panorama pictures instead of crops. Looking forward to your reply!

weberhen commented 1 year ago

Hi!

I'm sorry about that: it's called crop, but inside it were envmaps; I can't recall why I named it that way. But you can see that the code continues by creating the actual crop from that envmap, so just change the 'crop' folder to match your dataset structure and it should work.

jxl0131 commented 1 year ago

> Hi!
>
> I'm sorry about that: it's called crop, but inside it were envmaps; I can't recall why I named it that way. But you can see that the code continues by creating the actual crop from that envmap, so just change the 'crop' folder to match your dataset structure and it should work.

Haha, I understand what you said now! Thanks!

jxl0131 commented 1 year ago

> Hi!
>
> I'm sorry about that: it's called crop, but inside it were envmaps; I can't recall why I named it that way. But you can see that the code continues by creating the actual crop from that envmap, so just change the 'crop' folder to match your dataset structure and it should work.

Hi! I'm back again. I found that you multiply the crop by reexpose_scale_factor and save the result as the crop in your gen_hdr_crops.py. Is this a step that converts the crop from an HDR image to an LDR image? Why don't you save the LDR image returned by genLDRimage(..) directly?

EMLight's author tone-maps the crops and envmaps in all of his data.py code, which I think converts the crops from HDR to LDR. So why do you use genLDRimage(..) in your gen_hdr_crops.py (a repeated operation)?

code:

```python
# Extract a perspective crop from the panorama at the given viewing direction
crop = extractImage(envmap_data.data, [elevation, azimuth], cropHeight, vfov=vfov, output_width=cropWidth)
# The LDR image itself is discarded (_); only the intensity multiplier that
# puts the median intensity at 0.45 is kept
_, reexpose_scale_factor = genLDRimage(crop, putMedianIntensityAt=0.45, returnIntensityMultiplier=True, gamma=gamma)
# Save the re-exposed (still HDR) cropped envmap
imwrite(os.path.join(output_folder, os.path.basename(input_file)), crop * reexpose_scale_factor)
```

Hoping for your reply!

weberhen commented 1 year ago

Hi!

Re-exposing is not the same as tone-mapping: re-exposure simply maps the range of a given HDR image to another range, which makes the images more similar to each other. I do that because the dataset has some pretty dark HDR images and some super bright ones, so I just apply this (invertible) operation to bring them into a similar range. That is what this script is for: generating (re-exposed) HDR crops :)
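(A toy version of that re-exposure, for clarity. It is not the actual genLDRimage from the script, but it mirrors the putMedianIntensityAt idea: scale the HDR image so its median intensity lands at a target value, and keep the scale so the operation can be inverted.)

```python
import numpy as np

def reexpose(hdr: np.ndarray, put_median_at: float = 0.45):
    # Per-pixel intensity: mean over color channels for an H x W x 3 image
    intensity = hdr.mean(axis=-1) if hdr.ndim == 3 else hdr
    scale = put_median_at / max(float(np.median(intensity)), 1e-8)
    # Invertible: recover the original with reexposed / scale
    return hdr * scale, scale
```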

AplusX commented 1 year ago

> Once you have access to the depth and HDR data from the Laval HDR dataset, you can use the function depth_for_anchors_calc in my forked code: https://github.com/weberhen/Illumination-Estimation/blob/main/RegressionNetwork/data.py
>
> Please let me know if you find any bugs :)

Hi! Could you share the depth maps from the Laval HDR dataset? I found that the download link (http://indoor.hdrdb.com/UlavalHDR-depth.tar.gz) is broken. Thanks!