abdallahdib / NextFace

A high-fidelity 3D face reconstruction library from monocular RGB image(s)
GNU General Public License v3.0

Render resolution is 512 #31

Open · Michaelwhite34 opened this issue 2 years ago

Michaelwhite34 commented 2 years ago

I noticed that the render and debug outputs all have a resolution of 512. Does that mean the per-pixel optimization is based on this 512 image? I just tried a 2048 UV map and the output looks blurry. Should I increase maxRes in image.py? I have changed all the 256/512 values to 2048, but I'm not sure how that will play out.

Michaelwhite34 commented 2 years ago

The processing is not over yet (it takes around 12 hours on CPU); it's currently at step 3. The render is still noisy even when I set the render samples to 20000. Does that mean I need an even bigger number, or do I need to change another parameter? (attached: debug2_iter60_frame0)

Michaelwhite34 commented 2 years ago

When I import the difference map into Photoshop and apply a mean filter, the mean loss won't drop below 30 in step 3. No matter how I adjust these parameters, the debug render is always noisy.

abdallahdib commented 2 years ago

Hi, the debug renderer is always noisy because during optimization we use only 8 samples per pixel (for faster optimization). After the optimization finishes, the final image is rendered with a high number of samples. Please refer to these parameters in optimConfig.ini:

rtSamples = 4000 # number of ray tracer samples to render the final output (higher is better but slower); the best value is 20000, but on my old GPU it takes too much time to render. If you have an NVIDIA RTX you are fine, enjoy :)
rtTrainingSamples = 8 # number of ray tracing samples to use during training

Michaelwhite34 commented 2 years ago

> Hi, the debug renderer is always noisy because during optimization we use only 8 samples per pixel (for faster optimization). After the optimization finishes, the final image is rendered with a high number of samples. Please refer to these parameters in optimConfig.ini:
>
> rtSamples = 4000 # number of ray tracer samples to render the final output (higher is better but slower); the best value is 20000, but on my old GPU it takes too much time to render. If you have an NVIDIA RTX you are fine, enjoy :)
> rtTrainingSamples = 8 # number of ray tracing samples to use during training

Yeah, after a day I figured that out. I think the default of 8 samples is too low; I used 100 samples and the result is great. But for now I have difficulty reproducing the same render result in Blender. I guess it's either because the renderer uses Phong reflection or because it handles the environment texture differently from Blender. See the config sketch below.
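
Concretely, that corresponds to raising the training-time sample count in optimConfig.ini. A sketch of the relevant lines, assuming the key names quoted above (100 is just the value that worked here):

```ini
rtTrainingSamples = 100   # samples per pixel during optimization: less noise in debug renders, slower steps
rtSamples = 20000         # samples for the final high-quality render
```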

Michaelwhite34 commented 2 years ago

I took a screenshot of your video (attached: landmarks0, debug2_iter0_frame0, envMap_0). The colors don't even match.

Michaelwhite34 commented 2 years ago

OK, I think I have found the rotation, but there is still a difference in the render. Here is the comparison (attached: render — Blender Filmic; render_standrad_color — Blender Standard; debug2_iter0_frame0 — debug render). I just use a Principled BSDF with the diffuse, roughness, and specular maps connected directly. The environment map has strength 1.

abdallahdib commented 2 years ago

Hi Michael, the estimated maps (diffuse and specular albedo) should be compatible with rendering engines such as Blender. I am wondering whether Blender's Filmic or Standard view transform adds any post-processing to the final output. Also, did you verify whether any gamma correction is needed for Blender? It seems that Blender Standard is closer to the output of NextFace than Filmic.

Also, I would like to know how you used the environment map in Blender to render the final mesh. There is a GitHub issue where someone tried to do the same (https://github.com/abdallahdib/NextFace/issues/6). Can you please share more information on this? A video or an explanation of how to do it would be great, so that I can add it to the README of the library.

Michaelwhite34 commented 2 years ago

> Hi Michael, the estimated maps (diffuse and specular albedo) should be compatible with rendering engines such as Blender. I am wondering whether Blender's Filmic or Standard view transform adds any post-processing to the final output. Also, did you verify whether any gamma correction is needed for Blender? It seems that Blender Standard is closer to the output of NextFace than Filmic.
>
> Also, I would like to know how you used the environment map in Blender to render the final mesh. There is a GitHub issue where someone tried to do the same (#6). Can you please share more information on this? A video or an explanation of how to do it would be great, so that I can add it to the README of the library.

It might be the environment map resolution. I just used a 4096 environment map on another example and it looks much better. Please add an option to directly export a specific map from the pickle file, and add a time estimate for that; sometimes I really have no idea it will take hours, which causes a lot of anxiety.

abdallahdib commented 2 years ago

The environment map is already exported as an EXR or PNG file (refer to the config file). Which environment map did you use for Blender?

Michaelwhite34 commented 2 years ago

> The environment map is already exported as an EXR or PNG file (refer to the config file). Which environment map did you use for Blender?

I mean I changed the exported environment map resolution setting to 4096; it is 64 by default. Also, we should get the camera focal length so that it is easier to reproduce the same framing as the debug render.

abdallahdib commented 2 years ago

Well, if you change it to 4096, this resolution will be used for the env map during optimization as well. I'm not sure you need that resolution. What did you get with the 64x64 resolution in Blender?

Yes, indeed, the focal length is inside the pickle file.
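
Until a dedicated export exists, here is a minimal sketch of inspecting the checkpoint yourself; the file path is an example and the focal-length key name is hypothetical, so list the keys first:

```python
import pickle

# Load the optimization checkpoint written by NextFace.
with open('checkpoint.pickle', 'rb') as f:  # example path
    ckpt = pickle.load(f)

# The layout of the pickle is version-dependent, so inspect it
# before assuming any particular key.
print(ckpt.keys() if isinstance(ckpt, dict) else type(ckpt))

# Hypothetical key name; replace it with whatever the listing shows:
# print('focal length:', ckpt['focalLength'])
```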

Michaelwhite34 commented 2 years ago

> Well, if you change it to 4096, this resolution will be used for the env map during optimization as well. I'm not sure you need that resolution. What did you get with the 64x64 resolution in Blender?
>
> Yes, indeed, the focal length is inside the pickle file.

Yeah, but it would be easier to read if you printed it in the terminal, at the beginning for example.

abdallahdib commented 2 years ago

I will add an export of the estimated camera parameters to a text file.

Michaelwhite34 commented 2 years ago

> I will add an export of the estimated camera parameters to a text file.

And this as well: add an option to directly export a specific map from the pickle file, add a time estimate for that, and maybe add an option to resume the optimization at a certain stage (2 or 3).

abdallahdib commented 2 years ago

Textures are already saved in the output directory. You can already resume the optimization from a given pickle file (use the --checkpoint flag), and you can skip stage 1 and/or 2 and/or 3 by adding these flags: --skipStage1 --skipStage2 --skipStage3.
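
As a usage sketch, a resume run might look like the following; the optimizer.py entry point and the --input/--output arguments are assumptions based on the usual invocation, while the checkpoint and skip flags are the ones named above:

```
python optimizer.py --input ./input/face.jpg --output ./output \
    --checkpoint ./output/checkpoint.pickle \
    --skipStage1 --skipStage2
```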

abdallahdib commented 2 years ago

Please refer to the main function of optimizer.py.

Michaelwhite34 commented 2 years ago

> Textures are already saved in the output directory. You can already resume the optimization from a given pickle file (use the --checkpoint flag), and you can skip stage 1 and/or 2 and/or 3 by adding these flags: --skipStage1 --skipStage2 --skipStage3.

Sometimes we want to use different settings for the textures. So can we just use --usecheckpoint --skipStage1 --skipStage2 --skipStage3 at the same time to start from the export process? But we may want to export only specific maps.

abdallahdib commented 2 years ago

If you use --usecheckpoint --skipStage1 --skipStage2 --skipStage3, it will only load the pickle file and save the output (no optimization is done then). I'm not sure if this is useful. You can always customize the code to fit your specific needs.

Michaelwhite34 commented 2 years ago

> If you use --usecheckpoint --skipStage1 --skipStage2 --skipStage3, it will only load the pickle file and save the output (no optimization is done then). I'm not sure if this is useful. You can always customize the code to fit your specific needs.

Yeah, that would be nice. Will you add an option to export only a specific map? I can read some code, but honestly it's difficult for me to modify it for my own usage.

Michaelwhite34 commented 2 years ago

> Hi Michael, the estimated maps (diffuse and specular albedo) should be compatible with rendering engines such as Blender. I am wondering whether Blender's Filmic or Standard view transform adds any post-processing to the final output. Also, did you verify whether any gamma correction is needed for Blender? It seems that Blender Standard is closer to the output of NextFace than Filmic.
>
> Also, I would like to know how you used the environment map in Blender to render the final mesh. There is a GitHub issue where someone tried to do the same (#6). Can you please share more information on this? A video or an explanation of how to do it would be great, so that I can add it to the README of the library.

For the OBJ import, use forward -Z and up Y, then rotate X 90, Y 180. For the camera, rotate X -90, Y 180. For the environment map, rotate 90 along Z and 180 along X; that should work.
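
For anyone scripting this, a minimal bpy sketch of those rotations (Blender 3.2+; the file path, object names, and the presence of a Mapping node on the world shader are all assumptions):

```python
import math
import bpy

# Import the mesh with the axis convention above (forward -Z, up Y).
bpy.ops.wm.obj_import(filepath='mesh0.obj',  # example path
                      forward_axis='NEGATIVE_Z', up_axis='Y')
face = bpy.context.selected_objects[0]
face.rotation_euler = (math.radians(90), math.radians(180), 0.0)   # X 90, Y 180

# Camera: X -90, Y 180 (assumes the default camera named 'Camera').
cam = bpy.data.objects['Camera']
cam.rotation_euler = (math.radians(-90), math.radians(180), 0.0)

# Environment map: Z 90, X 180, applied through a Mapping node that
# feeds the world's Environment Texture (must exist and be named 'Mapping').
mapping = bpy.data.worlds['World'].node_tree.nodes['Mapping']
mapping.inputs['Rotation'].default_value = (math.radians(180), 0.0, math.radians(90))
```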

Michaelwhite34 commented 2 years ago

> If you use --usecheckpoint --skipStage1 --skipStage2 --skipStage3, it will only load the pickle file and save the output (no optimization is done then). I'm not sure if this is useful. You can always customize the code to fit your specific needs.

Just tried it; it outputs nothing this way, even after waiting for a long time.

Michaelwhite34 commented 2 years ago

When I tried to export an 8192 environment map, it said the CPU was out of memory. I have 64 GB of memory, though.

Michaelwhite34 commented 2 years ago

> Hi Michael, the estimated maps (diffuse and specular albedo) should be compatible with rendering engines such as Blender. I am wondering whether Blender's Filmic or Standard view transform adds any post-processing to the final output. Also, did you verify whether any gamma correction is needed for Blender? It seems that Blender Standard is closer to the output of NextFace than Filmic.
>
> Also, I would like to know how you used the environment map in Blender to render the final mesh. There is a GitHub issue where someone tried to do the same (#6). Can you please share more information on this? A video or an explanation of how to do it would be great, so that I can add it to the README of the library.

Can you render in redner without the device color transform and color space conversion (i.e. raw)? That way we can actually compare the rendered difference between Blender and redner.

Michaelwhite34 commented 2 years ago

redner uses Phong reflection, which is outdated compared to modern render engines; that might be the reason we can't get similar results.

Michaelwhite34 commented 2 years ago

(attached: render__11 — Blender light, Filmic; debug light — debug render)

Michaelwhite34 commented 2 years ago

By the way, is the exported env map expected to be used directly as an environment color map? As far as I can see, the debug render doesn't have visible colored reflections. Also, do you know the display color space and view transform for redner? It might make sense to compare redner's raw output against Blender's raw output.

I tried to use the output textures in redner on Colab, with only the diffuse texture, roughness = 1, and specular = 0. The result is still very different from Blender. After thinking for a while, I believe that to get consistent renders across engines, we need a similar shading model (something like the Principled/Disney BSDF) and we need to handle everything in linear space: convert the input photo to linear space, export the textures and env map in linear space, and convert to sRGB only in the debug render for the loss preview.
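
For reference, a small numpy sketch of the linear-to-sRGB transfer function such a debug preview would apply (the standard IEC 61966-2-1 curve; not necessarily what redner does internally):

```python
import numpy as np

def linear_to_srgb(x: np.ndarray) -> np.ndarray:
    """Encode linear RGB in [0, 1] with the standard sRGB curve."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.0031308,
                    12.92 * x,
                    1.055 * np.power(x, 1.0 / 2.4) - 0.055)

def srgb_to_linear(x: np.ndarray) -> np.ndarray:
    """Inverse transform, e.g. for linearizing an input photo."""
    x = np.clip(x, 0.0, 1.0)
    return np.where(x <= 0.04045,
                    x / 12.92,
                    np.power((x + 0.055) / 1.055, 2.4))
```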

Refl-ex commented 1 year ago

> I just use a Principled BSDF with the diffuse, roughness, and specular maps connected directly. The environment map has strength 1.

Can you share the Blender node structure? My result is very different from the original when I feed the diffuse, roughness, and specular maps directly into a Principled BSDF.