nagadomi / nunif

Misc; latest version of waifu2x; 2D video to stereo 3D video conversion
MIT License

[Visual Enhancement] Foreground/Background Depth Map Object Blur #264

Open shinra358 opened 19 hours ago

shinra358 commented 19 hours ago

I would like to ask for an option, indicated in the picture below, that applies blur to the frontmost object and to the backmost plane, selected by their color on the depth map, and then transfers that blur to the corresponding areas of the image over a gradient range, kinda like DOF.

The starting color box would be a user-configurable R,G,B. Since the depth map's channels are never different from each other, a single number could also be entered and it would stand for Red, Green, and Blue automatically. Blur type would be a drop-down of different blur types like gaussian or bokeh. A blur strength option (one for the foreground and one for the background, not shown in the pic) would dictate how hard or soft the blurs are on the final image, allocated along the depth-map gradient: the closer to the focus, the lighter the blur; the further from the focus, the heavier the blur. So if a user inputs 25, a depth color of 25 gets no blur, but between 25 and 0 the blur gets progressively stronger over that range (or between 230 and 255 if the user inputs 230 for the back):

DoF idea

Then our images/videos could look like this, masking the double vision that Foreground Scale can cause on that plane when it isn't 0:

triangle_strategy
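
Roughly, the depth-to-blur mapping I have in mind, as a quick NumPy sketch (the function and parameter names are just placeholders, not existing iw3 options):

```python
import numpy as np

def blur_weight(depth, fg_focus=25, bg_focus=230):
    """Map an 8-bit depth value to a blur weight in [0, 1].

    fg_focus / bg_focus stand for the user inputs described above:
    a depth equal to the focus value gets no blur, and the weight
    ramps up linearly toward depth 0 (front) or 255 (back).
    """
    depth = np.asarray(depth, dtype=np.float32)
    fg = np.clip((fg_focus - depth) / max(fg_focus, 1), 0.0, 1.0)        # 25 -> 0.0, 0 -> 1.0
    bg = np.clip((depth - bg_focus) / max(255 - bg_focus, 1), 0.0, 1.0)  # 230 -> 0.0, 255 -> 1.0
    return np.maximum(fg, bg)

print(blur_weight(25), blur_weight(12), blur_weight(0))  # 0.0 ~0.52 1.0
print(blur_weight(230), blur_weight(255))                # 0.0 1.0
```

That weight would then scale whichever blur type is picked in the drop-down (gaussian, bokeh, etc.).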

nagadomi commented 11 hours ago

Where the camera focus is depends on the scene. I have no motivation to add the ability to edit video/image content subjectively in iw3.

I also think that blur in a 2D image and blur in 3D are not the same thing. If you focus your viewpoint on the center, it should be possible to achieve that effect. In a VR video player, screen-edge distortion is less noticeable when using a curved screen.

nagadomi commented 11 hours ago

I don't remember clearly, but I think there was a technique to fix the problem of double vision at the edges of the screen by turning the left and right cameras inward, so that the left and right images are aligned at the edges of the screen.
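
Something in that direction, as a rough sketch (plain horizontal image translation rather than actually rotating the cameras, and not how iw3 handles this internally):

```python
import numpy as np

def shift_convergence(left, right, edge_disparity_px):
    """Crude convergence adjustment by horizontal image translation.

    Each view is shifted by half of the left/right offset measured at
    the screen edge, in opposite directions, so the edge content lines
    up (zero parallax at the edge). The sign depends on the disparity
    convention, and real toe-in would rotate the cameras instead.
    """
    half = edge_disparity_px // 2
    left_out = np.roll(left, -half, axis=1)
    right_out = np.roll(right, half, axis=1)
    # np.roll wraps pixels around the border; crop or pad those columns in real use.
    return left_out, right_out
```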

shinra358 commented 10 hours ago

Setting the IPD offset to 1 or 2, depending on the image, can remedy the issue for the middle and far planes once the TV focal point is adjusted to the far right. However, the frontmost plane will always have the double vision no matter what, because that's how real vision works without the glasses. This DOF-type method would help take one's mind off it, so the eye doesn't keep getting drawn back to it.

negative-foreground-scale

If you look at the first image of the dog, the grass in front of the dog is solid 255,255,255 white. The dog's white is not the same RGB white; it's something like 200,200,200. So if you target the solid white with the blur, the blur will not affect the other whites. That is how you can get the blur to map onto the image precisely, just like a real 3D shot.

And the black background is 0,0,0, a solid black. So if the blur code targets the RGB value 0,0,0, only that pure-black plane will be affected by the blur. That is how you get the image to look like the screenshot of that game.
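
For example, targeting only the pure-black plane could look like this (a rough sketch with Pillow; the file names are just placeholders):

```python
import numpy as np
from PIL import Image, ImageFilter

# Blur only the pixels whose depth is exactly 0 (the solid-black plane),
# leaving near-black but non-zero depths untouched.
rgb = Image.open("frame.png").convert("RGB")
depth = np.array(Image.open("frame_depth.png").convert("L"))

blurred = rgb.filter(ImageFilter.GaussianBlur(radius=8))
mask = Image.fromarray((depth == 0).astype(np.uint8) * 255, mode="L")

out = Image.composite(blurred, rgb, mask)  # mask=255 -> take the blurred pixel
out.save("frame_bg_blur.png")
```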

nagadomi commented 8 hours ago

In the following example, will the dog get blurry? Do we need to manually adjust all frames?

dog_font


You can use the export feature to output the depth map and use it as a blur weight in other image editing software. Here is an example of editing with GIMP.

dog_dof1 dog_dof2
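
For many frames, roughly the same edit could also be scripted instead of done in GIMP; a minimal sketch with Pillow/NumPy, assuming the exported depth is an 8-bit grayscale image (file names and the focus value are placeholders):

```python
import numpy as np
from PIL import Image, ImageFilter

# Blend between the original and a blurred copy using the exported depth
# as a per-pixel weight.
rgb = Image.open("frame_0001.png").convert("RGB")
depth = np.array(Image.open("frame_0001_depth.png").convert("L"), dtype=np.float32)

fg_focus = 25.0
weight = np.clip((fg_focus - depth) / fg_focus, 0.0, 1.0)  # 1.0 at depth 0, 0.0 at/after the focus

orig = np.array(rgb, dtype=np.float32)
blurred = np.array(rgb.filter(ImageFilter.GaussianBlur(radius=6)), dtype=np.float32)
out = orig * (1.0 - weight[..., None]) + blurred * weight[..., None]

Image.fromarray(out.astype(np.uint8)).save("frame_0001_dof.png")
```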

Then the edited image can be imported as an RGB frame. For video, similar editing should be possible with DaVinci Resolve, but I have not used it.