Hi,
I had been looking for a way to generate depth maps from a single image for a while, so I could display them on a Looking Glass Portrait 3D holographic display. I knew it was possible, since Looking Glass offers this as a paid service.
Upon discovering the MiDaS papers and code, I ran a few tests and found the results to be very good.
I then wanted to generate images in the stable-diffusion-webui, automatically produce a depth map for each one, and display the result on my device. So I created this script.
I have a Unity3D project and an HTML viewer using three.js that display the images in 3D on my device as they are generated in the webui.
The image is rendered onto a plane that is deformed by a displacement modifier driven by the depth map (see the sketch below). The same technique can also be used in 3D packages such as Blender. I might add a small demo later.
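For anyone who wants to try the plane-plus-displacement approach themselves, here is a minimal three.js sketch. This is not my actual viewer code, just an illustration of the idea; the file names (`image.png`, `image_depth.png`), the subdivision count, and the `displacementScale` value are placeholders you would adjust for your own images.

```js
// Minimal sketch: displace a subdivided plane with a MiDaS depth map.
// Assumes image.png (color) and image_depth.png (depth, white = near) exist next to the page.
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 2;

const renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

const loader = new THREE.TextureLoader();
const colorMap = loader.load('image.png');       // the generated image
const depthMap = loader.load('image_depth.png'); // the MiDaS depth map

// A densely subdivided plane so the displacement has enough vertices to work with.
const geometry = new THREE.PlaneGeometry(1, 1, 256, 256);
const material = new THREE.MeshStandardMaterial({
  map: colorMap,
  displacementMap: depthMap,
  displacementScale: 0.15, // tune to taste; controls how much depth is exaggerated
});
scene.add(new THREE.Mesh(geometry, material));
scene.add(new THREE.AmbientLight(0xffffff, 1.0));

renderer.setAnimationLoop(() => renderer.render(scene, camera));
```

The same idea carries over to Blender: add a subdivided plane, apply a Displace modifier with the depth map as its texture, and use the generated image as the surface texture.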
A VR headset is another way of viewing it in 3D, again using the same technique. There are existing apps that can load and display RGBD images.
Please move this to the Discussions section; I will close this issue as completed.
Please feel free to ask questions, share ideas, comment, and share results by starting a discussion!
Thanks for sharing.
I really appreciate the reply, very cool projects and certainly inspiring. My apologies, that was a late-night quick reply and I was just on autopilot posting here.
This is very cool, thanks. I was making some 'fake' ones by hand but, obviously, this is better. Can you point me and others to any resources that show a few distinct possibilities of what one can do with these generated files? Very curious after seeing some of your examples and just not sure where to start. Thank you!