dmarx opened 2 years ago
Maybe we could utilize something similar to the system used for the audioreactivity? We could turn this generic with a "data stream" object that could wrap an init image, a video source, audio input, pre-computed depth/flow, whatever (see the sketch below).

Could facilitate #139.
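For concreteness, here's a minimal sketch of what that wrapper might look like. All of these names (`DataStream`, the `sample(t)` signature, the subclasses) are hypothetical, not existing pytti API:

```python
# Hypothetical sketch only: none of these names exist in pytti yet.
from abc import ABC, abstractmethod

import imageio.v3 as iio
import numpy as np


class DataStream(ABC):
    """Anything that yields per-frame conditioning data: init images,
    video frames, audio features, pre-computed depth/flow, etc."""

    @abstractmethod
    def sample(self, t: float):
        """Return the data associated with animation time t (seconds)."""


class VideoStream(DataStream):
    def __init__(self, path: str, fps: float = 30.0):
        self.frames = list(iio.imiter(path))  # list of (H, W, C) frames
        self.fps = fps

    def sample(self, t: float):
        idx = min(int(t * self.fps), len(self.frames) - 1)
        return self.frames[idx]


class PrecomputedDepthStream(DataStream):
    """Wraps pre-computed depth maps saved as one .npy file per frame."""

    def __init__(self, paths, fps: float = 30.0):
        self.paths, self.fps = paths, fps

    def sample(self, t: float):
        idx = min(int(t * self.fps), len(self.paths) - 1)
        return np.load(self.paths[idx])
```

The animation loop would then just call `stream.sample(t)` each frame without caring whether the data came from a video, an audio feature, or a pre-computed depth sequence.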
Adding in the concepts from #122:

- Setting to allow extra detailing for depth maps (assuming it currently takes a fixed amount of time and a longer iteration/processing buffer could be instituted)
- Ability to export depth maps for other art purposes (Blender, Depthy)
- Setting to feed depth map visual data in for depth=contrast (could look cool?)
- Same but with purple-orange depth map (makes images 3D in chromadepth glasses)
> Setting to allow extra detailing for depth maps

Can you elaborate? I'm not sure what you're describing here.
> Ability to export depth maps for other art purposes (Blender, Depthy)

Already tracking this in a separate issue; see #191.
> Setting to feed depth map visual data in for depth=contrast

What do you mean by depth=contrast? Is this a separate effect from depth you're describing here? An alternative way of interpreting depth?
> Same but with purple-orange depth map (makes images 3D in chromadepth glasses)

If I understand correctly, I think you're talking about processing depth so the output has a 3D effect if a user looks at it with 3D glasses, yeah? That would definitely be a separate issue. We could possibly make it a sub-experiment for #190 (which was more targeting magic-eye effects, but 3D glasses might be a more user-friendly or maybe even more inclusive approach).
> Can you elaborate? I'm not sure what you're describing here.

So I haven't seen the depth maps that Pytti produces, but from how the motion system looks, it's giving the image a once-over, making a depth map of the general shapes, applying the 3D transition along it, then taking another glance. If the system could be forced to take more time and generate more precise detail, the resulting depth map would be more complex and the resulting motion would be more intense. This would also make any exported depth maps more detailed for their purposes.
> What do you mean by depth=contrast? Is this a separate effect from depth you're describing here? An alternative way of interpreting depth?

If the depth map output frame could be fed back in as a direct input to the next frame, the next resulting frame would have areas that were darker the deeper in they went and brighter the closer they were. Here's a mockup of what that might look like. Animated, I believe it would create a very striking image.
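That feedback step could be approximated directly. A minimal sketch, assuming the depth map is a float array where larger values mean farther away (the function name is hypothetical, not pytti API):

```python
import numpy as np
from PIL import Image


def depth_to_contrast_image(depth: np.ndarray) -> Image.Image:
    """Map a raw depth map to grayscale where near = bright and
    far = dark, suitable for feeding back in as the next init image.
    Assumes larger depth values mean farther away."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    luma = (1.0 - d) * 255.0  # invert: deeper regions come out darker
    return Image.fromarray(luma.astype(np.uint8), mode="L")
```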
> If I understand correctly, I think you're talking about processing depth so the output has a 3D effect if a user looks at it with 3D glasses

Sort of. Chromadepth is a weird refracting lens system, not like red-blue at all. It's a pair of plastic glasses with clear, shiny plastic lenses, and light refracts through them at different rates depending on color: red comes forward, then yellow, then green, then teal, then blue, then purple (where purple is deeply receded). There's a local artist who paints while wearing the glasses, so her pieces are 3D, but if you aren't wearing them they're just beautiful and colorful, with no weird distortions. If someone's made a chromadepth depth map generator, that'd be optimal, but the orange-purple map also carries the effect well.
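Mapping normalized depth onto that red-through-purple ramp is straightforward to sketch. A simple version using a linear HSV hue sweep, which only approximates the true chromadepth response curve:

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb


def chromadepth_colorize(depth: np.ndarray) -> np.ndarray:
    """Colorize a depth map along the chromadepth ramp:
    red (near) -> yellow -> green -> teal -> blue -> purple (far).
    Linear hue sweep is an approximation of the real lens response."""
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    # HSV hue 0.0 is red; ~0.78 lands on purple/violet.
    hsv = np.stack([d * 0.78, np.ones_like(d), np.ones_like(d)], axis=-1)
    return (hsv_to_rgb(hsv) * 255).astype(np.uint8)  # (H, W, 3) uint8 RGB
```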
> the resulting depth map would be more complex

OK, I think I get what you're saying now. There might be some parameters on the depth model (AdaBins) I could expose that would give you more control over the depth map estimation process. There are also other depth models we could play with; e.g., Disco uses MiDaS, which is definitely among the SOTA. There's also a technique for blending depth maps of different resolutions to balance near and far detail (rough sketch below). I've been meaning to make the depth and flow models more configurable, so this could definitely be part of that.
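The blending idea is roughly: estimate depth at a low resolution (good global structure) and at full resolution (good local detail), then merge the two. A very simplified sketch, with `estimate_depth` standing in for whatever model (AdaBins, MiDaS) does inference; real methods also align the scales of the two estimates, which this skips:

```python
import cv2
import numpy as np


def blended_depth(image: np.ndarray, estimate_depth) -> np.ndarray:
    """Blend a coarse (global structure) and a fine (local detail) depth
    estimate. `estimate_depth(img) -> float32 depth map` is a stand-in
    for AdaBins/MiDaS inference; this skips the scale alignment that
    real multi-resolution methods perform."""
    h, w = image.shape[:2]
    # Coarse pass: downscale so the model sees the whole scene at once.
    small = cv2.resize(image, (w // 2, h // 2))
    coarse = cv2.resize(estimate_depth(small), (w, h))
    # Fine pass: full resolution captures near-field detail.
    fine = estimate_depth(image)
    # Keep the coarse low frequencies, add back the fine high frequencies.
    fine_high = fine - cv2.GaussianBlur(fine, (0, 0), sigmaX=16)
    return coarse + fine_high
```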
> the next resulting frame would have areas that were darker the deeper

I can't remember what it's called, but I remember stumbling across a model that does this more directly, i.e. accentuates shadows: basically what you're describing, but with more respect for lighting and less of a heuristic. I'll see if I can't dig that up for you to at least play with as a post-processing step. I think maybe Aphantasia uses something like this?
Oh interesting, I'll have to learn more about this!
Here we go, this thing: https://github.com/simeonradivoev/NNAO
Neat, and we should definitely test it, but the effect I'm thinking of specifically reflects depth. As a direct example, the deeper the tunnel goes, the darker it gets. This could generate a lot of depth-of-field contrast and strongly enhance the effect of the far_plane setting.
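That falloff could be applied as a simple compositing step. A sketch, assuming the depth map is in the same units as pytti's near_plane/far_plane settings; the linear falloff and the function itself are my own simplification, not anything pytti currently does:

```python
import numpy as np


def apply_depth_falloff(rgb: np.ndarray, depth: np.ndarray,
                        near_plane: float = 1.0,
                        far_plane: float = 10_000.0) -> np.ndarray:
    """Darken pixels in proportion to depth: full brightness at
    near_plane, fading to black by far_plane. Linear falloff is a
    simplification; a physically motivated fog curve would be
    exponential in depth."""
    t = (depth - near_plane) / (far_plane - near_plane)
    brightness = 1.0 - np.clip(t, 0.0, 1.0)
    return (rgb.astype(np.float32) * brightness[..., None]).astype(np.uint8)
```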