Closed · Ainaemaet closed this 6 months ago
Hi, thanks for reaching out!
> but as the title says, under the section "Grab the latest DepthFlow Release for your platform, run it" there is no release(s) available.
You're right, there are no releases available yet. I forgot to strike that section through on the README when I was reformatting it to match my other projects.
The reason there are no binaries is that the code was only configurable from within Python itself, until I made a quick and hacky GUI using Gradio (I committed the half-baked code yesterday), as seen below:
The code works, it just isn't ideal to work with: the camera shake and roll parameters would be hard-coded in a binary, though they are modifiable in the script. For now, please run it from the source code.
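For illustration, a procedural camera shake is typically just a couple of layered, out-of-phase sinusoids. This is a hypothetical sketch of that idea, not DepthFlow's actual code; the function name and parameters are mine:

```python
import math

def camera_shake(t, amplitude=0.01, frequency=2.0):
    """Toy procedural camera shake: two out-of-phase sine waves.

    NOTE: illustrative sketch only, not the project's implementation;
    in DepthFlow the equivalent parameters live in the scene script.
    t: time in seconds; returns an (x, y) offset in image-space units.
    """
    x = amplitude * math.sin(2 * math.pi * frequency * t)
    # Slightly detuned frequency and a phase offset so the path never
    # repeats exactly, which reads as more "organic" motion.
    y = amplitude * math.sin(2 * math.pi * frequency * t * 1.3 + 1.0)
    return (x, y)
```

Adding more octaves (or swapping the sines for smooth noise) gives a rougher, handheld-looking shake.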
I saw an opportunity for a release and was preparing for it, but two issues popped up:
I just needed something that worked for a project with a friend at the time, so I didn't invest further in making it proper "software". Currently I'm working on a definitive version of another project's code base that DepthFlow will use: the shader-renderer "framework" ShaderFlow. Since DepthFlow is basically a shader with two input images, it's just one of the many things ShaderFlow will be able to do, with much more solid, "video-editor"-like, reactive configuration.
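To make the "shader with two input images" idea concrete, here is a rough CPU sketch of the core parallax sampling, assuming a grayscale image and a normalized depth map. This is my own toy re-derivation, not the project's GLSL shader, which does the same per-pixel work on the GPU:

```python
import numpy as np

def parallax_shift(image, depth, camera_x=0.1):
    """Crude CPU sketch of depth-based parallax sampling.

    image:    (H, W) array of pixel values
    depth:    (H, W) array in [0, 1], where 1 is closest to the camera
    camera_x: horizontal camera offset as a fraction of the image width

    NOTE: illustrative only; the real effect is a fragment shader.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        # Closer pixels (higher depth) move more with the camera,
        # which is what creates the parallax illusion.
        shift = (depth[y] * camera_x * w).astype(int)
        src = np.clip(xs - shift, 0, w - 1)
        out[y] = image[y, src]
    return out
```

Animating `camera_x` over time (e.g. with a sine wave) and re-sampling every frame yields the sweeping parallax motion seen in the demo videos.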
> As well, are there any examples available for what exactly this does? Going by what's provided it sounds like it should take an image, generate a depth map and then animate it - but how exactly?
Related to the previous issue, I diverged to work on the other project and didn't have the time to put together a definitive demonstration, but I can certainly give you examples here!
For depth estimation I'm currently using ZoeDepth, which I found to be accurate and edge-smooth enough for this shader. You can experiment with input images and their depth estimations on this HuggingFace Space.
Here's a demo with some random image I prompted on MidJourney:
Camera Focus=1
https://github.com/BrokenSource/DepthFlow/assets/29046864/3cad77bc-3315-446c-94f5-06735958e276
Camera Focus=0
https://github.com/BrokenSource/DepthFlow/assets/29046864/aab791b6-51a4-4703-9467-9e21f7fb05df
And here's the input image and calculated depth map of it:
Input Image:
Depth Map:
It bugs me to leave this project in this half-baked state, but I really want to rebase it on a much better shader-renderer backend. That will let us, for example, easily add post-FX layers like vignette and lens distortion, maybe some pre-made particle shaders, make the animation react to music, and have a proper camera for dolly-zoom-like effects instead of just zoom and movement.
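The dolly-zoom effect mentioned above reduces to one constraint: keep the subject's on-screen size constant by holding `distance * tan(fov / 2)` fixed while the camera moves. A small sketch of that relation (my own derivation of the standard formula, not code from DepthFlow or ShaderFlow):

```python
import math

def dolly_zoom_fov(initial_fov_deg, initial_distance, new_distance):
    """Return the FOV (degrees) that keeps the subject the same size
    on screen after moving the camera from initial_distance to
    new_distance. Illustrative math only.
    """
    half = math.radians(initial_fov_deg) / 2
    # Frustum half-width at the subject's plane; this is the quantity
    # the dolly zoom holds constant.
    width = initial_distance * math.tan(half)
    return math.degrees(2 * math.atan(width / new_distance))
```

Pulling the camera back (`new_distance` larger) narrows the FOV, producing the characteristic background stretch while the subject stays put.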
The "text-to-video" part might get postponed or canceled entirely - I've learned that's just not how Stable Diffusion works. It's doable, but more basic things need to be working before we move on to it.
Anyway, running from the source code should work for the time being as a proof of concept. Please get in touch if you have any other issues or questions!
Thank you for the concise and in-depth explanation.
I haven't had the chance to try any of the code yet, but the parallax movement looks great and I'm looking forward to seeing what comes of it all.
Hi @Ainaemaet, an update:
I'm closing this, as the "Precompiled binaries" section is no longer in the README - at least until I reimplement the releases logic and get standalone executables working properly.
The project is becoming less dependent on the main scene script file itself, as I'm planning to ship presets with their own configuration, available directly from the CLI and from the realtime window.
By the way, the parallax effect in the videos above was from the original 2D-only code. I've since done the math and updated the shader to work fully under a 3D camera - perspective, isometric, zoom and all. See the main README for the newer results!
Hello, I would like to try this project, but as the title says, under the section "Grab the latest DepthFlow Release for your platform, run it" there are no releases available.
As well, are there any examples available of what exactly this does? Going by what's provided, it sounds like it should take an image, generate a depth map, and then animate it - but how exactly?