Closed trajano closed 1 month ago
MiDaS does not take multiple inputs.
Due to size restrictions I am using a small model (50 MB), which results in lower-quality depthmaps. You can try a higher-quality model from elsewhere and replace the depthmap image.
Perhaps provide an option or a link to pull them in the app? I am guessing it's something from here: https://github.com/isl-org/MiDaS?tab=readme-ov-file, but even if I download the .pt file, I am not sure where it is supposed to go.
It needs to be converted to an ONNX file; you have to build it yourself: https://github.com/lewiji/MiDaS
What I mean is that you can use other tools and algorithms to generate the depthmap file and replace the one in the wallpaper: https://github.com/thygate/stable-diffusion-webui-depthmap-script
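Swapping the depthmap is just a file replacement. A minimal sketch, assuming the wallpaper stores its depth image as `depth.jpg` in its data folder (a hypothetical filename; inspect the wallpaper folder to find the real one):

```python
import shutil
from pathlib import Path

def replace_depthmap(wallpaper_dir: str, new_depthmap: str) -> Path:
    """Back up the wallpaper's existing depthmap and copy a new one in place."""
    # "depth.jpg" is an assumed filename; check the wallpaper's data
    # folder for what the depth image is actually called.
    target = Path(wallpaper_dir) / "depth.jpg"
    if target.exists():
        # Keep a backup so the original can be restored.
        shutil.copy2(target, target.with_name(target.name + ".bak"))
    shutil.copy2(new_depthmap, target)
    return target
```

After swapping the file, re-apply the wallpaper so Lively picks up the new depthmap.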
> small model (50MB)
For a while I have been using another tool to replace the generated depth map. Just today I found Upgraded-Depth-Anything-V2, and even its small model (48 MB) is much better than the current MiDaS output. Is it possible to change the model that is bundled with Lively?
Someone will have to implement the algorithm first before Lively can use it: https://github.com/rocksdanister/lively/tree/core-separation/src/Lively/Lively.ML/DepthEstimate
**Is your feature request related to a problem? Please describe.**
It would be nice to help the AI by giving it multiple images, but still designate a primary image and let it use that to refine the output.

**Describe the solution you'd like**
Open customize and allow me to attach multiple images after I provide the first one.

**Additional context**
I play around with Blender, so I can take shots from different angles.