vt-vl-lab / 3d-photo-inpainting

[CVPR 2020] 3D Photography using Context-aware Layered Depth Inpainting
https://shihmengli.github.io/3D-Photo-Inpainting/

📣 Do not waste your time with this old repo; here is the working one! #211

Open NeoAnthropocene opened 9 months ago

NeoAnthropocene commented 9 months ago

Hi guys, I spent hours on this repo. At first it worked, but then something happened and now I can't run the script.

Take my advice: don't waste your time struggling with this old repo; it's pretty much dead.

Below is a link to an extension plugin for A1111. If you're interested in this kind of image creation, you most likely already have your hands on A1111.

The plugin works flawlessly without any setup hassle 🔥

🔗 https://github.com/thygate/stable-diffusion-webui-depthmap-script

👉 This example shows how you can use it.

⚠ Please come back and leave feedback here once you've been successful, so other people can also benefit from this info. Thanks.
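
In case you haven't installed an A1111 extension before, it usually just means cloning the repo into the webui's extensions folder. A rough sketch, assuming a standard stable-diffusion-webui layout (you can also use the Extensions → Install from URL tab in the UI):

```bash
# Assuming a standard AUTOMATIC1111 stable-diffusion-webui checkout
cd stable-diffusion-webui/extensions
git clone https://github.com/thygate/stable-diffusion-webui-depthmap-script.git
# Restart the webui afterwards; the plugin then appears in the UI
# (the exact tab/script name can vary between versions).
```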

fernando1999smith commented 9 months ago

Hello, NeoAnthropocene!

Please explain clearly: what is this project about? I've already built 2 depth mask PNG files for one source photo in Google Colab. I'm not familiar with this topic, so what should I do next? The current project (3d-photo-inpainting) was pretty simple; it helped me make a short mp4 video automatically in the Colab environment. What about this one? If you suggest an alternative, you should probably give a short explanation! Thank you in advance!

fernando1999smith commented 9 months ago

I don't have a powerful computer with a GPU (like many other people). That's why I used the current project '3d-photo-inpainting' via Google Colab. Is it possible to do the same with the new project you are suggesting? If so, please share a Colab link. Thanks.
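
For context, this is roughly what running the current repo in a Colab notebook looks like, based on its README (each command prefixed with ! in a cell; a sketch only, so double-check the README for the exact steps):

```bash
# In a Colab cell, prefix each command with "!" (and use %cd to change directories persistently)
git clone https://github.com/vt-vl-lab/3d-photo-inpainting.git
cd 3d-photo-inpainting
pip install -r requirements.txt
chmod +x download.sh && ./download.sh   # downloads the pretrained models
# put your source photo(s) into the image/ folder, then:
python main.py --config argument.yml    # rendered videos are written to the video/ folder by default
```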

NeoAnthropocene commented 9 months ago

> I don't have a powerful computer with a GPU (like many other people). That's why I used the current project '3d-photo-inpainting' via Google Colab. Is it possible to do the same with the new project you are suggesting? If so, please share a Colab link. Thanks.

It seems you can, but I've never tried it.

Can I run this on Google Colab?

NeoAnthropocene commented 9 months ago

> Hello, NeoAnthropocene!
>
> Please explain clearly: what is this project about? I've already built 2 depth mask PNG files for one source photo in Google Colab. I'm not familiar with this topic, so what should I do next? The current project (3d-photo-inpainting) was pretty simple; it helped me make a short mp4 video automatically in the Colab environment. What about this one? If you suggest an alternative, you should probably give a short explanation! Thank you in advance!

That project is a depth plugin (add-on) for A1111, and it has a 3D photo inpainting mode derived from this repo. The plugin can also be run in standalone mode (I've never tried it, but I'm also planning to use it that way).

It worked well, and I created some good marketing assets for a campaign project with 1080×1920 images. You can see examples at links 1 and 2.

fernando1999smith commented 9 months ago

Hi! Thanks for the reply! So I've concluded that this project wasn't developed for Colab and is only suitable for a standalone computer with a GPU. Unfortunately, that's not a solution for me. What can I say... I had different expectations, although I did look at the 2 examples you gave with genuine interest (just to compare their results with the old ones). From my perspective, the old original project had a deeper quality of movement for the background layer in parallax mode (linear zoom-in/zoom-out). I'm afraid I wasn't impressed by the results at all. It would be better to use Adobe After Effects in manual mode to achieve a much stronger result, but that takes a lot of time per photograph. Examples (Russian language): https://www.youtube.com/watch?v=ZPSX3ouYFqM Unbelievable result here: https://www.youtube.com/watch?v=YXRiTMJ6HR0

NeoAnthropocene commented 9 months ago

> Hi! Thanks for the reply! So I've concluded that this project wasn't developed for Colab and is only suitable for a standalone computer with a GPU. Unfortunately, that's not a solution for me. What can I say... I had different expectations, although I did look at the 2 examples you gave with genuine interest (just to compare their results with the old ones). From my perspective, the old original project had a deeper quality of movement for the background layer in parallax mode (linear zoom-in/zoom-out). I'm afraid I wasn't impressed by the results at all. It would be better to use Adobe After Effects in manual mode to achieve a much stronger result, but that takes a lot of time per photograph. Examples (Russian language): https://www.youtube.com/watch?v=ZPSX3ouYFqM Unbelievable result here: https://www.youtube.com/watch?v=YXRiTMJ6HR0

Oh, if you're referring to the depth of the parallax effect, I intentionally designed it that way. I aimed to create a Vertigo Effect with the images. However, it becomes distorted when you increase the intensity of the effect.

Nevertheless, you can't achieve the quality of a handcrafted After Effects result with either of these GitHub repos; it's a completely different approach. With these AI repos and a powerful graphics card, you can get a result in just 15 minutes. In AE, though, achieving the same level of quality and style as the videos you showed me would require significantly more time, especially if you don't have expertise in this type of effect. The choice depends on your specific style and quality requirements for your project.

Cheers.