Open dengtianbi opened 5 months ago
Hi there,
Thanks for your interest in this application! We haven't updated support for inpainting ControlNet code yet (will be added in a couple of weeks).
If you are in urgent need of that model, you can try to implement it yourself with some simple modifications of our codebase:
During training, instead of giving a sequence of depth/canny frames as input, give randomly block-masked frames as input. All other training hyperparameters can basically remain the same. During inference, you can give a sequence of block-masked frames, and the model should be able to recover the content in the masked area according to the guidance of the text prompt.
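To illustrate the data-preparation step, here is a minimal sketch of how the random block masking could look. This is not from the repository; the function name and the choice of masking the same region across all frames are assumptions (a consistent region per clip is a common choice so the model learns temporally coherent inpainting):

```python
import numpy as np

def random_block_mask(frames, block_frac=0.4, seed=None):
    """Zero out a random rectangular block in every frame of a clip.

    frames: array of shape (T, H, W, C), pixel values in any range.
    block_frac: side length of the block as a fraction of H and W.
    The same block location is used for all T frames so the model
    has to inpaint one consistent region across time.
    Returns (masked_frames, mask) where mask is 1 inside the block.
    """
    rng = np.random.default_rng(seed)
    t, h, w, c = frames.shape
    bh, bw = int(h * block_frac), int(w * block_frac)
    # Pick the top-left corner of the block uniformly at random.
    y0 = rng.integers(0, h - bh + 1)
    x0 = rng.integers(0, w - bw + 1)
    mask = np.zeros((h, w), dtype=frames.dtype)
    mask[y0:y0 + bh, x0:x0 + bw] = 1
    # Broadcast the 2-D mask over the time and channel axes.
    masked = frames * (1 - mask)[None, :, :, None]
    return masked, mask
```

At training time the masked frames would replace the depth/canny conditioning sequence; at inference time you would mask the region you want regenerated and let the text prompt guide the fill.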
Hi HL-hanlin,
Doing inpainting would be really useful. It'd be great if you could share the inpainting model :)
Hi, I noticed that there are inpainting examples in the project, but it seems they are not included in the code, and I haven't seen any related models available for download. Could you please advise on how to perform video inpainting?