I think the biggest potential drawback vs the current mirroring/blurring method is that temporal consistency between frames might not be great, as is usually the case with generative video models so far.
A model trained on each individual video (or at least conditioned on a couple of neighboring frames) would perform much better, but that would be incredibly slow and complex.
With recent advances in generative image outpainting, would it be possible to have the corners "filled" in via an alpha channel instead of left black?
Of course, it would be even better if everything was handled inside VapourSynth (a rough sketch of what I mean follows below). There are many open source outpainting options on GitHub:
https://github.com/Udit9654/Outpainting-Images-and-Videos-using-GANs
https://github.com/basilevh/image-outpainting
https://github.com/nanjingxiaobawang/SieNet-Image-extrapolation
https://github.com/lkwq007/stablediffusion-infinity/
https://github.com/taabata/LCM_Inpaint_Outpaint_Comfy
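Just to make the VapourSynth side concrete, here is a minimal sketch of the compositing step, assuming the actual outpainting is done by one of the tools above and fed back in as a second clip. The Binarize-based corner detection and the threshold value are my own guesses, not anything those repos provide:

```python
import vapoursynth as vs

core = vs.core

def merge_outpainted_corners(clip: vs.VideoNode, outpainted: vs.VideoNode) -> vs.VideoNode:
    """Composite `outpainted` over `clip` wherever the original is near black.

    `outpainted` is assumed to be a same-format, same-size clip produced by
    an external outpainting model; only the masking/merging happens here.
    """
    # Binarize the luma plane to find the black corners; threshold=20 is a
    # guess (just above limited-range black at 16) and needs tuning per source.
    luma = core.std.ShufflePlanes(clip, planes=0, colorfamily=vs.GRAY)
    mask = core.std.Invert(core.std.Binarize(luma, threshold=20))
    # MaskedMerge with first_plane=True applies the GRAY mask to all planes,
    # which is effectively the alpha-channel compositing idea above.
    return core.std.MaskedMerge(clip, outpainted, mask, first_plane=True)
```

The per-frame model inference would still have to happen outside the script (or be wired in via std.ModifyFrame with numpy buffers), but at least the mask generation and compositing would stay inside VapourSynth.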