Open lixixin opened 3 years ago
Hi,
Thanks for your interest in my work. I am still organizing the code. Since I just graduated with my PhD, I will be moving and traveling in the coming weeks. So there will be some delay. I apologize for the inconvenience. If there is any urgent request, please directly send an email to dong.lao@kaust.edu.sa.
Hey @donglao, isn't the pandemic still going on? Where could you even go travel? ^^
Hi, the Matlab demo code is already there. I will organize the evaluation toolkit and upload it today.
Hi, are there any plans to release a Python version of the codebase? What changes must be made to use optical flow methods other than the Sobolev flow used in the paper, which inherently takes care of the masked region as mentioned in the paper?
Hi, since Sobolev flow was originally written in C++ with a MATLAB wrapper, migrating the codebase to Python is non-trivial. An alternative is to use the flow inpainter proposed by TFG (or the edge-guided version from FGVC). I believe those models can propagate optical flow into masked regions; however, there will be no refinement as done by Sobolev flow.
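To make "propagating optical flow into masked regions" concrete, here is a minimal sketch of the simplest possible stand-in: filling each masked pixel with the flow vector of its nearest valid pixel. This is not the TFG/FGVC inpainter and has no refinement step; the function name `fill_flow_nearest` is hypothetical.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def fill_flow_nearest(flow, mask):
    """Naive flow inpainting: copy the nearest valid flow vector
    into each masked pixel. A rough baseline, not the learned
    inpainters (TFG / FGVC) discussed above.

    flow: (H, W, 2) float array of (u, v) flow components
    mask: (H, W) bool array, True where flow is missing
    """
    # For every pixel, the index of the nearest unmasked pixel
    # (unmasked pixels map to themselves).
    _, (iy, ix) = distance_transform_edt(mask, return_indices=True)
    filled = flow.copy()
    filled[mask] = flow[iy[mask], ix[mask]]
    return filled
```

A learned inpainter would produce smoother, edge-aware extrapolation; this nearest-neighbor fill only illustrates the interface (flow + mask in, completed flow out).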
What changes need to be made to get the full scene template instead of the cropped version? Can you share the codebase for full-scene template inference if possible? Thank you, Likith P
Hi, based on the current codebase, you can simply place the input image at the center of a larger "image", zero-pad the boundaries, and adjust the flow field and mask accordingly, so that the scene template is larger than the original image. I think I also have MATLAB code for a full-scene template from another project that you may use as well. Please contact my email @.*** if you are interested.
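The padding step described above can be sketched as follows. This is an illustrative NumPy version, not the repo's MATLAB code; the function name `pad_to_template` and the mask convention (True = region to be inpainted) are assumptions.

```python
import numpy as np

def pad_to_template(image, flow, mask, pad):
    """Place the input at the center of a larger canvas so the scene
    template can extend beyond the original frame.

    image: (H, W, C) image
    flow:  (H, W, 2) optical flow
    mask:  (H, W) bool, True = region to be inpainted (assumed convention)
    pad:   pixels added on each side
    """
    H, W = mask.shape
    image_p = np.zeros((H + 2 * pad, W + 2 * pad, image.shape[2]), image.dtype)
    flow_p = np.zeros((H + 2 * pad, W + 2 * pad, 2), flow.dtype)
    # The padded border has no observations, so it also counts as
    # a region for the method to fill in.
    mask_p = np.ones((H + 2 * pad, W + 2 * pad), dtype=bool)
    sl = (slice(pad, pad + H), slice(pad, pad + W))
    image_p[sl] = image
    flow_p[sl] = flow
    mask_p[sl] = mask
    return image_p, flow_p, mask_p
```

Feeding the padded image, flow, and mask through the existing pipeline should then yield a template larger than the original frame, with the border filled the same way as the masked interior.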
Thank you. I have contacted you at your UCLA email address from my official university email. Looking forward to your response.
Hi, this is excellent work. When will the code be published?