-
Dear author,
Thanks for your great efforts maintaining this project.
I have been working on 2D image generation with diffusion for half a year, and would now like to extend to video generation.
I …
-
Thanks for this great research and implementation!
I am curious whether you know of any examples of video style transfer involving people speaking to a camera. Al…
-
-
I am trying to make a timelapse using frames from another day-to-night timelapse video, but I am getting the following error message in Google Colab:
`result path : ./results/
100%|██████████| 1…
-
Hi @john-rocky
I made an iOS app called [Lensto](https://apps.apple.com/in/app/lensto-background-changer/id1574844033) that uses U2net, AnimeGanV2, Style Transfer, and many other cool filters and ef…
-
Hi, thanks for sharing your great work!
I'm curious about the style transfer in 2D space and want to learn more about it. Could you share the 2d pose extraction settings? e.g. video resolution, param…
-
Hello
I ported your model into Google Colab and created a fully functional application, including image and video transfer.
Colab:
A dual-language (Chinese and English) version has been updated:
https://col…
-
I'm trying to apply style transfer to a video frame by frame, but when I use multiscale generation the results vary heavily even for images that are almost identical. I tried without multiscale genera…
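When near-identical frames produce very different stylized outputs, a common culprit is fresh random noise being sampled at each scale for every frame. A minimal sketch of one workaround, assuming a PyTorch model whose calling convention (`model(frame)`) is hypothetical here: re-seed the RNG before every frame so the sampled noise is identical across frames.

```python
import torch

def stylize_frame_deterministic(model, frame, seed=0):
    """Run `model` on one frame after fixing the RNG seed, so any
    random sampling inside the model (e.g. multiscale noise) is
    identical from frame to frame."""
    torch.manual_seed(seed)
    with torch.no_grad():
        return model(frame)

# Toy stand-in for a stylization network: adds random "texture".
noisy_model = lambda x: x + 0.1 * torch.randn_like(x)

frame = torch.zeros(1, 3, 8, 8)
out1 = stylize_frame_deterministic(noisy_model, frame)
out2 = stylize_frame_deterministic(noisy_model, frame)
assert torch.equal(out1, out2)  # identical noise → identical output
```

This only removes variation that comes from sampling; variation caused by the network itself being sensitive to small input changes needs temporal smoothing or consistency losses instead.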
-
Hi there,
I make heavy use of @virtualbus while documenting my code. I've noticed that the Documenter tool's output is still broken for two or more consecutive virtualbus groups. In particular, only …
-
Is it possible to reduce the difference between the output of the network for two different but very similar inputs?
I'm asking because if I run py webcam.py ... and sit still in front of the webca…
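One way to damp this kind of frame-to-frame flicker without touching the network is to post-process its outputs with an exponential moving average. A minimal NumPy sketch (the class name and `alpha` value are illustrative, not part of the repository):

```python
import numpy as np

class OutputSmoother:
    """Exponential moving average over successive network outputs.
    Reduces frame-to-frame flicker at the cost of slight ghosting
    on fast motion; `alpha` closer to 1.0 trusts the new frame more."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None

    def __call__(self, frame):
        frame = np.asarray(frame, dtype=np.float32)
        if self.state is None:
            self.state = frame          # first frame passes through
        else:
            self.state = self.alpha * frame + (1 - self.alpha) * self.state
        return self.state

smoother = OutputSmoother(alpha=0.3)
a = smoother(np.full((2, 2), 10.0))     # → 10.0 everywhere
b = smoother(np.full((2, 2), 20.0))     # 0.3*20 + 0.7*10 = 13.0
```

This treats the symptom rather than the cause; making the network itself less sensitive to small input perturbations would need changes to training (e.g. a temporal-consistency term).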