jamriska / ebsynth

Fast Example-based Image Synthesis and Style Transfer
https://ebsynth.com

Image sequence #5

Open jounih opened 5 years ago

jounih commented 5 years ago

What parameters do I need to use in the CLI to run this on an image sequence? The precompiled tool on ebsynth.com is great, but I'd like to use the CLI version with CUDA.

babon commented 5 years ago

Also interested in this

lazerozen commented 5 years ago

Yes, please enlighten us, I could really use this for art!

aparmar2000 commented 5 years ago

I, too, would greatly appreciate this.

OndrejTexler commented 4 years ago

Hello all! It is a complicated question.

ebsynth.com is a much more complex framework for video stylization. (This is not 100% accurate, but roughly speaking, ebsynth.com uses this repo to do frame-by-frame stylization, plus some other tricks to keep the video consistent in time.)

What you can do is check this paper: https://dcgi.fel.cvut.cz/home/sykorad/Jamriska19-SIG.pdf. Figure 3 will give you some notion of which guide images you need for video stylization, so you can run some experiments. But it would probably be very hard to get results of similar quality to ebsynth.com.
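For anyone who just wants to try plain frame-by-frame stylization from the command line, here is a minimal sketch. It assumes the `-style` / `-guide` / `-weight` / `-backend` / `-output` options from this repo's readme, plus a placeholder layout (`frames/`, `out/`, a painted keyframe); the binary path and file names are assumptions, and none of the temporal-consistency tricks from the paper or ebsynth.com are included here.

```python
import subprocess
from pathlib import Path

# Placeholder layout -- adjust to your own project:
#   keyframe_style.png : the stylized (painted) keyframe
#   frames/0000.png    : the original video frame the keyframe was painted from
#   frames/*.png       : the remaining video frames to stylize
STYLE = "keyframe_style.png"
SOURCE_FRAME = "frames/0000.png"
OUT_DIR = Path("out")
OUT_DIR.mkdir(exist_ok=True)

for frame in sorted(Path("frames").glob("*.png")):
    # One guide pair: unstyled keyframe -> current target frame.
    # Extra guides (edges, positional/segmentation maps, ...) can be added
    # as further "-guide <source> <target> -weight <w>" groups, as in Fig. 3
    # of the Jamriska et al. 2019 paper.
    cmd = [
        "./bin/ebsynth",            # path to the CLI binary you built (assumed location)
        "-style", STYLE,
        "-guide", SOURCE_FRAME, str(frame),
        "-weight", "1.0",
        "-backend", "cuda",
        "-output", str(OUT_DIR / frame.name),
    ]
    subprocess.run(cmd, check=True)
```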

lazerozen commented 4 years ago

Thanks a lot for the paper. And yes, generating these without help is surely not viable.

balmacefa commented 4 years ago

Did anyone come up with a solution? @OndrejTexler, can you provide us with the Windows executable code so we can create a CLI solution?

OndrejTexler commented 4 years ago

@balmacefa To the best of my knowledge, there is currently no open-source/CLI implementation available that I can share with you.

revereche commented 4 years ago

> Did anyone come up with a solution? @OndrejTexler, can you provide us with the Windows executable code so we can create a CLI solution?

The readme gives some guide image examples. The grayscale and mesh deformation aren't too hard (in another thread, Ondrej said the mesh doesn't need to be exact, but if you do want something more precise it shouldn't be hard to set this up with some modifiers in Blender). For the image segmentation, you can use OpenCV: https://docs.opencv.org/master/d3/db4/tutorial_py_watershed.html
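Here is a minimal Python sketch of that watershed approach, following the linked OpenCV tutorial; the file names and the per-label recolouring at the end are assumptions for illustration, not anything ebsynth itself requires:

```python
import cv2
import numpy as np

# Rough segmentation guide via watershed (per the OpenCV tutorial linked above).
img = cv2.imread("frames/0000.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Otsu threshold, then estimate sure background / sure foreground regions.
_, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
kernel = np.ones((3, 3), np.uint8)
opening = cv2.morphologyEx(thresh, cv2.MORPH_OPEN, kernel, iterations=2)
sure_bg = cv2.dilate(opening, kernel, iterations=3)
dist = cv2.distanceTransform(opening, cv2.DIST_L2, 5)
_, sure_fg = cv2.threshold(dist, 0.7 * dist.max(), 255, 0)
sure_fg = np.uint8(sure_fg)
unknown = cv2.subtract(sure_bg, sure_fg)

# Label the markers and run watershed (boundaries come back as -1).
_, markers = cv2.connectedComponents(sure_fg)
markers = markers + 1
markers[unknown == 255] = 0
markers = cv2.watershed(img, markers)

# Map each label to a flat colour so the guide is a colour-coded segmentation.
rng = np.random.default_rng(0)
colors = rng.integers(0, 255, size=(markers.max() + 2, 3), dtype=np.uint8)
guide = colors[np.clip(markers, 0, None)]
cv2.imwrite("guides/0000_seg.png", guide)
```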

Of course, the generated image segmentation might be too messy for your needs. In that case, Blender's camera tracking should let you pin custom shapes and colors over the important bits, and you can blur them to be subtler later.

b4zz4 commented 3 years ago

I wrote this tutorial. It has some slight flicker, but the result is quite similar to the interface's output: https://4232.cf/staying-fluent-with-ebsynth-de-flicker/