VSAnimator / Sketch-a-Sketch

Controlling diffusion-based image generation with just a few strokes
https://vsanimator.github.io/sketchasketch/
MIT License

What is the timeline for code release? #1

Closed by ninjasaid2k 11 months ago

ninjasaid2k commented 11 months ago

Can you tell me whether you're planning to release the code?

VSAnimator commented 11 months ago

The inference code is present in demo.py and in the Colab notebook.

To train your own ControlNet for Sketch-a-Sketch:

1. Run HED edge detection on a text-image dataset (I used 50,000 images from https://huggingface.co/datasets/laion/laion-art).
2. Vectorize the edge maps with https://github.com/MarkMoHR/virtual_sketching.
3. Delete random subsets of strokes from each vectorized image.
4. Train with the standard diffusers ControlNet training script (https://github.com/huggingface/diffusers/blob/main/examples/controlnet/train_controlnet.py).
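Step 3 is the only nonstandard piece of the pipeline. Here is a minimal sketch of what "delete random subsets of strokes" could look like, assuming a simplified representation where a vectorized sketch is a list of strokes and each stroke is a list of (x, y) points (virtual_sketching's actual output format differs; the helper name is hypothetical):

```python
import random

def drop_random_strokes(strokes, keep_prob=None, rng=None):
    """Return a partial sketch with a random subset of strokes removed.

    strokes: list of strokes, each a list of (x, y) points.
    keep_prob: probability of keeping each stroke; if None, a fresh
        probability is drawn per call so training sees sketches ranging
        from nearly empty to nearly complete.
    """
    rng = rng or random.Random()
    keep = keep_prob if keep_prob is not None else rng.random()
    kept = [s for s in strokes if rng.random() < keep]
    # Keep at least one stroke so the conditioning image is never blank.
    if not kept and strokes:
        kept = [rng.choice(strokes)]
    return kept

# Toy example: a "sketch" with four strokes.
sketch = [[(0, 0), (1, 1)], [(1, 0), (2, 2)],
          [(3, 3), (4, 4)], [(5, 5), (6, 6)]]
partial = drop_random_strokes(sketch, keep_prob=0.5, rng=random.Random(0))
```

Each partial sketch is then rendered back to an image and paired with the original photo and caption as a ControlNet conditioning example.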

I'll put the dataset itself on HuggingFace soon. Not sure yet when I'm releasing the actual dataset-creation script.