[2024.07.01] - Inference code is now available.
[2024.07.01] - Hugging Face Online demo is available here!
[2024.06.30] - Our Online demo is available here!
The figure illustrates the overall framework of our method, covering data collection, the training pipeline, and the inference pipeline.

In the data collection phase, we leveraged the open-source CapOnImage2M dataset and selected a subset of 1M images. For each selected image, we employed a vision-language model (e.g., CogVLM) to generate textual descriptions, thereby obtaining prompts associated with the images. We then applied the Canny algorithm to extract edges from the text regions of each image, producing a canny map.

The training pipeline comprises three primary components: the latent diffusion module, the Font ControlNet module, and the loss design module. More precisely, during training, the raw image, the canny map, and the prompt are fed into the Variational Autoencoder (VAE), Font ControlNet, and the text encoder, respectively. The loss function is split into two parts: a latent-space term and a pixel-space term. In the latent space, we use the standard Latent Diffusion Model loss $L_{LDM}$ as defined in the original paper. The latent features are then decoded back into images via the VAE decoder. In the pixel space, the text regions of both the predicted and the ground-truth images are cropped and passed through an OCR model independently. We extract the convolutional-layer features from the OCR model and compute the Mean Squared Error (MSE) between the features at each layer, which constitutes the loss $L_{ocr}$.

During the inference phase, the image prompt is fed into the text encoder, while the textual content and the specified text-rendering regions are fed into Font ControlNet. The final image is then generated by the VAE decoder.
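To make the canny-map preparation and the pixel-space loss more concrete, here is a minimal, unofficial sketch (not taken from the released code). It assumes text boxes are given as (x1, y1, x2, y2) pixel coordinates, that ocr_backbone is any convolutional OCR recognizer returning a list of intermediate feature maps, and it uses placeholder Canny thresholds; the names text_region_canny_map and ocr_feature_loss are hypothetical.

import cv2
import numpy as np
import torch
import torch.nn.functional as F

def text_region_canny_map(image_bgr, text_boxes, low=100, high=200):
    """Keep Canny edges only inside the annotated text boxes (data collection step)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, low, high)
    mask = np.zeros_like(edges)
    for x1, y1, x2, y2 in text_boxes:
        mask[y1:y2, x1:x2] = 255
    return cv2.bitwise_and(edges, mask)

def ocr_feature_loss(pred_img, gt_img, text_boxes, ocr_backbone):
    """Pixel-space loss L_ocr: per-layer MSE between OCR conv features of text crops."""
    loss = pred_img.new_zeros(())
    for x1, y1, x2, y2 in text_boxes:
        pred_crop = pred_img[..., y1:y2, x1:x2]
        gt_crop = gt_img[..., y1:y2, x1:x2]
        pred_feats = ocr_backbone(pred_crop)   # list of intermediate conv feature maps
        with torch.no_grad():
            gt_feats = ocr_backbone(gt_crop)   # ground-truth features, no gradient
        loss = loss + sum(F.mse_loss(p, g) for p, g in zip(pred_feats, gt_feats))
    return loss / max(len(text_boxes), 1)

In training, this pixel-space term is combined with the latent-space term $L_{LDM}$; the relative weighting of the two terms is omitted from this sketch.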
# Initialize a conda environment
conda create -n joytype python=3.9
conda activate joytype
# Clone the JoyType repo
git clone ...
cd JoyType
# Install requirements
pip install -r requirements.txt
[Recommended]: We have already released demos on JDHealth and Hugging Face!
You can run inference with the following command:
python infer.py --prompt "a card" --input_yaml examples/test.yaml --img_name test
You can see more arguments by running:
python infer.py --help
Please note that the model will be pulled from Hugging Face by default. If you want to load it locally, please pre-download the model from here and set the --load_path argument.
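For example, a local run might look like this (the model directory below is just a placeholder):
python infer.py --prompt "a card" --input_yaml examples/test.yaml --img_name test --load_path /path/to/local/model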