🦜 EasyPhoto: a tool for generating AI portraits from a trained digital doppelganger of you.
🦜 🦜 Welcome!
English | 简体中文
EasyPhoto trains a digital doppelganger of you from your photos and then generates AI portraits with it. Training works best with 5 to 20 portrait images, preferably half-body photos in which the subject is not wearing glasses (it is fine if a few of the pictures include glasses). Once training is done, portraits can be generated in the Inference section, using either the preset template images or templates you upload yourself.
Please read our Contributor Covenant: covenant | 简体中文.
What's New:
These are our generated results:
Our UI is as follows:
Training part:
Inference part:
We have verified EasyPhoto execution on the following environments:
Details for Windows 10:
Details for Linux:
About 60 GB of free disk space is needed (for saving weights and for dataset processing); please check before you start.
# Download and Installation
git clone https://github.com/aigc-apps/EasyPhoto.git
cd EasyPhoto
pip install -r requirements.txt
# launch tool
python app.py
# pull image
docker pull registry.cn-shanghai.aliyuncs.com/pai-ai-test/eas-service:easyphoto-diffusers-py310-torch201-cu117
# enter image
docker run -it --network host --gpus all registry.cn-shanghai.aliyuncs.com/pai-ai-test/eas-service:easyphoto-diffusers-py310-torch201-cu117
# launch
python app.py
The EasyPhoto training interface is as follows:
After clicking Upload Photos, we can start uploading images. It is best to upload 5 to 20 images here, covering different angles and lighting conditions, and to include some images without glasses; if every photo shows glasses, the generated results will tend to include glasses as well.
Then fill in the User ID above (for example, the user's name) and click "Start Training" below to begin training.
After the model starts training, the webui automatically refreshes the training log. If it does not refresh, click the Refresh Log button.
If you want to adjust the training parameters, each parameter is explained below:
Parameter Name | Meaning |
---|---|
Resolution | The size of the image fed into the network during training, with a default value of 512 |
Validation & save steps | The interval, in steps, at which validation images are generated and intermediate weights are saved; the default value of 100 means the model is validated and its weights saved every 100 steps |
Max train steps | Maximum number of training steps, default value is 800 |
Max steps per photos | The maximum number of training steps per image, default is 200 |
Train batch size | The batch size of the training, with a default value of 1 |
Gradient accumulation steps | The number of gradient accumulation steps, default is 4. Combined with a train batch size of 1, each optimizer step is equivalent to feeding four images |
Dataloader num workers | The number of worker processes for data loading. This setting has no effect on Windows (setting it there raises an error) but works normally on Linux |
Learning rate | The learning rate for LoRA training, default is 1e-4 |
Rank Lora | The rank (feature dimension) of the LoRA weights, default is 128 |
Network alpha | The scaling/regularization parameter for LoRA training, usually half of the rank; default is 64 |
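As a quick illustration of how these defaults interact, here is a minimal sketch with placeholder names (not EasyPhoto's actual training entry point):

# Minimal sketch: collect the defaults above and derive the effective batch
# size and the overall step budget. All names are illustrative only.
train_config = {
    "resolution": 512,
    "validation_and_save_steps": 100,
    "max_train_steps": 800,
    "max_steps_per_photo": 200,
    "train_batch_size": 1,
    "gradient_accumulation_steps": 4,
    "dataloader_num_workers": 0,   # keep 0 on Windows; larger values are fine on Linux
    "learning_rate": 1e-4,
    "lora_rank": 128,
    "network_alpha": 64,           # usually half of the rank
}

num_photos = 10  # e.g. the user uploaded 10 images
effective_batch = train_config["train_batch_size"] * train_config["gradient_accumulation_steps"]
total_steps = min(train_config["max_train_steps"],
                  num_photos * train_config["max_steps_per_photo"])
print(f"effective batch size: {effective_batch}, training steps: {total_steps}")

With 10 photos, the per-photo cap (10 × 200 = 2000) exceeds Max train steps, so training stops at the default 800 steps.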
In the field of AI portraits, we expect the generated images to be both realistic and faithful to the user, whereas traditional approaches such as face fusion or Roop tend to introduce unrealistic lighting. To address this, we introduce the image-to-image capability of the Stable Diffusion model. Generating a good personal portrait requires both the desired generation scene and the user's digital doppelganger. We use a pre-prepared template as the desired generation scene and an online-trained face LoRA model as the user's digital doppelganger (LoRA is a popular Stable Diffusion fine-tuning method). A small number of user images is enough to train a stable digital doppelganger of the user, and at inference time we generate a personal portrait from the face LoRA model and the expected generation scene.
First, we perform face detection on each input user image; after locating the face, we crop the input image at a fixed ratio around it. We then use a saliency detection model and a skin beautification model to obtain a clean face training image that consists essentially of the face only. Each image is then given the same fixed label; no captioning model is needed here, and the results are good. Finally, we fine-tune the Stable Diffusion model to obtain the user's digital doppelganger.
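A rough sketch of this preprocessing is shown below. It is illustrative only: OpenCV's Haar-cascade detector stands in for EasyPhoto's face-detection model, the saliency and skin-beautification steps are left as comments, and the caption string is an assumption.

# Rough preprocessing sketch: detect the face, crop around it at a fixed
# ratio, resize, and attach a fixed caption.
import cv2

def detect_largest_face(img_bgr):
    # Stand-in face detector; EasyPhoto uses its own face-detection model.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY), 1.1, 5)
    return max(faces, key=lambda f: f[2] * f[3]) if len(faces) else None

def prepare_training_image(path, expand_ratio=1.5, size=512):
    img = cv2.imread(path)
    box = detect_largest_face(img)
    if box is None:
        return None
    x, y, w, h = box
    cx, cy = x + w // 2, y + h // 2
    half = int(max(w, h) * expand_ratio / 2)
    crop = img[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    # Here EasyPhoto additionally runs saliency detection and skin
    # beautification so the training image contains essentially only a clean face.
    crop = cv2.resize(crop, (size, size))
    caption = "a photo of a person"  # fixed label; no captioning model is needed
    return crop, caption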
During training, we use the template image for real-time validation, and at the end of training we compute the face ID gap between the validation images and the user's images to perform LoRA fusion, which ensures that the fused LoRA is a faithful digital doppelganger of the user.
In addition, we choose the validation image most similar to the user as the face_id image, which is used during Inference.
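Conceptually, the face_id selection and LoRA fusion can be sketched as below. This is an illustration only: it assumes face embeddings from any off-the-shelf face-recognition model and a simple softmax weighting scheme, not EasyPhoto's exact method.

import numpy as np

def select_face_id_image(user_embs, val_embs, val_paths):
    # user_embs: (N, D) embeddings of the user's photos; val_embs: (M, D)
    # embeddings of the validation images from the same face model.
    ref = user_embs.mean(axis=0)
    ref = ref / np.linalg.norm(ref)
    val = val_embs / np.linalg.norm(val_embs, axis=1, keepdims=True)
    scores = val @ ref                 # cosine similarity; higher = smaller face ID gap
    best = int(np.argmax(scores))
    return val_paths[best], scores

def fuse_lora_checkpoints(state_dicts, scores):
    # Weighted average of intermediate LoRA checkpoints, weighting each by how
    # closely its validation image matches the user (illustrative scheme).
    weights = np.exp(scores) / np.exp(scores).sum()
    return {key: sum(w * sd[key] for w, sd in zip(weights, state_dicts))
            for key in state_dicts[0]}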
First, we perform face detection on the incoming template image to determine the mask that Stable Diffusion needs to inpaint. We then perform face fusion between the template image and the optimal user image; once face fusion is complete, we inpaint the face-fused image (fusion_image) using the mask above. In addition, we paste the optimal face_id image obtained during training onto the template image with an affine transformation (replaced_image). We then apply ControlNets: canny and color to extract features from fusion_image, and openpose from replaced_image, to ensure similarity and stability between the images. Finally, we run Stable Diffusion combined with the user's digital doppelganger to generate the image.
After obtaining the result of the First Diffusion, we perform face fusion between that result and the optimal user image, and then run Stable Diffusion again with the user's digital doppelganger. This second generation uses a higher resolution.
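The two diffusion stages can be sketched with the open-source diffusers library as follows. This is a simplified illustration, not EasyPhoto's actual code: the color ControlNet is omitted, the face-fusion result, inpainting mask, and openpose map are assumed to be produced by separate models, and the model IDs, file names, and resolutions are examples.

import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetInpaintPipeline

def canny_image(pil_img, low=100, high=200):
    # Canny edge map, replicated to three channels for the ControlNet input.
    gray = cv2.cvtColor(np.array(pil_img.convert("RGB")), cv2.COLOR_RGB2GRAY)
    edges = cv2.Canny(gray, low, high)
    return Image.fromarray(np.stack([edges] * 3, axis=-1))

# Canny ControlNet for fusion_image and openpose ControlNet for replaced_image.
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("./user_lora")  # the trained digital doppelganger (example path)

fusion_image = Image.open("fusion_image.png").convert("RGB")  # template fused with the best user face
face_mask = Image.open("face_mask.png").convert("L")          # inpainting mask from face detection
pose_image = Image.open("replaced_pose.png").convert("RGB")   # openpose map of replaced_image

# First Diffusion: inpaint the face region, guided by the two ControlNets.
first = pipe(
    prompt="portrait photo, high quality",
    image=fusion_image,
    mask_image=face_mask,
    control_image=[canny_image(fusion_image), pose_image],
    num_inference_steps=30,
).images[0]

# Second Diffusion: face fusion with the optimal user image would happen here
# (omitted), then the generation is repeated at a higher resolution.
hires = (768, 768)
second = pipe(
    prompt="portrait photo, high quality",
    image=first.resize(hires),
    mask_image=face_mask.resize(hires),
    control_image=[canny_image(first.resize(hires)), pose_image.resize(hires)],
    num_inference_steps=30,
    strength=0.65,
).images[0]
second.save("easyphoto_result.png")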
We also list some great open-source projects and extensions you may be interested in:
This project is licensed under the Apache License (Version 2.0).