menyifang / DCT-Net

Official implementation of "DCT-Net: Domain-Calibrated Translation for Portrait Stylization", SIGGRAPH 2022 (TOG); Multi-style cartoonization
Apache License 2.0

Design style result not good enough at 10,000 steps #41

Open johndzengpft opened 1 year ago

johndzengpft commented 1 year ago

Hi, I have trained the design style following these steps with the default config:

  1. Run python generate_data.py --style design
  2. Run python extract_align_faces.py and pick 200 style images
  3. Train the content calibration network
  4. Generate content-calibrated samples
  5. Run geometry calibration for both photo and cartoon
  6. Run python train_localtoon.py with total 19962 source images & 79832 style images
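Step 2's "pick 200 style images" can be seeded with a random sample before manual inspection. A minimal sketch (the helper name, folder layout, and `.png` glob are assumptions, not part of the repo):

```python
import random
from pathlib import Path

def pick_style_images(src_dir, dst_dir, n=200, seed=0):
    """Randomly sample n aligned style crops as a starting set.

    Manual picking should still follow: drop low-quality or
    off-style crops and re-sample until n good images remain.
    """
    files = sorted(Path(src_dir).glob("*.png"))
    if len(files) < n:
        raise ValueError(f"only {len(files)} crops available, need {n}")
    random.seed(seed)
    picked = random.sample(files, n)
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for f in picked:  # copy the sampled crops into the picked folder
        (dst / f.name).write_bytes(f.read_bytes())
    return picked
```

Sampling with a fixed seed keeps the picked subset reproducible across reruns of the data-preparation step.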

The result at 10,000 steps is not as good as the official pre-trained model: 9999_face_result

Did I do any step wrong? Thank you very much!

menyifang commented 1 year ago

First, ensure the stylized images fed to the CCN are high-quality and as style-faithful as possible; manual picking may help. Second, more steps may be needed for sufficient training; we use 30w (300k) steps on average.
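Whether 300k steps is practical is easy to estimate from a short timing run. A back-of-envelope sketch (the seconds-per-step figure is a placeholder to measure on your own GPU, not a number from this thread):

```python
def estimate_train_hours(total_steps, sec_per_step):
    """Wall-clock estimate assuming a roughly constant per-step cost."""
    return total_steps * sec_per_step / 3600.0

# e.g. 300k steps at an assumed 0.5 s/step:
hours = estimate_train_hours(300_000, 0.5)  # about 41.7 hours
```

Timing a few hundred steps and plugging the measured average into `sec_per_step` gives a much better estimate than any fixed guess.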

goldwater668 commented 1 year ago

@menyifang Excuse me, I have a few questions:

  1. If I want to train at a resolution of 1024×1024, do I need 30w epochs, and how long does training take on one GPU?
  2. If training is interrupted, how do I resume from the interruption and continue training?
  3. The following is the training loss; I don't know whether it meets expectations? (screenshot: 2023-03-30 17-29-40)
  4. Training was interrupted, and I changed resume_epoch in the config file to start from 42999. After another 20k steps the loss has not changed. Is this normal? (screenshot: 2023-03-30 17-34-16)
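A loss that looks frozen after resuming often means only the step counter was restored while the weights and optimizer state were not. A generic save/restore sketch (this is not DCT-Net's actual checkpoint format; the function names and JSON layout are illustrative):

```python
import json
from pathlib import Path

def save_ckpt(path, step, model_state, optim_state):
    """Persist everything needed to resume: step counter, weights, optimizer."""
    Path(path).write_text(json.dumps(
        {"step": step, "model": model_state, "optim": optim_state}))

def load_ckpt(path):
    """Restore all three pieces together.

    Editing only resume_epoch in a config restores just the counter;
    stale weights/optimizer state can make the loss curve look flat.
    """
    ckpt = json.loads(Path(path).read_text())
    return ckpt["step"], ckpt["model"], ckpt["optim"]
```

The key point is that the step counter, model weights, and optimizer state must be saved and loaded as one unit for training to continue from where it stopped.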