IrisRainbowNeko / HCP-Diffusion

A universal Stable-Diffusion toolbox
Apache License 2.0

How to configure LoRA and textual inversion training at the same time? #31

Closed jiafengshen closed 9 months ago

IrisRainbowNeko commented 10 months ago

Inherit the LoRA config file and add the embedding training settings:

_base_:
  - cfgs/train/examples/lora_conventional.yaml
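The `_base_` list pulls in the LoRA config, and keys defined below override the inherited values. A minimal sketch of that override behavior (the `deep_merge` helper is hypothetical, not HCP-Diffusion's actual config loader):

```python
def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge override into base; override wins on conflicts."""
    merged = dict(base)
    for key, value in override.items():
        if key in merged and isinstance(merged[key], dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged

# Child config only needs the keys it changes; the rest comes from _base_
base = {'train': {'lr': 1e-4, 'steps': 1000}}
child = {'train': {'lr': 5e-5}, 'tokenizer_pt': {'train': [{'name': 'pt-cat1'}]}}
cfg = deep_merge(base, child)
# cfg['train'] keeps 'steps' from base but takes 'lr' from the child
```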

# The embedding trained here must be created in advance; configure it in
# word_names under data so it is filled into the prompt.
tokenizer_pt:
  train: # prompt tuning embeddings
    - { name: 'pt-cat1', lr: 0.0025 }
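As the comment above says, `pt-cat1` must exist before training starts (HCP-Diffusion ships a creation tool for this). Conceptually, creating a prompt-tuning embedding just means initializing and saving a small learnable tensor. A rough sketch of that idea; the `embs/` directory, the 2-token length, and the 768-dim size are assumptions for illustration, not HCP-Diffusion's actual on-disk format:

```python
import os
import torch

def create_embedding(name: str, n_word: int = 2, dim: int = 768,
                     out_dir: str = 'embs') -> str:
    """Initialize one small random vector per pseudo-word and save it."""
    os.makedirs(out_dir, exist_ok=True)
    emb = torch.randn(n_word, dim) * 0.01  # small init keeps early prompts stable
    path = os.path.join(out_dir, f'{name}.pt')
    torch.save(emb, path)
    return path
```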

# Same as in lora_conventional.yaml; only override the parts you need to change
data:
  dataset1:
    batch_size: 4
    cache_latents: True

    source:
      data_source1:
        img_root: 'imgs/'
        prompt_template: 'prompt_tuning_template/object.txt'
        caption_file: null # path to image captions (file_words)

        word_names:
          pt1: pt-cat1

    bucket:
      _target_: hcpdiff.data.bucket.RatioBucket.from_files # aspect ratio bucket
      target_area: ${hcp.eval:"512*512"}
      num_bucket: 5
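The `word_names` mapping fills the trainable word into the prompt template: each placeholder key (here `pt1`) is replaced by its embedding name (`pt-cat1`). A rough illustration of that substitution; the `{pt1}`-style placeholder syntax and the template string are assumptions for illustration:

```python
def fill_template(template: str, word_names: dict) -> str:
    """Replace each {key} placeholder with its configured word."""
    for key, word in word_names.items():
        template = template.replace('{' + key + '}', word)
    return template

prompt = fill_template('a photo of a {pt1}', {'pt1': 'pt-cat1'})
# → 'a photo of a pt-cat1'
```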

The embedding's lr_scheduler can also be configured on its own, independent of the LoRA part:

train:
  scheduler_pt:
    name: 'constant_with_warmup'
    num_warmup_steps: 50
    num_training_steps: 1000
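`constant_with_warmup` ramps the learning rate linearly from 0 to the configured value over the warmup steps, then holds it constant. A sketch of that multiplier, mirroring the common diffusers-style schedule (the exact implementation in HCP-Diffusion may differ):

```python
def lr_lambda(step: int, num_warmup_steps: int = 50) -> float:
    """Multiplier applied to the base lr at a given step."""
    if step < num_warmup_steps:
        return step / max(1, num_warmup_steps)  # linear warmup to 1.0
    return 1.0  # constant afterwards

# Effective embedding lr at step 25: 0.0025 * lr_lambda(25) = 0.00125
```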