Could you post your training configs for more details? There do seem to be some alterations between the before/after-training samples; you could try a smaller `la_strength` (the weight of the latent-anchoring loss, which preserves non-target concepts) so that the erasing objective is weighted more heavily.
I used the default Pikachu config from the repo. Maybe it is a `guidance_scale` or `la_strength` problem.
config.yaml

```yaml
prompts_file: "configs/pikachu/prompt.yaml"
pretrained_model:
  name_or_path: "tac_anime.safetensors"
  v2: false
  v_pred: false
  clip_skip: 1
network:
  rank: 32
  alpha: 1.0
train:
  precision: bf16
  noise_scheduler: "ddim"
  iterations: 2000
  batch_size: 1
  lr: 0.0002
  unet_lr: 0.0002
  text_encoder_lr: 1e-05
  optimizer_type: "AdamW"
  lr_scheduler: "constant"
  lr_warmup_steps: 0
  lr_scheduler_num_cycles: 1
  max_denoising_steps: 50
save:
  name: "pikachu"
  path: "output/pikachu"
  per_steps: 500
  precision: bf16
logging:
  use_wandb: false
  interval: 0
  seed: 0
  generate_num: 2
  run_name: "pikachu"
  verbose: false
  prompts: ['pikachu', '', 'dog', 'mickey', 'woman']
other:
  use_xformers: true
```
prompt.yaml

```yaml
- target: "pikachu"
  positive: "pikachu"
  unconditional: ""
  neutral: ""
  action: "erase_with_la"
  guidance_scale: 4
  resolution: 512
  batch_size: 1
  dynamic_resolution: True
  la_strength: 1000
  sampling_batch_size: 1
```
Thank you for replying. I changed `la_strength` to 10 and it works well; the change is the single line shown below.
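For reference, the working setup differs from the posted prompt.yaml only in the `la_strength` value; a smaller value weakens the anchoring term relative to the erasing term:

```yaml
- target: "pikachu"
  positive: "pikachu"
  unconditional: ""
  neutral: ""
  action: "erase_with_la"
  guidance_scale: 4
  resolution: 512
  batch_size: 1
  dynamic_resolution: True
  la_strength: 10   # was 1000; a smaller value weights erasing more heavily
  sampling_batch_size: 1
```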
Any suggestions for an even stronger erasing effect? If I use the SPM on a prompt containing other objects plus pikachu, they get mixed together instead of only pikachu being erased.
With our posted inference code, the Facilitated Transport (FT) mechanism is used to avoid this phenomenon. If you are using sd-webui for generation, you might just try to adjust the `lora_weight` manually. If possible, we will develop an SPM plugin that implements the FT mechanism for the webui in the future.
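For context, here is a minimal sketch of the manual-weight fallback using diffusers rather than the repo's own FT inference code. It assumes the trained SPM has been converted to a standard LoRA checkpoint; the base model ID, file path, prompt, and scale value are all placeholders:

```python
# Sketch only: applies an erasing adapter as a plain LoRA at reduced strength.
# This is NOT the repo's FT inference, which gates the adapter per prompt.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Placeholder path: a trained SPM exported to webui/diffusers LoRA format.
pipe.load_lora_weights("output/pikachu", weight_name="pikachu.safetensors")

# A fixed global scale applies the adapter to every prompt, so unrelated
# concepts ("dog") can also be altered; lowering the scale reduces that
# bleed-through at the cost of weaker erasing. FT instead modulates the
# adapter according to how similar the prompt is to the erased target.
image = pipe(
    "a dog and pikachu in a park",
    cross_attention_kwargs={"scale": 0.7},  # manual lora_weight
).images[0]
image.save("sample.png")
```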
Tested with both train_xl and train_xl_mem_reduce.