Closed Hyllite closed 10 months ago
In section 6.3 (Inference) you can use a LoRA to test your trained LoRA:
```python
# @title ## 6.3. Inference
%store -r

# @markdown ### LoRA Config
# @markdown Currently, `LoHa` and `LoCon_Lycoris` are not supported. Please run `Portable Web UI` instead.
network_weight = ""  # @param {'type':'string'}
network_mul = 0.7  # @param {type:"slider", min:-1, max:2, step:0.05}
network_module = "networks.lora"
network_args = ""

# @markdown ### <br> General Config
v2 = False  # @param {type:"boolean"}
v_parameterization = False  # @param {type:"boolean"}
prompt = "masterpiece, best quality, 1girl, aqua eyes, baseball cap, blonde hair, closed mouth, earrings, green background, hat, hoop earrings, jewelry, looking at viewer, shirt, short hair, simple background, solo, upper body, yellow shirt"  # @param {type: "string"}
negative = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry"  # @param {type: "string"}
model = "/content/pretrained_model/AnyLoRA.safetensors"  # @param {type: "string"}
vae = ""  # @param {type: "string"}
outdir = "/content/tmp"  # @param {type: "string"}
scale = 7  # @param {type: "slider", min: 1, max: 40}
sampler = "ddim"  # @param ["ddim", "pndm", "lms", "euler", "euler_a", "heun", "dpm_2", "dpm_2_a", "dpmsolver", "dpmsolver++", "dpmsingle", "k_lms", "k_euler", "k_euler_a", "k_dpm_2", "k_dpm_2_a"]
steps = 28  # @param {type: "slider", min: 1, max: 100}
precision = "fp16"  # @param ["fp16", "bf16"] {allow-input: false}
width = 512  # @param {type: "integer"}
height = 768  # @param {type: "integer"}
images_per_prompt = 4  # @param {type: "integer"}
batch_size = 4  # @param {type: "integer"}
clip_skip = 2  # @param {type: "slider", min: 1, max: 40}
seed = -1  # @param {type: "integer"}

final_prompt = f"{prompt} --n {negative}"

config = {
    "v2": v2,
    "v_parameterization": v_parameterization,
    "network_module": network_module,
    "network_weight": network_weight,
    "network_mul": float(network_mul),
    "network_args": eval(network_args) if network_args else None,
    "ckpt": model,
    "outdir": outdir,
    "xformers": True,
    "vae": vae if vae else None,
    "fp16": True,
    "W": width,
    "H": height,
    "seed": seed if seed > 0 else None,
    "scale": scale,
    "sampler": sampler,
    "steps": steps,
    "max_embeddings_multiples": 3,
    "batch_size": batch_size,
    "images_per_prompt": images_per_prompt,
    "clip_skip": clip_skip if not v2 else None,
    "prompt": final_prompt,
}

# Build the CLI string: strings are quoted, True booleans become bare flags,
# numbers are passed as --key=value, and None/False values are skipped.
args = ""
for k, v in config.items():
    if k.startswith("_"):
        args += f'"{v}" '
    elif isinstance(v, str):
        args += f'--{k}="{v}" '
    elif isinstance(v, bool) and v:
        args += f"--{k} "
    elif isinstance(v, float) and not isinstance(v, bool):
        args += f"--{k}={v} "
    elif isinstance(v, int) and not isinstance(v, bool):
        args += f"--{k}={v} "

final_args = f"python gen_img_diffusers.py {args}"

os.chdir(repo_dir)
!{final_args}
```
How do I add a second `network_weight` there? According to the kohya readme you can do that by passing `networks.lora` more than once, but I don't know how to do it in this cell.
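If the script does take repeated network modules as the readme describes, one way would be to bypass the single-value `config` keys and append space-separated lists to the argument string instead. This is only a sketch: the file paths below are hypothetical, and it assumes `gen_img_diffusers.py` accepts space-separated values for `--network_module`, `--network_weight`, and `--network_mul` (check `python gen_img_diffusers.py --help` in the repo to confirm the exact flag names before relying on it):

```python
# Sketch: stacking two LoRAs by passing space-separated lists.
# Assumption: gen_img_diffusers.py accepts multiple values per network flag.
# The .safetensors paths are placeholders; point them at your own files.
network_weights = [
    "/content/LoRA/first_lora.safetensors",   # hypothetical path
    "/content/LoRA/second_lora.safetensors",  # hypothetical path
]
network_muls = [0.7, 0.5]                      # one strength per LoRA
network_modules = ["networks.lora"] * len(network_weights)

# Append these instead of the single network_* entries in `config`.
args = ""
args += "--network_module " + " ".join(network_modules) + " "
args += "--network_weight " + " ".join(network_weights) + " "
args += "--network_mul " + " ".join(str(m) for m in network_muls) + " "

print(args)
```

The rest of the cell (the `config` loop and the `final_args` line) would stay the same; you would just remove `network_module`, `network_weight`, and `network_mul` from the `config` dict so they are not emitted twice.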