Extraltodeus / multi-subject-render

Generate multiple complex subjects all at once!

AttributeError: 'Block' object has no attribute 'drop_path' #54

Open zethriller opened 1 year ago

zethriller commented 1 year ago

Note: this may just be me not knowing how to use it, so please explain if needed. This is a very basic test, and I also haven't found out how to position the foreground items.

Testing the extension with a background + 2 foreground characters.
Model: dynavisionXL, image size 832x1216.
Settings: (see attached screenshot)

After generating the background and the two foreground images correctly, the preview disappears and an error shows up instead: "AttributeError: 'Block' object has no attribute 'drop_path'"

Traceback:

Traceback (most recent call last):
      File "F:\automatic1111\stable-diffusion-webui\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "F:\automatic1111\stable-diffusion-webui\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\modules\txt2img.py", line 52, in txt2img
        processed = modules.scripts.scripts_txt2img.run(p, *args)
      File "F:\automatic1111\stable-diffusion-webui\modules\scripts.py", line 601, in run
        processed = script.run(p, *script_args)
      File "F:\automatic1111\stable-diffusion-webui\extensions\multi-subject-render\scripts\multirender.py", line 267, in run
        foreground_image_mask = sdmg.calculate_depth_map_for_waifus(foreground_image)
      File "F:\automatic1111\stable-diffusion-webui\extensions/multi-subject-render/scripts/simple_depthmap.py", line 149, in calculate_depth_map_for_waifus
        prediction = model.forward(sample)
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\dpt_depth.py", line 166, in forward
        return super().forward(x).squeeze(dim=1)
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\dpt_depth.py", line 114, in forward
        layers = self.forward_transformer(self.pretrained, x)
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\backbones\beit.py", line 15, in forward_beit
        return forward_adapted_unflatten(pretrained, x, "forward_features")
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\backbones\utils.py", line 86, in forward_adapted_unflatten
        exec(f"glob = pretrained.model.{function_name}(x)")
      File "<string>", line 1, in <module>
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\backbones\beit.py", line 125, in beit_forward_features
        x = blk(x, resolution, shared_rel_pos_bias=rel_pos_bias)
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "F:\automatic1111\stable-diffusion-webui\repositories\midas\midas\backbones\beit.py", line 102, in block_forward
        x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), resolution,
      File "F:\automatic1111\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1614, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'Block' object has no attribute 'drop_path'
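
In case it helps with diagnosis, here is a quick check of which drop-path attributes the installed timm build puts on its Beit blocks (a rough sketch, assuming timm can be imported from the webui's venv; the model name is just an example):

    # Rough diagnostic sketch: list which drop-path attribute names exist on a timm Beit block.
    # Older timm builds appear to use a single `drop_path`; newer ones use `drop_path1` / `drop_path2`.
    import timm

    print(timm.__version__)
    model = timm.create_model("beit_base_patch16_224", pretrained=False)  # any Beit variant works as an example
    block = model.blocks[0]
    print([name for name in ("drop_path", "drop_path1", "drop_path2") if hasattr(block, name)])
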
Ereshkigal0 commented 1 year ago

Having the exact same issue and haven't been able to solve it yet. Would be nice if someone could chime in to tell us noobs what we're doing wrong, or if it's just broken lol

Extraltodeus commented 1 year ago

Ouch. I'm not using A1111 anymore and right now I'm not sure what went wrong. I'm sorry that you guys can't use it. I will take a look into it during the upcoming month if possible!

getsmartt commented 10 months ago

It does function with the MiDaS model, although I can't get a usable image out of it; the script appears to be broken with the other models.

leomaxwell973 commented 8 months ago

Changing the lines of code in automatic1111\repositories\midas\midas\backbones\beit.py, around line 102 in block_forward:

FROM

    if self.gamma_1 is None:
        x = x + self.drop_path(self.attn(self.norm1(x), resolution, shared_rel_pos_bias=shared_rel_pos_bias))
        x = x + self.drop_path(self.mlp(self.norm2(x)))
    else:
        x = x + self.drop_path(self.gamma_1 * self.attn(self.norm1(x), resolution,
                                                        shared_rel_pos_bias=shared_rel_pos_bias))
        x = x + self.drop_path(self.gamma_2 * self.mlp(self.norm2(x)))

TO:

    if self.gamma_1 is None:
        x = x + self.drop_path1(self.attn(self.norm1(x), resolution, shared_rel_pos_bias=shared_rel_pos_bias))
        x = x + self.drop_path2(self.mlp(self.norm2(x)))
    else:
        x = x + self.drop_path1(self.gamma_1 * self.attn(self.norm1(x), resolution,
                                                         shared_rel_pos_bias=shared_rel_pos_bias))
        x = x + self.drop_path2(self.gamma_2 * self.mlp(self.norm2(x)))

(Manually adding the 1/2 suffixes to the drop_path names is really all it takes.)

This seems to brute-force a fix; however, brute-forcing it this way seems to cause a memory leak, as it goes from successful runs to immediate OOM exceptions before it even renders one pre-image.

I honestly don't know a lot of Python, I just know how to follow traces and stacks while guessing syntax along the way, so I have no idea whether the changes I made are just bad or whether this is close to a solution.
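
For anyone who'd rather not hard-code the new names, a less invasive variant of the same idea (only a sketch based on the block_forward signature shown in the traceback above, not tested with this extension) is to look the drop-path module up at runtime so the same code works with both old and new timm builds:

    # Sketch of a version-agnostic block_forward: newer timm builds rename
    # Block.drop_path to drop_path1 / drop_path2, older ones keep a single drop_path,
    # so fall back to whichever attribute actually exists.
    def block_forward(self, x, resolution, shared_rel_pos_bias=None):
        drop_path1 = self.drop_path1 if hasattr(self, "drop_path1") else self.drop_path
        drop_path2 = self.drop_path2 if hasattr(self, "drop_path2") else self.drop_path
        if self.gamma_1 is None:
            x = x + drop_path1(self.attn(self.norm1(x), resolution, shared_rel_pos_bias=shared_rel_pos_bias))
            x = x + drop_path2(self.mlp(self.norm2(x)))
        else:
            x = x + drop_path1(self.gamma_1 * self.attn(self.norm1(x), resolution,
                                                        shared_rel_pos_bias=shared_rel_pos_bias))
            x = x + drop_path2(self.gamma_2 * self.mlp(self.norm2(x)))
        return x
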

Another thought: this is in the beit.py file, i.e. the MiDaS BEiT 512 model (the big one). Perhaps changing to a different model (swin2) will resolve it? That's on my todo list at least, if I cannot fix BEiT.
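
If anyone wants to sanity-check the swin2 route outside the webui first, something along these lines should pull a SwinV2 MiDaS model straight from the MiDaS repo via torch.hub (a rough sketch; the entry name comes from the MiDaS v3.1 hub listing, it is untested with this extension, and it downloads the weights on first run):

    # Rough standalone check of a SwinV2 MiDaS depth model via torch.hub
    import torch

    model = torch.hub.load("intel-isl/MiDaS", "DPT_SwinV2_L_384")  # downloads repo + weights on first run
    model.eval()

    with torch.no_grad():
        dummy = torch.zeros(1, 3, 384, 384)  # native input resolution for this variant
        depth = model(dummy)
    print(depth.shape)  # expect a (1, 384, 384) depth map, with no drop_path patching involved
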