zjy526223908 / TIP-Editor

Official code for TIP-Editor: An Accurate 3D Editor Following Both Text-Prompts And Image-Prompts (SIGGRAPH 2024 & TOG)
https://zjy526223908.github.io/TIP-Editor/

AttributeError: 'Linear' object has no attribute 'set_lora_layer' #11

Closed: Pine-sha closed this issue 1 week ago

Pine-sha commented 2 weeks ago

    # 2. novel content personalization
    export MODEL_NAME="./res_gaussion/colmap_doll/scene_personalization/checkpoint-1000"
    export OUTPUT_DIR="./res_gaussion/colmap_doll/content_personalization"
    export image_root=./res_gaussion/colmap_doll/sample_views/rgb

    python personalization/content_personalization.py \
        --pretrained_model_name_or_path $MODEL_NAME \
        --enable_xformers_memory_efficient_attention \
        --instance_data_dir $image_root \
        --instance_data_dir "./data/object/sunglasses1" \
        --class_data_dir './res_gaussion/colmap_doll/class_samples' \
        --instance_prompt 'a photo of a plush toy' \
        --instance_prompt 'a photo of a sunglasses' \
        --class_prompt 'a photo of a plush toy' \
        --validation_prompt "a photo of a plush toy wearing a sunglasses" \
        --output_dir $OUTPUT_DIR \
        --scene_frequency 200 \
        --validation_images $image_root/1.375-30.png \
            $image_root/1.3_75_0.png \
            $image_root/1.3_75_30.png \
        --max_train_steps=500

    You are using a model of type clip_text_model to instantiate a model of type . This is not supported for all configurations of models and can yield errors.
    {'variance_type'} was not found in config. Values will be initialized to default values.
    Some weights of the model checkpoint were not used when initializing UNet2DConditionModel:
    ['class_embedding.module.linear_1.bias, class_embedding.module.linear_1.weight, class_embedding.module.linear_2.bias, class_embedding.module.linear_2.weight']
    -- unet: xFormers memory efficient attention is enabled.
    -- unet attn_processor_name = down_blocks.0.attentions.0.transformer_blocks.0.attn1.processor

    Traceback (most recent call last):
      File "personalization/content_personalization.py", line 1594, in <module>
        main(args)
      File "personalization/content_personalization.py", line 1064, in main
        attn_module.to_q.set_lora_layer(
      File "/home/sha/miniforge3/envs/TIP-E/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1695, in __getattr__
        raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
    AttributeError: 'Linear' object has no attribute 'set_lora_layer'

The failing call is in this block of personalization/content_personalization.py:

    unet_lora_parameters = []
    for attn_processor_name, attn_processor in unet.attn_processors.items():
        print("-- unet attn_processor_name = ", attn_processor_name)
        # -- unet attn_processor_name =  down_blocks.0.attentions.0.transformer_blocks.0.attn1.processor

        # Parse the attention module.
        attn_module = unet
        for n in attn_processor_name.split(".")[:-1]:
            attn_module = getattr(attn_module, n)
        # print("-- attn_module = ", attn_module)
            # -- attn_module =  Attention(
            # (to_q): Linear(in_features=320, out_features=320, bias=False)
            # (to_k): Linear(in_features=320, out_features=320, bias=False)
            # (to_v): Linear(in_features=320, out_features=320, bias=False)
            # (to_out): ModuleList(
            #     (0): Linear(in_features=320, out_features=320, bias=True)
            #     (1): Dropout(p=0.0, inplace=False)
            # )
            # )
        # Set the `lora_layer` attribute of the attention-related matrices.
        attn_module.to_q.set_lora_layer(
            LoRALinearLayer(
                in_features=attn_module.to_q.in_features, out_features=attn_module.to_q.out_features, rank=args.rank
            )
        )
        attn_module.to_k.set_lora_layer(
            LoRALinearLayer(
                in_features=attn_module.to_k.in_features, out_features=attn_module.to_k.out_features, rank=args.rank
            )
        )
        attn_module.to_v.set_lora_layer(
            LoRALinearLayer(
                in_features=attn_module.to_v.in_features, out_features=attn_module.to_v.out_features, rank=args.rank
            )
        )
        attn_module.to_out[0].set_lora_layer(
            LoRALinearLayer(
                in_features=attn_module.to_out[0].in_features,
                out_features=attn_module.to_out[0].out_features,
                rank=args.rank,
            )
        )

        # Accumulate the LoRA params to optimize.
        unet_lora_parameters.extend(attn_module.to_q.lora_layer.parameters())
        unet_lora_parameters.extend(attn_module.to_k.lora_layer.parameters())
        unet_lora_parameters.extend(attn_module.to_v.lora_layer.parameters())
        unet_lora_parameters.extend(attn_module.to_out[0].lora_layer.parameters())
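
For reference, a minimal check (not from the repo, and assuming a diffusers version that still ships diffusers.models.lora) showing why the call fails: a plain torch.nn.Linear has no set_lora_layer method, while the legacy LoRACompatibleLinear that this script expects does.

    # Minimal sketch (not from the repo): set_lora_layer exists on the legacy
    # diffusers LoRACompatibleLinear, but not on a plain torch.nn.Linear.
    import torch.nn as nn
    from diffusers.models.lora import LoRACompatibleLinear

    print(hasattr(nn.Linear(320, 320), "set_lora_layer"))             # False
    print(hasattr(LoRACompatibleLinear(320, 320), "set_lora_layer"))  # True
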
Pine-sha commented 1 week ago

I fixed the bug by running pip uninstall peft.

The error occurs because of this code in /root/.conda/envs/xxx/lib/python3.8/site-packages/diffusers/models/transformer_2d.py:

        conv_cls = nn.Conv2d if USE_PEFT_BACKEND else LoRACompatibleConv
        linear_cls = nn.Linear if USE_PEFT_BACKEND else LoRACompatibleLinear
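
In other words, when peft is installed diffusers sets USE_PEFT_BACKEND to True and builds the UNet with plain nn.Linear layers, so the legacy set_lora_layer path used by the training script no longer exists; uninstalling peft makes diffusers fall back to LoRACompatibleLinear. As an alternative to uninstalling, here is a hypothetical early-exit guard (not part of the repo, assuming a diffusers version that exposes USE_PEFT_BACKEND in diffusers.utils) that turns the confusing AttributeError into an explicit message:

    # Hypothetical guard (not in the repo): fail early with a clear message when
    # diffusers has switched to the PEFT backend, since the legacy
    # set_lora_layer() calls in content_personalization.py do not exist on nn.Linear.
    from diffusers.utils import USE_PEFT_BACKEND

    if USE_PEFT_BACKEND:
        raise RuntimeError(
            "diffusers is using the PEFT backend (peft is installed), so attention "
            "projections are plain nn.Linear without set_lora_layer(). "
            "Either run 'pip uninstall peft' or port the script to the PEFT LoRA API."
        )
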