showlab / Tune-A-Video

[ICCV 2023] Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation
https://tuneavideo.github.io
Apache License 2.0

AttributeError: 'AttentionBlock' object has no attribute 'to_q' #82


userkenny commented 10 months ago

I followed the site you suggested to train my own DreamBooth model at 512x512 with the --train_text_encoder option, and the trained model correctly generates a 512x512 picture at inference from the prompt keyword. I trained it on a single 16G 3080 GPU.

However, when I tried to load my own model into Tune-A-Video by changing "pretrained_model_path" to my model's location, I got the error output below. It looks like `AttributeError: 'AttentionBlock' object has no attribute 'to_q'` raised from torch dominates the traceback. Would you please look into it? Any advice is appreciated, thanks.

```
accelerate launch train_tuneavideo.py --config=$CONFIG_NAME
/home/cc/miniconda3/envs/tune_a_video/lib/python3.9/site-packages/bitsandbytes/cextension.py:127: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers and GPU quantization are unavailable.
  warn("The installed version of bitsandbytes was compiled without GPU support. "
args = Namespace(config='configs/man-skiing.yaml')
08/21/2023 09:31:06 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda

Mixed precision type: fp16

The config attributes {'clip_sample_range': 1.0, 'timestep_spacing': 'leading'} were passed to DDPMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
{'variance_type'} was not found in config. Values will be initialized to default values.
The config attributes {'force_upcast': True, 'scaling_factor': 0.18215} were passed to AutoencoderKL, but are not expected and will be ignored. Please verify your config.json configuration file.
Traceback (most recent call last):
  File "/home/cc/Tune-A-Video/train_tuneavideo.py", line 368, in <module>
    main(**OmegaConf.load(args.config))
  File "/home/cc/Tune-A-Video/train_tuneavideo.py", line 108, in main
    vae = AutoencoderKL.from_pretrained(pretrained_model_path, subfolder="vae")
  File "/home/cc/miniconda3/envs/tune_a_video/lib/python3.9/site-packages/diffusers/modeling_utils.py", line 491, in from_pretrained
    set_module_tensor_to_device(model, param_name, param_device, value=param)
  File "/home/cc/miniconda3/envs/tune_a_video/lib/python3.9/site-packages/accelerate/utils/modeling.py", line 255, in set_module_tensor_to_device
    new_module = getattr(module, split)
  File "/home/cc/miniconda3/envs/tune_a_video/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'AttentionBlock' object has no attribute 'to_q'
Traceback (most recent call last):
  File "/home/cc/miniconda3/envs/tune_a_video/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/cc/miniconda3/envs/tune_a_video/lib/python3.9/site-packages/accelerate/commands/accelerate_cli.py", line 45, in main
    args.func(args)
  File "/home/cc/miniconda3/envs/tune_a_video/lib/python3.9/site-packages/accelerate/commands/launch.py", line 979, in launch_command
    simple_launcher(args)
  File "/home/cc/miniconda3/envs/tune_a_video/lib/python3.9/site-packages/accelerate/commands/launch.py", line 628, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/cc/miniconda3/envs/tune_a_video/bin/python', 'train_tuneavideo.py', '--config=configs/man-skiing.yaml']' returned non-zero exit status 1.
```
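For context (an editor's hedged guess, not confirmed by the maintainers): errors like this typically mean the DreamBooth checkpoint was saved with a newer diffusers release, whose VAE attention layers store weights under `to_q`/`to_k`/`to_v` names, while the older diffusers version that Tune-A-Video uses still builds the legacy `AttentionBlock` with `query`/`key`/`value` attributes, so loading fails on the unknown attribute. A minimal sketch of the layout mismatch (the key names below are illustrative examples, not read from any real checkpoint):

```python
# Sketch: distinguish the post-rename attention weight layout
# ("to_q"/"to_k"/"to_v") from the older AttentionBlock layout
# ("query"/"key"/"value"). A checkpoint in the new layout cannot be
# loaded by an old diffusers AttentionBlock, producing the
# AttributeError seen in the traceback above.

def uses_new_attention_layout(state_dict_keys):
    """Return True if any attention weight key uses the newer 'to_q' naming."""
    return any(".to_q." in key for key in state_dict_keys)

# Example keys as they might appear in a saved VAE state dict:
old_style = ["encoder.mid_block.attentions.0.query.weight"]
new_style = ["encoder.mid_block.attentions.0.to_q.weight"]

print(uses_new_attention_layout(old_style))  # False: compatible with old diffusers
print(uses_new_attention_layout(new_style))  # True: would trigger the error above
```

If this is the cause, re-training the DreamBooth model under the same diffusers version that Tune-A-Video's requirements pin (or converting the checkpoint's attention key names) should avoid the error.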

tangdong1994 commented 9 months ago

Using a single GPU to train could help.