Vchitect / Latte

Latte: Latent Diffusion Transformer for Video Generation.
Apache License 2.0

sh sample/t2v.sh error, #18

Open ZerRui opened 4 months ago

ZerRui commented 4 months ago

```
sh sample/t2v.sh
Using model!
Traceback (most recent call last):
  File "/data/zhangmaolin/code/Latte/sample/sample_t2v.py", line 160, in <module>
    main(OmegaConf.load(args.config))
  File "/data/zhangmaolin/code/Latte/sample/sample_t2v.py", line 34, in main
    vae = AutoencoderKL.from_pretrained(args.pretrained_model_path, subfolder="vae", torch_dtype=torch.float16).to(device)
  File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/diffusers/models/modeling_utils.py", line 812, in from_pretrained
    unexpected_keys = load_model_dict_into_meta(
  File "/home/user/anaconda3/envs/py39/lib/python3.9/site-packages/diffusers/models/modeling_utils.py", line 155, in load_model_dict_into_meta
    raise ValueError(
ValueError: Cannot load /data/zhangmaolin/code/Lattle_file/Latte/t2v_required_models because decoder.conv_in.bias expected shape tensor(..., device='meta', size=(64,)), but got torch.Size([512]). If you want to instead overwrite randomly initialized weights, please make sure to pass both low_cpu_mem_usage=False and ignore_mismatched_sizes=True. For more information, see also: https://github.com/huggingface/diffusers/issues/1619#issuecomment-1345604389 as an example.
```

Hello, could you help me figure out how to fix this? I get this error when running t2v.sh. Looking forward to your reply.
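For reference, the fallback that the ValueError itself suggests would look roughly like the sketch below. Note that it only silences the shape check: layers whose shapes do not match the config stay randomly initialized, so it is a diagnostic last resort rather than a real fix.

```python
import torch
from diffusers import AutoencoderKL

# Sketch of the workaround named in the error message: both flags must be passed
# together so diffusers overwrites mismatched weights instead of raising.
# The path is the one from the traceback above.
vae = AutoencoderKL.from_pretrained(
    "/data/zhangmaolin/code/Lattle_file/Latte/t2v_required_models",
    subfolder="vae",
    torch_dtype=torch.float16,
    low_cpu_mem_usage=False,
    ignore_mismatched_sizes=True,
).to("cuda")
```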

maxin-cn commented 4 months ago

Hi, thank you for your issue. I found that some files in t2v_required_models were named incorrectly. I have corrected the file names, so you can rename your downloaded files to match the updated names.
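One quick way to spot misnamed files is to list the vae subfolder and compare it against the standard diffusers layout, which normally contains config.json plus diffusion_pytorch_model.safetensors (or .bin in older releases). A minimal sketch, with the path taken from the traceback above:

```python
import os

# List the VAE folder to check for misnamed files; diffusers' from_pretrained
# looks for config.json and diffusion_pytorch_model.safetensors / .bin here.
vae_dir = "/data/zhangmaolin/code/Lattle_file/Latte/t2v_required_models/vae"
print(sorted(os.listdir(vae_dir)))
```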

ZerRui commented 4 months ago

```python
class AutoencoderKL(ModelMixin, ConfigMixin, FromOriginalVAEMixin):
    r"""
    A VAE model with KL loss for encoding images into latents and decoding latent representations into images.

    This model inherits from [`ModelMixin`]. Check the superclass documentation for its generic methods implemented
    for all models (such as downloading or saving).

    Parameters:
        in_channels (int, *optional*, defaults to 3): Number of channels in the input image.
        out_channels (int, *optional*, defaults to 3): Number of channels in the output.
        down_block_types (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`):
            Tuple of downsample block types.
        up_block_types (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`):
            Tuple of upsample block types.
        block_out_channels (`Tuple[int]`, *optional*, defaults to `(64,)`):
            Tuple of block output channels.
        act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use.
        latent_channels (`int`, *optional*, defaults to 4): Number of channels in the latent space.
        sample_size (`int`, *optional*, defaults to `32`): Sample input size.
        scaling_factor (`float`, *optional*, defaults to 0.18215):
            The component-wise standard deviation of the trained latent space computed using the first batch of the
            training set. This is used to scale the latent space to have unit variance when training the diffusion
            model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the
            diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1
            / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image
            Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper.
        force_upcast (`bool`, *optional*, defaults to `True`):
            If enabled it will force the VAE to run in float32 for high image resolution pipelines, such as SD-XL. VAE
            can be fine-tuned / trained to a lower range without losing too much precision, in which case
            `force_upcast` can be set to `False` - see: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix
    """

    _supports_gradient_checkpointing = True

    @register_to_config
    def __init__(
        self,
        in_channels: int = 3,
        out_channels: int = 3,
        down_block_types: Tuple[str] = ("DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D", "DownEncoderBlock2D"),
        up_block_types: Tuple[str] = ("UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D", "UpDecoderBlock2D"),
        block_out_channels: Tuple[int] = (128, 256, 512, 512),
        layers_per_block: int = 2,
        act_fn: str = "silu",
        latent_channels: int = 4,
        norm_num_groups: int = 32,
        sample_size: int = 32,
        scaling_factor: float = 0.18215,
        force_upcast: float = True,
    ):
```
I manually edited the configuration in the code to match the parameters in config.json and it works now. It seems the code was not automatically loading config.json. I also installed imageio==2.20.0 and imageio-ffmpeg.
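A less invasive check than editing the library defaults is to load only the VAE's config.json and inspect what diffusers actually parses; a minimal sketch, again using the path from the traceback:

```python
from diffusers import AutoencoderKL

# Load just config.json (no weights). If this fails, or block_out_channels does
# not come back as (128, 256, 512, 512), from_pretrained will build the default
# (64,)-channel VAE and hit the shape mismatch reported above.
pretrained_model_path = "/data/zhangmaolin/code/Lattle_file/Latte/t2v_required_models"
config = AutoencoderKL.load_config(pretrained_model_path, subfolder="vae")
print(config["block_out_channels"], config["latent_channels"])
```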