/mnt/home/code/workspace/AnimateAnyone-unofficial/demo/gradio_pipeline.py:96: FutureWarning: The configuration file of this scheduler: DDIMScheduler {
"_class_name": "DDIMScheduler",
"_diffusers_version": "0.21.4",
"beta_end": 0.012,
"beta_schedule": "linear",
"beta_start": 0.00085,
"clip_sample": true,
"clip_sample_range": 1.0,
"dynamic_thresholding_ratio": 0.995,
"num_train_timesteps": 1000,
"prediction_type": "epsilon",
"rescale_betas_zero_snr": false,
"sample_max_value": 1.0,
"set_alpha_to_one": true,
"steps_offset": 1,
"thresholding": false,
"timestep_spacing": "leading",
"trained_betas": null
}
has not set the configuration `clip_sample`. `clip_sample` should be set to False in the configuration file. Please make sure to update the config accordingly as not setting `clip_sample` in the config might lead to incorrect results in future versions. If you have downloaded this checkpoint from the Hugging Face Hub, it would be very nice if you could open a Pull request for the `scheduler/scheduler_config.json` file
deprecate("clip_sample not set", "1.0.0", deprecation_message, standard_warn=False)
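One way to silence this first warning, as the message itself suggests, is to set `clip_sample` to `false` in the checkpoint's `scheduler/scheduler_config.json`, or to override it when the scheduler is built. A minimal sketch; the scheduler path below is hypothetical:

```python
from diffusers import DDIMScheduler

# Extra kwargs passed to from_pretrained() override the values stored in
# scheduler_config.json, so this replaces the deprecated default.
scheduler = DDIMScheduler.from_pretrained(
    "path/to/checkpoint/scheduler",  # hypothetical local scheduler dir
    clip_sample=False,
)
```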
Initialization Done!
/mnt/home/code/workspace/AnimateAnyone-unofficial/demo/gradio_pipeline.py:479: FutureWarning: Accessing config attribute in_channels directly via 'UNet3DConditionModel' object attribute is deprecated. Please access 'in_channels' over 'UNet3DConditionModel's config object instead, e.g. 'unet.config.in_channels'.
num_channels_latents = self.unet.in_channels
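The fix for this second warning is spelled out in the message itself: read the value from the model's config object instead of the module attribute. In `gradio_pipeline.py` that line would become:

```python
# Access in_channels via the config object, as the deprecation message asks.
num_channels_latents = self.unet.config.in_channels
```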
Traceback (most recent call last):
  File "/opt/conda/envs/animate/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/opt/conda/envs/animate/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/mnt/home/code/workspace/AnimateAnyone-unofficial/demo/animate_cli.py", line 37, in <module>
    animate_images(args)
  File "/mnt/home/code/workspace/AnimateAnyone-unofficial/demo/animate_cli.py", line 20, in animate_images
    animation_path = animator(reference_image, motion_sequence, seed, steps, guidance_scale, size)
  File "/mnt/home/code/workspace/AnimateAnyone-unofficial/demo/animate.py", line 112, in __call__
    sample = self.pipeline(
  File "/opt/conda/envs/animate/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/mnt/home/code/workspace/AnimateAnyone-unofficial/demo/gradio_pipeline.py", line 512, in __call__
    image_embeddings = clip_image_encoder(clip_ref_image).unsqueeze(1).to(device=latents.device,dtype=latents.dtype)
  File "/opt/conda/envs/animate/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/mnt/home/code/workspace/AnimateAnyone-unofficial/models/ReferenceEncoder.py", line 23, in forward
    outputs = self.model(pixel_values)
  File "/opt/conda/envs/animate/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/envs/animate/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 941, in forward
    return self.vision_model(
  File "/opt/conda/envs/animate/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/envs/animate/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 866, in forward
    hidden_states = self.embeddings(pixel_values)
  File "/opt/conda/envs/animate/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/envs/animate/lib/python3.8/site-packages/transformers/models/clip/modeling_clip.py", line 195, in forward
    patch_embeds = self.patch_embedding(pixel_values)  # shape = [*, width, grid, grid]
  File "/opt/conda/envs/animate/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1194, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/envs/animate/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/opt/conda/envs/animate/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
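The actual failure is a dtype mismatch: the CLIP vision encoder's weights are fp16 (`torch.cuda.HalfTensor`) while the reference image tensor is fp32 (`torch.cuda.FloatTensor`). A minimal sketch of one possible fix, not the repo's official patch: cast the input to the encoder's weight dtype before the call at `gradio_pipeline.py` line 512 (`clip_image_encoder`, `clip_ref_image`, and `latents` are the names from the traceback above):

```python
# Match the input dtype to whatever precision the encoder was loaded in.
encoder_dtype = next(clip_image_encoder.parameters()).dtype  # torch.float16 here
clip_ref_image = clip_ref_image.to(dtype=encoder_dtype)

image_embeddings = (
    clip_image_encoder(clip_ref_image)
    .unsqueeze(1)
    .to(device=latents.device, dtype=latents.dtype)
)
```

Alternatively, the encoder could be kept in fp32, but casting the input is the smaller change.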