(venv) C:\tut\DiffSynth-Studio>python examples\Diffutoon\diffutoon_toon_shading_with_editing_signals.py
Failed to load cpm_kernels:No module named 'cpm_kernels'
Downloading models: ['AingDiffusion_v12', 'AnimateDiff_v2', 'ControlNet_v11p_sd15_lineart', 'ControlNet_v11f1e_sd15_tile', 'ControlNet_v11f1p_sd15_depth', 'ControlNet_v11p_sd15_softedge', 'TextualInversion_VeryBadImageNegative_v1.3']
aingdiffusion_v12.safetensors has been already in models/stable_diffusion.
mm_sd_v15_v2.ckpt has been already in models/AnimateDiff.
control_v11p_sd15_lineart.pth has been already in models/ControlNet.
sk_model.pth has been already in models/Annotators.
sk_model2.pth has been already in models/Annotators.
control_v11f1e_sd15_tile.pth has been already in models/ControlNet.
control_v11f1p_sd15_depth.pth has been already in models/ControlNet.
dpt_hybrid-midas-501f0c75.pt has been already in models/Annotators.
control_v11p_sd15_softedge.pth has been already in models/ControlNet.
ControlNetHED.pth has been already in models/Annotators.
verybadimagenegative_v1.3.pt has been already in models/textual_inversion.
Traceback (most recent call last):
  File "C:\tut\DiffSynth-Studio\examples\Diffutoon\diffutoon_toon_shading_with_editing_signals.py", line 193, in <module>
    runner.run(config_stage_1)
  File "c:\tut\diffsynth-studio\diffsynth\pipelines\pipeline_runner.py", line 86, in run
    config["pipeline"]["pipeline_inputs"] = self.add_data_to_pipeline_inputs(config["data"], config["pipeline"]["pipeline_inputs"])
  File "c:\tut\diffsynth-studio\diffsynth\pipelines\pipeline_runner.py", line 64, in add_data_to_pipeline_inputs
    pipeline_inputs["input_frames"] = self.load_video(**data["input_frames"])
  File "c:\tut\diffsynth-studio\diffsynth\pipelines\pipeline_runner.py", line 54, in load_video
    video = VideoData(video_file=video_file, image_folder=image_folder, height=height, width=width)
  File "c:\tut\diffsynth-studio\diffsynth\data\video.py", line 85, in __init__
    self.data = LowMemoryVideo(video_file, **kwargs)
  File "c:\tut\diffsynth-studio\diffsynth\data\video.py", line 9, in __init__
    self.reader = imageio.get_reader(file_name)
  File "C:\tut\DiffSynth-Studio\venv\lib\site-packages\imageio\v2.py", line 290, in get_reader
    image_file = imopen(uri, "r" + mode, **imopen_args)
  File "C:\tut\DiffSynth-Studio\venv\lib\site-packages\imageio\core\imopen.py", line 113, in imopen
    request = Request(uri, io_mode, format_hint=format_hint, extension=extension)
  File "C:\tut\DiffSynth-Studio\venv\lib\site-packages\imageio\core\request.py", line 247, in __init__
    self._parse_uri(uri)
  File "C:\tut\DiffSynth-Studio\venv\lib\site-packages\imageio\core\request.py", line 407, in _parse_uri
    raise FileNotFoundError("No such file: '%s'" % fn)
FileNotFoundError: No such file: 'C:\tut\DiffSynth-Studio\data\examples\diffutoon_edit\input_video.mp4'
Exception ignored in: <function LowMemoryVideo.__del__ at 0x000001A4FEFEA3B0>
Traceback (most recent call last):
  File "c:\tut\diffsynth-studio\diffsynth\data\video.py", line 18, in __del__
    self.reader.close()
AttributeError: 'LowMemoryVideo' object has no attribute 'reader'
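The failure itself is straightforward: unlike the model checkpoints, the input video is not downloaded automatically, and the editing-signals example expects it at data\examples\diffutoon_edit\input_video.mp4. A pre-flight check like the following (a sketch; the config key path config_stage_1['data']['input_frames']['video_file'] is inferred from the traceback and may be named differently in your copy of the script) makes the problem obvious before any models load:

```python
import os

# Path copied verbatim from the FileNotFoundError above.
video_file = r"C:\tut\DiffSynth-Studio\data\examples\diffutoon_edit\input_video.mp4"

if not os.path.isfile(video_file):
    raise SystemExit(
        f"Missing input video: {video_file}\n"
        "Put a source clip there, or edit the video_file entry under "
        "config_stage_1['data']['input_frames'] in the example script."
    )
```

The trailing "Exception ignored" block is a side effect rather than a second bug: LowMemoryVideo.__del__ tries to close self.reader, but __init__ raised before self.reader was ever assigned. A defensive destructor would silence it (a sketch; only the two lines shown in the traceback are known, the rest of the class is assumed):

```python
class LowMemoryVideo:
    # __init__ assigns self.reader = imageio.get_reader(file_name), which
    # raises before the assignment when the file does not exist.

    def __del__(self):
        # Close the reader only if __init__ got far enough to create it.
        reader = getattr(self, "reader", None)
        if reader is not None:
            reader.close()
```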
By contrast, python examples\Diffutoon\diffutoon_toon_shading.py works fine:
(venv) C:\tut\DiffSynth-Studio>python examples\Diffutoon\diffutoon_toon_shading.py
Failed to load cpm_kernels:No module named 'cpm_kernels'
Downloading models: ['AingDiffusion_v12', 'AnimateDiff_v2', 'ControlNet_v11p_sd15_lineart', 'ControlNet_v11f1e_sd15_tile', 'TextualInversion_VeryBadImageNegative_v1.3']
aingdiffusion_v12.safetensors has been already in models/stable_diffusion.
mm_sd_v15_v2.ckpt has been already in models/AnimateDiff.
control_v11p_sd15_lineart.pth has been already in models/ControlNet.
sk_model.pth has been already in models/Annotators.
sk_model2.pth has been already in models/Annotators.
control_v11f1e_sd15_tile.pth has been already in models/ControlNet.
verybadimagenegative_v1.3.pt has been already in models/textual_inversion.
Loading models from: models/stable_diffusion/aingdiffusion_v12.safetensors
model_name: sd_text_encoder model_class: SDTextEncoder
model_name: sd_unet model_class: SDUNet
model_name: sd_vae_decoder model_class: SDVAEDecoder
model_name: sd_vae_encoder model_class: SDVAEEncoder
The following models are loaded: ['sd_text_encoder', 'sd_unet', 'sd_vae_decoder', 'sd_vae_encoder'].
Loading models from: models/AnimateDiff/mm_sd_v15_v2.ckpt
model_name: sd_motion_modules model_class: SDMotionModel
The following models are loaded: ['sd_motion_modules'].
Loading models from: models/ControlNet/control_v11f1e_sd15_tile.pth
model_name: sd_controlnet model_class: SDControlNet
The following models are loaded: ['sd_controlnet'].
Loading models from: models/ControlNet/control_v11p_sd15_lineart.pth
model_name: sd_controlnet model_class: SDControlNet
The following models are loaded: ['sd_controlnet'].
C:\tut\DiffSynth-Studio\venv\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Using sd_text_encoder from models/stable_diffusion/aingdiffusion_v12.safetensors.
Using sd_unet from models/stable_diffusion/aingdiffusion_v12.safetensors.
Using sd_vae_decoder from models/stable_diffusion/aingdiffusion_v12.safetensors.
Using sd_vae_encoder from models/stable_diffusion/aingdiffusion_v12.safetensors.
Using sd_controlnet from models/ControlNet/control_v11f1e_sd15_tile.pth.
Using sd_controlnet from models/ControlNet/control_v11p_sd15_lineart.pth.
No sd_ipadapter models available.
No sd_ipadapter_clip_image_encoder models available.
Using sd_motion_modules from models/AnimateDiff/mm_sd_v15_v2.ckpt.
c:\tut\diffsynth-studio\diffsynth\models\attention.py:54: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  hidden_states = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
Textual inversion verybadimagenegative_v1.3 is enabled.
100%|█████████████████████████████████████████████████████████████████████████████████| 30/30 [00:00<00:00, 58.99it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 30/30 [00:04<00:00, 6.82it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 10/10 [06:47<00:00, 40.76s/it]
Saving images: 100%|██████████████████████████████████████████████████████████████████| 30/30 [00:13<00:00, 2.18it/s]
Saving video: 100%|███████████████████████████████████████████████████████████████████| 30/30 [00:00<00:00, 69.03it/s]
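For completeness, none of the warnings in the successful run affect the output: the cpm_kernels message concerns an optional dependency (installable with pip install cpm_kernels, the package name printed in the message itself), the "1Torch was not compiled with flash attention" notice is a known PyTorch build message, and the tokenizer FutureWarning only announces a default change coming in transformers v4.45. If you want a quieter log, a standard-library filter before the pipeline runs is enough (a minimal sketch using only the warnings module; the module regex is assumed to match where transformers emits the warning, as shown in the log above):

```python
import warnings

# Hide the clean_up_tokenization_spaces FutureWarning emitted from
# transformers/tokenization_utils_base.py; behavior is unchanged
# until transformers v4.45.
warnings.filterwarnings(
    "ignore",
    category=FutureWarning,
    module=r"transformers\.tokenization_utils_base",
)
```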
And here is my pip list: