modelscope / DiffSynth-Studio

Enjoy the magic of Diffusion models!

AttributeError: 'LowMemoryVideo' object has no attribute 'reader' #168

Closed · nitinmukesh closed this issue 2 months ago

nitinmukesh commented 2 months ago
(venv) C:\tut\DiffSynth-Studio>python examples\Diffutoon\diffutoon_toon_shading_with_editing_signals.py
Failed to load cpm_kernels:No module named 'cpm_kernels'
Downloading models: ['AingDiffusion_v12', 'AnimateDiff_v2', 'ControlNet_v11p_sd15_lineart', 'ControlNet_v11f1e_sd15_tile', 'ControlNet_v11f1p_sd15_depth', 'ControlNet_v11p_sd15_softedge', 'TextualInversion_VeryBadImageNegative_v1.3']
    aingdiffusion_v12.safetensors has been already in models/stable_diffusion.
    mm_sd_v15_v2.ckpt has been already in models/AnimateDiff.
    control_v11p_sd15_lineart.pth has been already in models/ControlNet.
    sk_model.pth has been already in models/Annotators.
    sk_model2.pth has been already in models/Annotators.
    control_v11f1e_sd15_tile.pth has been already in models/ControlNet.
    control_v11f1p_sd15_depth.pth has been already in models/ControlNet.
    dpt_hybrid-midas-501f0c75.pt has been already in models/Annotators.
    control_v11p_sd15_softedge.pth has been already in models/ControlNet.
    ControlNetHED.pth has been already in models/Annotators.
    verybadimagenegative_v1.3.pt has been already in models/textual_inversion.
Traceback (most recent call last):
  File "C:\tut\DiffSynth-Studio\examples\Diffutoon\diffutoon_toon_shading_with_editing_signals.py", line 193, in <module>
    runner.run(config_stage_1)
  File "c:\tut\diffsynth-studio\diffsynth\pipelines\pipeline_runner.py", line 86, in run
    config["pipeline"]["pipeline_inputs"] = self.add_data_to_pipeline_inputs(config["data"], config["pipeline"]["pipeline_inputs"])
  File "c:\tut\diffsynth-studio\diffsynth\pipelines\pipeline_runner.py", line 64, in add_data_to_pipeline_inputs
    pipeline_inputs["input_frames"] = self.load_video(**data["input_frames"])
  File "c:\tut\diffsynth-studio\diffsynth\pipelines\pipeline_runner.py", line 54, in load_video
    video = VideoData(video_file=video_file, image_folder=image_folder, height=height, width=width)
  File "c:\tut\diffsynth-studio\diffsynth\data\video.py", line 85, in __init__
    self.data = LowMemoryVideo(video_file, **kwargs)
  File "c:\tut\diffsynth-studio\diffsynth\data\video.py", line 9, in __init__
    self.reader = imageio.get_reader(file_name)
  File "C:\tut\DiffSynth-Studio\venv\lib\site-packages\imageio\v2.py", line 290, in get_reader
    image_file = imopen(uri, "r" + mode, **imopen_args)
  File "C:\tut\DiffSynth-Studio\venv\lib\site-packages\imageio\core\imopen.py", line 113, in imopen
    request = Request(uri, io_mode, format_hint=format_hint, extension=extension)
  File "C:\tut\DiffSynth-Studio\venv\lib\site-packages\imageio\core\request.py", line 247, in __init__
    self._parse_uri(uri)
  File "C:\tut\DiffSynth-Studio\venv\lib\site-packages\imageio\core\request.py", line 407, in _parse_uri
    raise FileNotFoundError("No such file: '%s'" % fn)
FileNotFoundError: No such file: 'C:\tut\DiffSynth-Studio\data\examples\diffutoon_edit\input_video.mp4'
Exception ignored in: <function LowMemoryVideo.__del__ at 0x000001A4FEFEA3B0>
Traceback (most recent call last):
  File "c:\tut\diffsynth-studio\diffsynth\data\video.py", line 18, in __del__
    self.reader.close()
AttributeError: 'LowMemoryVideo' object has no attribute 'reader'
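
The AttributeError at the end is a side effect rather than the root cause: imageio.get_reader raises FileNotFoundError inside LowMemoryVideo.__init__ before self.reader is ever assigned, so the __del__ that runs afterwards has no reader attribute to close. A minimal sketch of the failure mode and of a defensive guard (the guard is only an illustration, not the project's actual fix):

# Minimal sketch based on the traceback above (diffsynth/data/video.py):
# imageio.get_reader() raises before self.reader is assigned, so the later
# __del__ call trips over the missing attribute.
import imageio

class LowMemoryVideo:
    def __init__(self, file_name):
        # Raises FileNotFoundError if file_name does not exist,
        # leaving the object without a `reader` attribute.
        self.reader = imageio.get_reader(file_name)

    def __del__(self):
        # Illustrative defensive guard (an assumption, not the upstream code):
        # only close the reader if __init__ got far enough to create it.
        reader = getattr(self, "reader", None)
        if reader is not None:
            reader.close()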

python examples\Diffutoon\diffutoon_toon_shading.py works fine:

(venv) C:\tut\DiffSynth-Studio>python examples\Diffutoon\diffutoon_toon_shading.py
Failed to load cpm_kernels:No module named 'cpm_kernels'
Downloading models: ['AingDiffusion_v12', 'AnimateDiff_v2', 'ControlNet_v11p_sd15_lineart', 'ControlNet_v11f1e_sd15_tile', 'TextualInversion_VeryBadImageNegative_v1.3']
    aingdiffusion_v12.safetensors has been already in models/stable_diffusion.
    mm_sd_v15_v2.ckpt has been already in models/AnimateDiff.
    control_v11p_sd15_lineart.pth has been already in models/ControlNet.
    sk_model.pth has been already in models/Annotators.
    sk_model2.pth has been already in models/Annotators.
    control_v11f1e_sd15_tile.pth has been already in models/ControlNet.
    verybadimagenegative_v1.3.pt has been already in models/textual_inversion.
Loading models from: models/stable_diffusion/aingdiffusion_v12.safetensors
    model_name: sd_text_encoder model_class: SDTextEncoder
    model_name: sd_unet model_class: SDUNet
    model_name: sd_vae_decoder model_class: SDVAEDecoder
    model_name: sd_vae_encoder model_class: SDVAEEncoder
    The following models are loaded: ['sd_text_encoder', 'sd_unet', 'sd_vae_decoder', 'sd_vae_encoder'].
Loading models from: models/AnimateDiff/mm_sd_v15_v2.ckpt
    model_name: sd_motion_modules model_class: SDMotionModel
    The following models are loaded: ['sd_motion_modules'].
Loading models from: models/ControlNet/control_v11f1e_sd15_tile.pth
    model_name: sd_controlnet model_class: SDControlNet
    The following models are loaded: ['sd_controlnet'].
Loading models from: models/ControlNet/control_v11p_sd15_lineart.pth
    model_name: sd_controlnet model_class: SDControlNet
    The following models are loaded: ['sd_controlnet'].
C:\tut\DiffSynth-Studio\venv\lib\site-packages\transformers\tokenization_utils_base.py:1601: FutureWarning: clean_up_tokenization_spaces was not set. It will be set to True by default. This behavior will be depracted in transformers v4.45, and will be then set to False by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
  warnings.warn(
Using sd_text_encoder from models/stable_diffusion/aingdiffusion_v12.safetensors.
Using sd_unet from models/stable_diffusion/aingdiffusion_v12.safetensors.
Using sd_vae_decoder from models/stable_diffusion/aingdiffusion_v12.safetensors.
Using sd_vae_encoder from models/stable_diffusion/aingdiffusion_v12.safetensors.
Using sd_controlnet from models/ControlNet/control_v11f1e_sd15_tile.pth.
Using sd_controlnet from models/ControlNet/control_v11p_sd15_lineart.pth.
No sd_ipadapter models available.
No sd_ipadapter_clip_image_encoder models available.
Using sd_motion_modules from models/AnimateDiff/mm_sd_v15_v2.ckpt.
c:\tut\diffsynth-studio\diffsynth\models\attention.py:54: UserWarning: 1Torch was not compiled with flash attention. (Triggered internally at ..\aten\src\ATen\native\transformers\cuda\sdp_utils.cpp:455.)
  hidden_states = torch.nn.functional.scaled_dot_product_attention(q, k, v, attn_mask=attn_mask)
Textual inversion verybadimagenegative_v1.3 is enabled.
100%|█████████████████████████████████████████████████████████████████████████████████| 30/30 [00:00<00:00, 58.99it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 30/30 [00:04<00:00, 6.82it/s]
100%|█████████████████████████████████████████████████████████████████████████████████| 10/10 [06:47<00:00, 40.76s/it]
Saving images: 100%|██████████████████████████████████████████████████████████████████| 30/30 [00:13<00:00, 2.18it/s]
Saving video: 100%|███████████████████████████████████████████████████████████████████| 30/30 [00:00<00:00, 69.03it/s]

And here is the pip list:

(venv) C:\tut\DiffSynth-Studio>pip list
Package                   Version         Editable project location
------------------------- --------------- -------------------------
absl-py                   2.1.0
addict                    2.4.0
altair                    5.4.0
attrs                     24.2.0
basicsr                   1.4.2
blinker                   1.8.2
cachetools                5.5.0
certifi                   2024.7.4
charset-normalizer        3.3.2
click                     8.1.7
colorama                  0.4.6
controlnet-aux            0.0.7
cupy-cuda12x              13.2.0
diffsynth                 1.0.0           c:\tut\diffsynth-studio
einops                    0.8.0
fastrlock                 0.8.2
filelock                  3.15.4
fsspec                    2024.6.1
future                    1.0.0
gitdb                     4.0.11
GitPython                 3.1.43
grpcio                    1.65.5
huggingface-hub           0.24.6
idna                      3.7
imageio                   2.35.1
imageio-ffmpeg            0.5.1
importlib_metadata        8.3.0
intel-openmp              2021.4.0
Jinja2                    3.1.4
jsonschema                4.23.0
jsonschema-specifications 2023.12.1
lazy_loader               0.4
lmdb                      1.5.1
Markdown                  3.7
markdown-it-py            3.0.0
MarkupSafe                2.1.5
mdurl                     0.1.2
mkl                       2021.4.0
modelscope                1.17.1
mpmath                    1.3.0
narwhals                  1.5.0
networkx                  3.3
numpy                     1.25.1
opencv-python             4.10.0.84
packaging                 24.1
pandas                    2.2.2
pillow                    10.4.0
pip                       22.2.1
platformdirs              4.2.2
protobuf                  5.27.3
psutil                    6.0.0
pyarrow                   17.0.0
pydeck                    0.9.1
Pygments                  2.18.0
python-dateutil           2.9.0.post0
pytz                      2024.1
PyYAML                    6.0.2
referencing               0.35.1
regex                     2024.7.24
requests                  2.32.3
rich                      13.7.1
rpds-py                   0.20.0
safetensors               0.4.4
scikit-image              0.24.0
scipy                     1.14.0
sentencepiece             0.2.0
setuptools                63.2.0
six                       1.16.0
smmap                     5.0.1
streamlit                 1.37.1
streamlit-drawable-canvas 0.9.3
sympy                     1.13.2
tb-nightly                2.18.0a20240820
tbb                       2021.13.1
tenacity                  8.5.0
tensorboard-data-server   0.7.2
tifffile                  2024.8.10
timm                      1.0.8
tokenizers                0.19.1
toml                      0.10.2
tomli                     2.0.1
torch                     2.3.1+cu121
torchvision               0.18.1+cu121
tornado                   6.4.1
tqdm                      4.66.5
transformers              4.44.0
typing_extensions         4.12.2
tzdata                    2024.1
urllib3                   2.2.2
watchdog                  4.0.2
Werkzeug                  3.0.3
yapf                      0.40.2
zipp                      3.20.0

[notice] A new release of pip available: 22.2.1 -> 24.2
[notice] To update, run: python.exe -m pip install --upgrade pip
nitinmukesh commented 2 months ago

It was due to a missing input file; data\examples\diffutoon_edit\input_video.mp4 was not present.
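
For anyone hitting the same thing: the root error in the traceback is the FileNotFoundError for data\examples\diffutoon_edit\input_video.mp4, which the editing-signals example expects you to supply yourself. A quick pre-flight check along these lines (the path is taken from the traceback above; the check itself is just an illustration, not part of the example script) makes the problem obvious before the pipeline starts:

from pathlib import Path

# Path expected by diffutoon_toon_shading_with_editing_signals.py,
# as reported in the FileNotFoundError above.
input_video = Path("data/examples/diffutoon_edit/input_video.mp4")

if not input_video.exists():
    raise FileNotFoundError(
        f"Put your source video at {input_video} before running the example."
    )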