Is there an existing issue for this?
[X] I have searched the existing issues and checked the recent builds/commits of both this extension and the webui
Have you read FAQ on README?
[X] I have updated WebUI and this extension to the latest version
What happened?
Using a video source works fine in txt2img, but is broken in img2img. AnimateDiff appears not to be extracting video frames and routing them to the webui. It fails in slightly different ways in both 1111 and Forge, and it fails whether I provide a video source or a "video path" to a folder of pre-extracted frames.

Running Ubuntu 22.04 with an RTX 3060 12GB. Clean installs of both.

---
Failure in 1111:
*** Error completing request
*** Arguments: ('task(qu8zrme3zc6pajb)', 0, 'meat', '', [], None, None, None, None, None, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0.0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x7fa757c69e70>, 0, False, 1, 0.5, 4, 0, 0.5, 2, False, '', 0.8, -1, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x7fa757c6b1c0>, UiControlNetUnit(enabled=True, module='canny', model='control_v11p_sd15_canny [d14c016b]', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), UiControlNetUnit(enabled=False, module='none', model='None', weight=1, image=None, resize_mode='Crop and Resize', low_vram=False, processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', inpaint_crop_input_image=False, hr_option='Both', save_detected_map=True, advanced_weighting=None), '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale 
factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False, None, None, False, None, None, False, None, None, False, 50) {}
Traceback (most recent call last):
File "/home/user/ml/stable-diffusion-webui/modules/call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "/home/user/ml/stable-diffusion-webui/modules/call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "/home/user/ml/stable-diffusion-webui/modules/img2img.py", line 235, in img2img
processed = process_images(p)
File "/home/user/ml/stable-diffusion-webui/modules/processing.py", line 785, in process_images
res = process_images_inner(p)
File "/home/user/ml/stable-diffusion-webui/extensions/sd-webui-controlnet/scripts/batch_hijack.py", line 59, in processing_process_images_hijack
return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
File "/home/user/ml/stable-diffusion-webui/modules/processing.py", line 855, in process_images_inner
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
File "/home/user/ml/stable-diffusion-webui/modules/processing.py", line 1574, in init
image = images.flatten(img, opts.img2img_background_color)
File "/home/user/ml/stable-diffusion-webui/modules/images.py", line 793, in flatten
if img.mode == "RGBA":
AttributeError: 'NoneType' object has no attribute 'mode'
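The traceback dies inside webui's `flatten` helper. A minimal paraphrase (not the exact upstream code; the compositing step is elided) shows why a missing init image surfaces this way: the function dereferences `img.mode` before anything else, so when AnimateDiff fails to hand img2img an init image, `img` is `None`:

```python
def flatten(img, bgcolor="#ffffff"):
    # Paraphrase of webui's modules/images.py flatten(): the first thing it
    # does is read img.mode, so a None init image crashes immediately.
    if img.mode == "RGBA":  # AttributeError when img is None
        # upstream composites the image onto bgcolor here (elided)
        img = img.convert("RGB")
    return img

try:
    flatten(None)
except AttributeError as e:
    print(e)  # 'NoneType' object has no attribute 'mode'
```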
---
Failure in Forge (with plugin branch forge/master):
Traceback (most recent call last):
File "/home/zostrianos/ml/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 37, in loop
task.work()
File "/home/zostrianos/ml/stable-diffusion-webui-forge/modules_forge/main_thread.py", line 26, in work
self.result = self.func(*self.args, **self.kwargs)
File "/home/zostrianos/ml/stable-diffusion-webui-forge/modules/img2img.py", line 236, in img2img_function
processed = process_images(p)
File "/home/zostrianos/ml/stable-diffusion-webui-forge/modules/processing.py", line 752, in process_images
res = process_images_inner(p)
File "/home/zostrianos/ml/stable-diffusion-webui-forge/modules/processing.py", line 820, in process_images_inner
p.init(p.all_prompts, p.all_seeds, p.all_subseeds)
File "/home/zostrianos/ml/stable-diffusion-webui-forge/modules/processing.py", line 1602, in init
image = images.flatten(img, opts.img2img_background_color)
File "/home/zostrianos/ml/stable-diffusion-webui-forge/modules/images.py", line 793, in flatten
if img.mode == "RGBA":
AttributeError: 'NoneType' object has no attribute 'mode'
'NoneType' object has no attribute 'mode'
*** Error completing request
*** Arguments: ('task(smjx0m5tf8mra1c)', 0, 'meat', '', [], None, None, None, None, None, None, None, 20, 'DPM++ 2M Karras', 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0.0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', <gradio.routes.Request object at 0x7f8afffa10c0>, 0, False, 1, 0.5, 4, 0, 0.5, 2, False, '', 0.8, -1, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x7f8afff8e470>, ControlNetUnit(input_mode=<InputMode.BATCH: 'batch'>, use_preview_as_input=False, batch_image_dir='/home/zostrianos/ml/stable-diffusion-webui-forge/tmp/animatediff-frames/flowersflowers-deceived-1-2941b119', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=True, module='canny', model='control_v11p_sd15_canny [d14c016b]', weight=1, image=None, resize_mode='Crop and Resize', processor_res=512, threshold_a=0.5, threshold_b=0.5, guidance_start=0, guidance_end=1, pixel_perfect=True, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), ControlNetUnit(input_mode=<InputMode.SIMPLE: 'simple'>, use_preview_as_input=False, batch_image_dir='', batch_mask_dir='', batch_input_gallery=[], batch_mask_gallery=[], generated_image=None, mask_image=None, hr_option='Both', enabled=False, module='None', model='None', weight=1, image=None, resize_mode='Crop and Resize', processor_res=-1, threshold_a=-1, threshold_b=-1, guidance_start=0, guidance_end=1, pixel_perfect=False, control_mode='Balanced', save_detected_map=True), False, 7, 1, 
'Constant', 0, 'Constant', 0, 1, 'enable', 'MEAN', 'AD', 1, False, 1.01, 1.02, 0.99, 0.95, False, 0.5, 2, False, 256, 2, 0, False, False, 3, 2, 0, 0.35, True, 'bicubic', 'bicubic', False, 0, 'anisotropic', 0, 'reinhard', 100, 0, 'subtract', 0, 0, 'gaussian', 'add', 0, 100, 127, 0, 'hard_clamp', 5, 0, 'None', 'None', False, 'MultiDiffusion', 768, 768, 64, 4, False, False, False, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/home/zostrianos/ml/stable-diffusion-webui-forge/modules/call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
TypeError: 'NoneType' object is not iterable
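This trailing TypeError looks like a follow-on error rather than a separate bug: `call_queue` wraps the handler in `list(func(*args, **kwargs))`, and because the img2img call aborted and returned `None`, the `list()` call itself raises. A minimal illustration (`handler` below is a stand-in, not webui code):

```python
def handler():
    # Stand-in for the img2img entry point: after the crash above it
    # produces no result, so the call_queue wrapper receives None.
    return None

try:
    list(handler())
except TypeError as e:
    print(e)  # 'NoneType' object is not iterable
```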
---
Steps to reproduce the problem
1. Launch WebUI
2. Go to the img2img tab
3. Enter a prompt
4. Activate AnimateDiff
5. Upload a source video
6. Activate ControlNet
7. Click "Generate"
8. Generation fails
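One way to narrow this down is to check whether AnimateDiff extracted anything at all before the crash. The Forge argument dump shows frames landing under `tmp/animatediff-frames/` inside the webui root (the per-run hash suffix varies). A small hypothetical helper, `count_frames`, just walks that tree and counts PNGs:

```python
import os

def count_frames(frames_dir):
    """Count extracted PNG frames anywhere under frames_dir."""
    return sum(
        1
        for _root, _dirs, files in os.walk(frames_dir)
        for name in files
        if name.lower().endswith(".png")
    )

# Run from the webui root; 0 strongly suggests frame extraction never ran.
print(count_frames("tmp/animatediff-frames"))
```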
What should have happened?
AnimateDiff should process the video frames extracted from the video file.
Documentation for ControlNet V2V states:

> AnimateDiff Video Path. If you upload a path to frames through Video Path, it will be the source control for ALL ControlNet units that you enable without submitting a control image or a path to ControlNet panel.
>
> AnimateDiff Video Source. If you upload a video through Video Source, it will be the source control for ALL ControlNet units that you enable without submitting a control image or a path to ControlNet panel.
https://github.com/continue-revolution/sd-webui-animatediff/blob/master/docs/features.md#controlnet-v2v
Commit where the problem happens
webui:
extension:
What browsers do you use to access the UI?
Mozilla Firefox, Google Chrome
Command Line Arguments
Console logs
Additional information
No response