Closed — throttlekitty closed this issue 1 year ago
Hmm, looks like an external webui/torch2 issue, because I was running it on torch 1.12 and the latest webui commit
If you have enough vram, try uncommenting the lowvram-related lines in the main file
I jumped back to a fresh vanilla venv and still get the same error. I am also on the latest webui commit. Commenting the lowvram lines gives me this:
Arguments: ('test', '', 20, 24, 7, 256, 256, 0.0, False) {}
Traceback (most recent call last):
  File "C:\stable-diffusion\a1-sd-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion\a1-sd-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion\a1-sd-webui\extensions\sd-webui-modelscope-text2video\scripts\modelscope-text2vid.py", line 54, in process
    return outdir_current + os.path.sep + f"vid.mp4"
UnboundLocalError: local variable 'outdir_current' referenced before assignment
Traceback (most recent call last):
  File "C:\stable-diffusion\a1-sd-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\stable-diffusion\a1-sd-webui\venv\lib\site-packages\gradio\blocks.py", line 1018, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "C:\stable-diffusion\a1-sd-webui\venv\lib\site-packages\gradio\blocks.py", line 956, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "C:\stable-diffusion\a1-sd-webui\venv\lib\site-packages\gradio\components.py", line 1860, in postprocess
    returned_format = y.split(".")[-1].lower()
AttributeError: 'tuple' object has no attribute 'split'
UnboundLocalError: local variable 'outdir_current' referenced before assignment
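For context, that UnboundLocalError usually means the variable is only assigned inside a branch that never ran, so the generation likely failed before outdir_current was ever set and the return line then masked the real failure. A minimal sketch of the pattern (function and names are hypothetical, not the extension's actual code):

```python
import os

def process(generation_succeeded: bool) -> str:
    # outdir_current is only bound on the success path; if earlier
    # steps fail, the return line raises UnboundLocalError instead
    # of surfacing the original error.
    if generation_succeeded:
        outdir_current = os.path.join("outputs", "text2video")
    return outdir_current + os.path.sep + "vid.mp4"
```

Initialising the variable before the branch (or re-raising the original exception) would make the real failure visible instead.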
Oh, that's another error now 👀
I need to get some hours of sleep after releasing this, so wait for a tiny bit
Haha, sleep well! Doing some testing over here: tried uncommenting lines and downgraded to torch 1.13.1 in the venv (I had that version working with the standalone code, separate from automatic1111). Let me know if there's anything further you need tested for this issue!
I've actually been experiencing the exact same issue, and I'm on pytorch 1.13.1. Looking forward to the solutions haha
FFMPEG Video (sorry, no audio) stitching done in 0.05 seconds!
t2v complete, result saved at C:\StableDiffusion\automatic1111 new\outputs/img2img-images\text2video-modelscope\20230319220306
Traceback (most recent call last):
  File "C:\StableDiffusion\automatic1111 new\venv\lib\site-packages\gradio\routes.py", line 321, in run_predict
    output = await app.blocks.process_api(
  File "C:\StableDiffusion\automatic1111 new\venv\lib\site-packages\gradio\blocks.py", line 1016, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "C:\StableDiffusion\automatic1111 new\venv\lib\site-packages\gradio\blocks.py", line 962, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "C:\StableDiffusion\automatic1111 new\venv\lib\site-packages\gradio\components.py", line 1782, in postprocess
    returned_format = y.split(".")[-1].lower()
AttributeError: 'tuple' object has no attribute 'split'
This is what I get after commenting out the lowvram lines (in modelscope-text2vid.py) and removing lowvram from the import, leaving from modules import devices, sd_hijack
Edit: I should mention the frames still generate in the output folder, but it doesn't stitch them into a video or show one in the UI. Also, every set of frames is fairly low quality, a lot worse than the examples I've seen (gray background, fuzzy oil painting of a person moving). Maybe it's my prompting? "man in a business suit walking in new york city"
Traceback (most recent call last):
  File "H:\NovelAI\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "H:\NovelAI\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "H:\NovelAI\stable-diffusion-webui\extensions\sd-webui-modelscope-text2video\scripts\modelscope-text2vid.py", line 52, in process
    lowvram.setup_for_low_vram(sd_model, cmd_opts.medvram)
  File "H:\NovelAI\stable-diffusion-webui\modules\lowvram.py", line 42, in setup_for_low_vram
    first_stage_model = sd_model.first_stage_model
AttributeError: 'NoneType' object has no attribute 'first_stage_model'
Traceback (most recent call last):
  File "H:\NovelAI\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "H:\NovelAI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1018, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "H:\NovelAI\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 956, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "H:\NovelAI\stable-diffusion-webui\venv\lib\site-packages\gradio\components.py", line 1860, in postprocess
    returned_format = y.split(".")[-1].lower()
AttributeError: 'tuple' object has no attribute 'split'
I got this as well, but I didn't make any changes; it just appeared after the first issue. Also, the code references a folder where the model dumps the generated frames before actually stitching them into a video: stable-diffusion-webui\outputs\img2img-images\text2video-modelscope. Going there the first time crashes Explorer, but after that the frames are in the folder.
Same here
Traceback (most recent call last):
File "F:\UI\modules\call_queue.py", line 56, in f
res = list(func(*args, **kwargs))
File "F:\UI\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "F:\UI\extensions\sd-webui-modelscope-text2video\scripts\modelscope-text2vid.py", line 52, in process
lowvram.setup_for_low_vram(sd_model, cmd_opts.medvram)
File "F:\UI\modules\lowvram.py", line 42, in setup_for_low_vram
first_stage_model = sd_model.first_stage_model
AttributeError: 'NoneType' object has no attribute 'first_stage_model'
Traceback (most recent call last):
File "F:\UI\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
output = await app.get_blocks().process_api(
File "F:\UI\venv\lib\site-packages\gradio\blocks.py", line 1018, in process_api
data = self.postprocess_data(fn_index, result["prediction"], state)
File "F:\UI\venv\lib\site-packages\gradio\blocks.py", line 956, in postprocess_data
prediction_value = block.postprocess(prediction_value)
File "F:\UI\venv\lib\site-packages\gradio\components.py", line 1860, in postprocess
returned_format = y.split(".")[-1].lower()
AttributeError: 'tuple' object has no attribute 'split'
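The first traceback above suggests the webui's sd_model was None when lowvram.setup_for_low_vram ran (e.g. no checkpoint loaded at that point), since that function dereferences sd_model.first_stage_model directly. A hedged sketch of the kind of guard that would surface a clearer error (the function name and message here are illustrative, not the actual webui or extension code):

```python
def setup_for_low_vram_safely(sd_model, use_medvram: bool):
    # setup_for_low_vram accesses sd_model.first_stage_model,
    # so calling it with sd_model=None raises the cryptic
    # AttributeError seen above; fail early with a clear message.
    if sd_model is None:
        raise RuntimeError(
            "No Stable Diffusion checkpoint is loaded; "
            "cannot set up low-VRAM mode."
        )
    first_stage_model = sd_model.first_stage_model
    return first_stage_model
```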
@throttlekitty @saphtea @matteo101man @Cubey42 @Devalinor most of those issues should be fixed now with https://github.com/deforum-art/sd-webui-modelscope-text2video/pull/9. Please, update your extension to the latest version and check it.
And you won't have that cryptic tuple issue anymore 🙃. The video will be shown instead.
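For anyone curious about that tuple error: gradio's video component postprocesses its return value with y.split(".")[-1] to derive the file extension, so the handler must return a filepath string (or None), not a tuple. A simplified stand-in for that step (not gradio's actual code, just the failure mode):

```python
def postprocess(y):
    # Simplified stand-in for gradio's Video postprocess():
    # it derives the file extension from the returned value,
    # so anything that isn't a string (or None) raises
    # AttributeError: 'tuple' object has no attribute 'split'.
    if y is None:
        return None
    return y.split(".")[-1].lower()
```

So the fix on the extension side amounts to returning the output path as a plain string.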
It works, thanks! Just letting you know this runs on torch2 as well.
I get this on pressing generate, and I'm not certain if it's because I'm currently running torch2.
Arguments: ('test1', '', 20, 24, 7, 256, 256, 0.0, False) {}
Traceback (most recent call last):
  File "C:\stable-diffusion\a1-sd-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "C:\stable-diffusion\a1-sd-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "C:\stable-diffusion\a1-sd-webui\extensions\sd-webui-modelscope-text2video\scripts\modelscope-text2vid.py", line 52, in process
    lowvram.setup_for_low_vram(sd_model, cmd_opts.medvram)
  File "C:\stable-diffusion\a1-sd-webui\modules\lowvram.py", line 42, in setup_for_low_vram
    first_stage_model = sd_model.first_stage_model
AttributeError: 'NoneType' object has no attribute 'first_stage_model'
Traceback (most recent call last):
  File "C:\stable-diffusion\a1-sd-webui\venv\lib\site-packages\gradio\routes.py", line 337, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\stable-diffusion\a1-sd-webui\venv\lib\site-packages\gradio\blocks.py", line 1018, in process_api
    data = self.postprocess_data(fn_index, result["prediction"], state)
  File "C:\stable-diffusion\a1-sd-webui\venv\lib\site-packages\gradio\blocks.py", line 956, in postprocess_data
    prediction_value = block.postprocess(prediction_value)
  File "C:\stable-diffusion\a1-sd-webui\venv\lib\site-packages\gradio\components.py", line 1860, in postprocess
    returned_format = y.split(".")[-1].lower()
AttributeError: 'tuple' object has no attribute 'split'