Open saogalaxy opened 4 months ago
Does it work with one of the official motion modules? Try these: https://huggingface.co/conrevo/AnimateDiff-A1111/tree/main
Edit: you might also try creating the JSON file described here: https://civitai.com/models/155528/improved-humans-motion-module?dialog=commentThread&commentId=267581
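If it helps, here is a minimal sketch (my own, not part of the extension) for pulling one of those official modules straight into the extension's model folder with the huggingface_hub package. The repo id and filename come from the links above; the local path is just the default install location and may differ on your machine.

# Illustrative sketch: download an official AnimateDiff motion module for A1111.
# Assumes `pip install huggingface_hub`; adjust model_dir to your own install path.
from pathlib import Path
from huggingface_hub import hf_hub_download

model_dir = Path(r"D:\stable-diffusion-webui\extensions\sd-webui-animatediff\model")
model_dir.mkdir(parents=True, exist_ok=True)

saved_path = hf_hub_download(
    repo_id="conrevo/AnimateDiff-A1111",
    filename="mm_sd15_v2.safetensors",
    local_dir=model_dir,
)
print("motion module saved to", saved_path)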
It's giving the same error with the official motion modules:
2024-06-13 08:59:42,549 - AnimateDiff - INFO - AnimateDiff process start.
2024-06-13 08:59:42,549 - AnimateDiff - INFO - Loading motion module mm_sd15_v2.safetensors from D:\stable-diffusion-webui\extensions\sd-webui-animatediff\model\mm_sd15_v2.safetensors
Calculating sha256 for D:\stable-diffusion-webui\extensions\sd-webui-animatediff\model\mm_sd15_v2.safetensors: d3976a7dcd11d4f5202d5360c6f1996fc669d6114c530afb3dca4769cc6dd2b2
2024-06-13 08:59:43,242 - AnimateDiff - INFO - Guessed mm_sd15_v2.safetensors architecture: MotionModuleType.AnimateDiffV2
2024-06-13 08:59:46,334 - AnimateDiff - INFO - Injecting motion module mm_sd15_v2.safetensors into SD1.5 UNet middle block.
2024-06-13 08:59:46,335 - AnimateDiff - INFO - Injecting motion module mm_sd15_v2.safetensors into SD1.5 UNet input blocks.
2024-06-13 08:59:46,335 - AnimateDiff - INFO - Injecting motion module mm_sd15_v2.safetensors into SD1.5 UNet output blocks.
2024-06-13 08:59:46,335 - AnimateDiff - INFO - Setting DDIM alpha.
2024-06-13 08:59:46,341 - AnimateDiff - INFO - Injection finished.
2024-06-13 08:59:46,341 - AnimateDiff - INFO - AnimateDiff + ControlNet will generate 8 frames.
0%| | 0/35 [00:00<?, ?it/s]
2024-06-13 08:59:47,677 - AnimateDiff - INFO - inner model forward hooked
C:\actions-runner_work\pytorch\pytorch\builder\windows\pytorch\aten\src\ATen\native\cuda\IndexKernel.cu:92: block: [773,0,0], thread: [32,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
[the same assertion is repeated for threads [33,0,0] through [63,0,0] of block [773,0,0]]
Exception in thread MemMon:
Traceback (most recent call last):
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "D:\stable-diffusion-webui\modules\memmon.py", line 53, in run
    free, total = self.cuda_mem_get_info()
  File "D:\stable-diffusion-webui\modules\memmon.py", line 34, in cuda_mem_get_info
    return torch.cuda.mem_get_info(index)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 663, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
0%| | 0/35 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(f28gm25ufrnm3yz)', <gradio.routes.Request object at 0x00000245B6391870>, 'TORCH_USE_CUDA_DSA to enable device-side assertions.
Traceback (most recent call last):
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
result = await self.call_function(
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "D:\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, args)
File "D:\stable-diffusion-webui\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
response = f(args, *kwargs)
File "D:\stable-diffusion-webui\modules\call_queue.py", line 95, in f
mem_stats = {k: -(v//-(10241024)) for k, v in shared.mem_mon.stop().items()}
File "D:\stable-diffusion-webui\modules\memmon.py", line 92, in stop
return self.read()
File "D:\stable-diffusion-webui\modules\memmon.py", line 77, in read
free, total = self.cuda_mem_get_info()
File "D:\stable-diffusion-webui\modules\memmon.py", line 34, in cuda_mem_get_info
return torch.cuda.mem_get_info(index)
File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\cuda\memory.py", line 663, in mem_get_info
return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA
to enable device-side assertions.
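For what it's worth, the root failure is the device-side assert from IndexKernel.cu above: that assert fires when an index tensor used by an indexing/gather kernel falls outside the size of the dimension being indexed, and once it fires the CUDA context is poisoned, which is why the later MemMon call to torch.cuda.mem_get_info also dies. Below is a tiny, purely illustrative Python repro of that class of error (nothing to do with webui itself), including the CUDA_LAUNCH_BLOCKING=1 hint from the message.

# Purely illustrative: trigger the same class of "index out of bounds" device-side assert
# that IndexKernel.cu reports, and show the CUDA_LAUNCH_BLOCKING hint from the error text.
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"   # set before CUDA init so kernels launch synchronously

import torch

t = torch.randn(24, device="cuda")               # valid indices are 0..23
idx = torch.tensor([0, 5, 32], device="cuda")    # 32 is out of range for this dimension
bad = t[idx]                                     # launches the indexing kernel that asserts
torch.cuda.synchronize()                         # without LAUNCH_BLOCKING the assert may surface only here or later

In the webui case the same environment variable can be set (e.g. in webui-user.bat) before launching, so the reported stack trace points at the actual offending call instead of a later, unrelated CUDA API call such as the MemMon one above.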
I tried with the official models after deleting the venv and letting it rebuild; now it's generating black squares with the NaN message.
Still not working with 1.9.4 using the official file and a rebuilt venv (deleted it and relaunched AUTOMATIC1111). Getting NaN / black output; moving over to other issues to see if there's a solution.
Is there an existing issue for this?
Have you read FAQ on README?
What happened?
Getting a CUDA error in AUTOMATIC1111 version 1.9.4. This is a fresh install.
Steps to reproduce the problem
Enable AnimateDiff, set the number of frames to 8, leave all other settings at their defaults, and select a motion module.
What should have happened?
AnimateDiff should run normally, as it did before the clean OS and version reinstall.
Commit where the problem happens
webui: 1.9.4 • extension: 1.9.4 • python: 3.10.11 • torch: 2.1.2+cu121 • xformers: 0.0.23.post1 • gradio: 3.41.2 • checkpoint: 16e6ae65a1
What browsers do you use to access the UI ?
Google Chrome
Command Line Arguments
Console logs
Additional information
No response