Open · enzyme69 opened this issue 1 year ago
Yes @enzyme69, it should work with MPS on macOS.
When you open the app, does it say mps
for the device in the "Optimizations" section?
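For anyone checking this themselves: the device the app picks should follow whether PyTorch's MPS backend is usable. A minimal sketch (the fallback logic here is mine, not the app's actual code; only the `torch.backends.mps` calls are the real PyTorch API), guarded so it also runs where torch isn't installed:

```python
# Sketch: detect whether the MPS backend is usable, falling back to CPU.
try:
    import torch
    use_mps = torch.backends.mps.is_available() and torch.backends.mps.is_built()
except (ImportError, AttributeError):  # torch missing, or too old for MPS
    use_mps = False

device = "mps" if use_mps else "cpu"
print(device)
```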
It does say MPS; however, I kept getting an error when running it. What could be the issue? I followed every step exactly.
Could you share the entire error log you get when generating?
I got a different error. This is when I use 256 x 256:
loc("varianceEps"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/97f6331a-ba75-11ed-a4bc-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<1x77x1xf16>' and 'tensor<1xf32>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
zsh: abort streamlit run app.py
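Worth noting: the shapes in that error, (1, 77, 1) and (1,), are perfectly broadcast-compatible under normal NumPy/PyTorch rules; what MPSGraph is really rejecting is the mixed f16/f32 dtypes. A pure-Python sketch of the standard broadcasting rule shows the shapes themselves are fine:

```python
from itertools import zip_longest

def broadcast_shape(a, b):
    """NumPy/PyTorch-style shape broadcasting: align trailing dimensions;
    a dimension of 1 stretches to match the other tensor's dimension."""
    out = []
    for x, y in zip_longest(reversed(a), reversed(b), fillvalue=1):
        if x != y and 1 not in (x, y):
            raise ValueError(f"shapes {a} and {b} are not broadcastable")
        out.append(max(x, y))
    return tuple(reversed(out))

# The two tensors from the error message broadcast fine shape-wise:
print(broadcast_shape((1, 77, 1), (1,)))  # -> (1, 77, 1)
```

So the failure is a dtype problem, not a shape problem, which is why forcing full fp32 precision (as the fix-mps branch later in the thread does) sidesteps it.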
This is when I tried to get 512 x 512, which is hard to hit exactly with only the slider:
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "/Users/jimmygunawan/text2video/ez-text2video/app.py", line 124, in <module>
    main()
  File "/Users/jimmygunawan/text2video/ez-text2video/app.py", line 102, in main
    raw_video = generate(
  File "/Users/jimmygunawan/text2video/ez-text2video/lib/generate.py", line 63, in generate
    pipeline = make_pipeline_generator(
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 194, in wrapper
    return cached_func(*args, **kwargs)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 223, in __call__
    return self._get_or_create_cached_value(args, kwargs)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 248, in _get_or_create_cached_value
    return self._handle_cache_miss(cache, value_key, func_args, func_kwargs)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/streamlit/runtime/caching/cache_utils.py", line 302, in _handle_cache_miss
    computed_value = self._info.func(*func_args, **func_kwargs)
  File "/Users/jimmygunawan/text2video/ez-text2video/lib/generate.py", line 45, in make_pipeline_generator
    pipeline.enable_sequential_cpu_offload()
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py", line 171, in enable_sequential_cpu_offload
    cpu_offload(cpu_offloaded_model, device)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/accelerate/big_modeling.py", line 182, in cpu_offload
    attach_align_device_hook(
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/accelerate/hooks.py", line 394, in attach_align_device_hook
    attach_align_device_hook(
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/accelerate/hooks.py", line 394, in attach_align_device_hook
    attach_align_device_hook(
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/accelerate/hooks.py", line 385, in attach_align_device_hook
    add_hook_to_module(module, hook, append=True)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/accelerate/hooks.py", line 155, in add_hook_to_module
    module = hook.init_hook(module)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/accelerate/hooks.py", line 270, in init_hook
    set_module_tensor_to_device(module, name, self.execution_device)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 147, in set_module_tensor_to_device
    new_value = old_value.to(device)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
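The last frames show accelerate's cpu_offload path ending up in torch.cuda, so enable_sequential_cpu_offload at that diffusers version evidently assumed a CUDA execution device. A hypothetical guard (the function name and structure are mine, not the app's code) would only enable offload on CUDA:

```python
def prepare_pipeline(pipeline, device: str):
    """Only enable sequential CPU offload on CUDA; on mps/cpu just move the
    pipeline. Sketch of a workaround for the AssertionError above."""
    if device == "cuda":
        pipeline.enable_sequential_cpu_offload()
    else:
        pipeline.to(device)
    return pipeline
```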
It seems like it's a bug in PyTorch. They mention here that it's been fixed in the development version of PyTorch.
You can try switching to PyTorch nightly in the t2v
conda environment:
conda uninstall pytorch
conda install pytorch -c pytorch-nightly
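To confirm the nightly actually got picked up: nightly builds carry a `.dev` date suffix in their version string. A quick check, guarded so it also runs where torch isn't installed:

```python
# Check whether the installed torch is a nightly build; nightlies report
# versions like "2.1.0.dev20230403".
try:
    import torch
    is_nightly = ".dev" in torch.__version__
except ImportError:  # torch not installed in this interpreter
    is_nightly = None

print(is_nightly)
```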
Following your instructions, with pytorch-nightly I am now getting a new error:
0%| | 0/50 [00:04<?, ?it/s]
2023-04-03 07:26:16.445 Uncaught app exception
Traceback (most recent call last):
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "/Users/jimmygunawan/text2video/ez-text2video/app.py", line 124, in <module>
    main()
  File "/Users/jimmygunawan/text2video/ez-text2video/app.py", line 102, in main
    raw_video = generate(
  File "/Users/jimmygunawan/text2video/ez-text2video/lib/generate.py", line 67, in generate
    video = pipeline(
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py", line 634, in __call__
    noise_pred = self.unet(
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/diffusers/models/unet_3d_condition.py", line 474, in forward
    sample, res_samples = downsample_block(
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/diffusers/models/unet_3d_blocks.py", line 373, in forward
    hidden_states = temp_conv(hidden_states, num_frames=num_frames)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/diffusers/models/resnet.py", line 829, in forward
    hidden_states = self.conv1(hidden_states)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/torch/nn/modules/container.py", line 217, in forward
    input = module(input)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 613, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/Users/jimmygunawan/miniconda3/envs/t2v/lib/python3.10/site-packages/torch/nn/modules/conv.py", line 608, in _conv_forward
    return F.conv3d(
RuntimeError: Conv3D is not supported on MPS
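One thing worth experimenting with for ops the MPS backend doesn't implement is PyTorch's CPU-fallback switch. It must be set before torch is imported, and fallen-back ops run slowly on CPU; whether it catches this particular Conv3D path depends on the PyTorch version, so treat it as an experiment, not a fix:

```python
import os

# PYTORCH_ENABLE_MPS_FALLBACK=1 tells PyTorch to run unsupported MPS ops on
# the CPU instead of raising. It is read once at torch import time, so set it
# before `import torch` (or export it in the shell before `streamlit run`).
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"
print(os.environ["PYTORCH_ENABLE_MPS_FALLBACK"])
```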
I tried the procedure above.
It's still crashing with this error message:
loc("varianceEps"("(mpsFileLoc): /AppleInternal/Library/BuildRoots/97f6331a-ba75-11ed-a4bc-863efbbaf80d/Library/Caches/com.apple.xbs/Sources/MetalPerformanceShadersGraph/mpsgraph/MetalPerformanceShadersGraph/Core/Files/MPSGraphUtilities.mm":228:0)): error: input types 'tensor<1x77x1xf16>' and 'tensor<1xf32>' are not broadcast compatible
LLVM ERROR: Failed to infer result type(s).
zsh: abort streamlit run app.py
I am experiencing the same issue here.
@Macavity77 did you try using the fix-mps branch?
I believe there are two separate PyTorch bugs to deal with here: the f16/f32 broadcast failure in MPSGraph, and the missing Conv3D support on MPS.
I switched the precision to fp32 in the fix-mps
branch, so it should work with PyTorch stable (2.0.0).
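The precision switch boils down to using fp32 whenever the device is mps, since fp16 is what triggers the MPSGraph broadcast error earlier in the thread. As a standalone sketch (the function name is mine; the real code would pass torch.float32 to the pipeline):

```python
def pick_dtype(device: str) -> str:
    # fp16 halves memory but trips the MPSGraph f16/f32 broadcast bug on mps,
    # so fall back to full precision everywhere except CUDA (mirrors the
    # fix-mps change as described above).
    return "float16" if device == "cuda" else "float32"

print(pick_dtype("mps"))   # -> float32
print(pick_dtype("cuda"))  # -> float16
```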
I'm using PyTorch stable (2.0.0) and the fix-mps branch, but still getting the Conv3D error (Mac M1).
@rudyhar you're getting the Conv3D error on PyTorch 2.0.0?
@kpthedev Yes, also on Python 3.10.9. Cheers
Can this repo work with MPS on macOS? I kept getting an error.