Vargol opened 4 weeks ago
I can confirm this exact issue on a different system:
{
"accelerate": "0.30.1",
"compel": "2.0.2",
"cuda": null,
"diffusers": "0.27.2",
"numpy": "1.26.4",
"opencv": "4.9.0.80",
"onnx": "1.16.1",
"pillow": "11.0.0",
"python": "3.10.12",
"torch": "2.4.1+rocm6.1",
"torchvision": "0.19.1+rocm6.1",
"transformers": "4.41.1",
"xformers": null
}
Additional context: I manually installed AMD's fork of bitsandbytes with ROCm support because the version installed by the installer script would throw an exception about not detecting a CUDA device.
Does anyone have recommendations / workarounds for this? Sorry, I'm a newbie and can't read between the lines very well.
Assuming you have a modern GPU or Apple Silicon, add the line
precision: bfloat16
to your invoke.yaml file, or change the existing line if there is one. Then start InvokeAI, or restart it if it was already running.
This will have a very minor effect on your renders, which only matters if 100% reproducibility with images generated before the change is important to you.
For example, the yaml for the environment I tested this in looks like this:
# Internal metadata - do not edit:
schema_version: 4.0.2
# Put user settings here - see https://invoke-ai.github.io/InvokeAI/features/CONFIGURATION/:
outputs_dir: /Users/Vargol/invokeai/outputs
vram: 1
device: mps
precision: bfloat16
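For anyone wondering why bfloat16 behaves differently from float16 despite both being 16-bit formats: bfloat16 keeps float32's 8-bit exponent and gives up mantissa precision, so it has a vastly larger representable range. A quick sketch of the difference (emulating bfloat16 with stdlib struct, since NumPy doesn't provide it; the value 70000 is just an illustrative out-of-range number):

```python
import struct

import numpy as np

def to_bf16(x: float) -> float:
    # Emulate bfloat16 by keeping only the top 16 bits of the float32
    # encoding (sign + 8-bit exponent + 7-bit mantissa), truncated.
    (bits,) = struct.unpack("<I", struct.pack("<f", x))
    return struct.unpack("<f", struct.pack("<I", bits & 0xFFFF0000))[0]

big = 70000.0                    # larger than float16's max of 65504
print(np.finfo(np.float16).max)  # 65504.0
print(np.float16(big))           # inf -- float16 overflows
print(to_bf16(big))              # 69632.0 -- bfloat16 keeps the magnitude,
                                 # just with coarser precision
```

So a large intermediate activation that overflows to inf under float16 stays finite under bfloat16, which is consistent with the precision change fixing the render.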
Is there an existing issue for this problem?
Operating system
macOS
GPU vendor
Apple Silicon (MPS)
GPU model
M3 (base revision, 10-core GPU)
GPU VRAM
24 GB
Version number
v5.3.0
Browser
Safari Version 18.0.1 (20619.1.26.31.7)
Python dependencies
{
"accelerate": "0.30.1",
"compel": "2.0.2",
"cuda": null,
"diffusers": "0.27.2",
"numpy": "1.26.4",
"opencv": "4.9.0.80",
"onnx": "1.16.1",
"pillow": "11.0.0",
"python": "3.11.10",
"torch": "2.4.1",
"torchvision": "0.19.1",
"transformers": "4.41.1",
"xformers": null
}
What happened
Rendered a quick Flux schnell image with InvokeAI's precision detection resolving to float16. Generation proceeded as normal and the previews showed up just fine, but the final image was black. The same happens if the precision is not set in invokeai.yaml. If the precision is set to bfloat16, I get a successful render.
What you expected to happen
I render a Flux-based image and get a final image that isn't all black.
How to reproduce the problem
Set the precision to float16 (or leave it unset) and start the app (this might only be an issue on a macOS Apple Silicon system). Choose a Flux model and typical settings for Flux, press the Invoke button, and wait for the image to be rendered.
Additional context
The black images were initially brought up on Discord by another user; I've replicated the problem by changing my setting from bfloat16 to float16. I have no CUDA system to test on, so it may be an MPS-only issue, but as VAEs not working with float16 is a recurring theme, it probably isn't.
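To make the black-final-image failure mode concrete: once one activation overflows to inf in float16, subsequent arithmetic produces NaNs that poison the rest of the decode, and a NaN tensor sanitized for saving comes out as zeros, i.e. an all-black image. A minimal NumPy sketch of this chain (the 70000 "activation" and the three-pixel tensor are made up for illustration, not taken from the actual VAE):

```python
import numpy as np

# A hypothetical VAE activation that exceeds float16's max (65504).
act = np.float16(70000.0)
print(act)             # inf -- overflowed on conversion

# An inf - inf step (e.g. inside a normalization) yields NaN...
nan = act - act
print(nan)             # nan

# ...and NaN poisons everything it touches downstream.
pixels = np.float16([0.2, 0.5, 0.8]) * nan
print(pixels)          # [nan nan nan]

# np.clip leaves NaN untouched, so writers that zero out NaNs
# before casting to uint8 emit an all-black result.
image = np.nan_to_num(np.clip(pixels, 0.0, 1.0))
print(image)           # [0. 0. 0.]
```

The previews looking fine while only the final image goes black would fit this picture if the preview path decodes at a different precision than the final VAE pass, though that part is speculation.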
Discord username
No response