Closed: lstein closed this issue 1 year ago.
Do you mean that the images written to disk are black, or that they display as black in the UI only?
If they display as black in the UI, but are normal on disk, does refreshing get them to show up correctly?
If on the UI, is this for the larger image display areas, or the gallery?
The images written to disk are black. This appears to have been sporadic. I haven't been able to reproduce the behavior since last night.
Can you upload one of the images here? Maybe there are some clues in the image.
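For anyone triaging this: a quick way to check whether a saved PNG is genuinely all-black on disk, as opposed to only displaying black in the UI, is to inspect its pixel extrema. A minimal sketch with Pillow; the filename is hypothetical and should point at one of the suspect outputs:

from PIL import Image

# Hypothetical path; substitute one of the suspect output files.
img = Image.open("outputs/000012.907876724.png").convert("RGB")
extrema = img.getextrema()  # per-band (min, max) pixel values
print(extrema)
# A genuinely all-black image reports ((0, 0), (0, 0), (0, 0)).
if all(hi == 0 for _, hi in extrema):
    print("all-black on disk")
else:
    print("non-zero pixels present; likely a display-side problem")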
The system I experienced the black images with crashed a couple of hours after I experienced the behavior, so I suspect it was a hardware problem. As soon as it comes back up, I'll retrieve a couple of examples and post them.
Here's a typical example:
Hello, I've just installed InvokeAI for the first time and I have the same issue (black images on disk). I didn't try the CLI, though.
@Piioupiou Sorry to hear that! Is it every image or just some?
Can you please copy and paste the output from your terminal, starting from "image generation requested" for an image that turned out black?
@psychedelicious It's all images; here is an example (in the CLI):
./invoke.sh -s 10 --no-nsfw_checker --no_restore --no_upscale --precision auto
Do you want to generate images using the
1. command-line
2. browser-based UI
3. run textual inversion training
4. merge models (diffusers type only)
5. open the developer console
6. re-run the configure script to download new models
7. command-line help
Please enter 1, 2, 3, 4, 5, 6 or 7: [2] 1
Starting the InvokeAI command-line...
* Initializing, be patient...
>> Initialization file /home/piou/Documents/InvokeAI-Installer/invokeai.init found. Loading...
>> Internet connectivity is True
>> InvokeAI, version 2.3.0
>> InvokeAI runtime directory is "/home/piou/Documents/InvokeAI-Installer"
>> Face restoration and upscaling disabled
>> Using device_type cuda
>> xformers memory-efficient attention is available and enabled
>> Current VRAM usage: 0.00G
>> Loading diffusers model from runwayml/stable-diffusion-v1-5
| Using faster float16 precision
| Loading diffusers VAE from stabilityai/sd-vae-ft-mse
Fetching 15 files: 100%|████████████████████| 15/15 [00:00<00:00, 283398.92it/s]
| Default image dimensions = 512 x 512
>> Model loaded in 7.43s
>> Max VRAM used to load the model: 2.16G
>> Current VRAM usage:2.16G
>> Textual inversions available:
>> Setting Sampler to k_lms (LMSDiscreteScheduler)
* Initialization done! Awaiting your command (-h for help, 'q' to quit)
(stable-diffusion-1.5) invoke> cat
>> Patchmatch initialized
100%|███████████████████████████████████████████| 10/10 [00:39<00:00, 3.95s/it]
Generating: 100%|█████████████████████████████████| 1/1 [00:44<00:00, 44.11s/it]
>> Usage stats:
>> 1 image(s) generated in 45.10s
>> Max VRAM used for this generation: 2.80G. Current VRAM utilization: 2.16G
>> Max VRAM used since script start: 2.80G
Outputs:
[17] /home/piou/Documents/InvokeAI-Installer/outputs/000012.907876724.png: cat -s 10 -S 907876724 -W 512 -H 512 -C 7.5 -A k_lms
And the output:
I've tried with float32 and got an error:
./invoke.sh -s 10 --no-nsfw_checker --no_restore --no_upscale --precision=float32
* Initializing, be patient...
>> Initialization file /home/piou/Documents/InvokeAI-Installer/invokeai.init found. Loading...
>> Internet connectivity is True
>> InvokeAI, version 2.3.0
>> InvokeAI runtime directory is "/home/piou/Documents/InvokeAI-Installer"
>> Face restoration and upscaling disabled
>> Using device_type cuda
>> xformers memory-efficient attention is available and enabled
>> Current VRAM usage: 0.00G
>> Loading diffusers model from runwayml/stable-diffusion-v1-5
| Using more accurate float32 precision
| Loading diffusers VAE from stabilityai/sd-vae-ft-mse
| Using more accurate float32 precision
Fetching 15 files: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 322638.77it/s]
** model stable-diffusion-1.5 could not be loaded: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 3.81 GiB total capacity; 3.23 GiB already allocated; 4.44 MiB free; 3.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/generate.py", line 889, in set_model
model_data = cache.get_model(model_name)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 106, in get_model
requested_model, width, height, hash = self._load_model(model_name)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 341, in _load_model
model, width, height, model_hash = self._load_diffusers_model(mconfig)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 532, in _load_diffusers_model
pipeline.to(self.device)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 322, in to
module.to(torch_device)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 989, in to
return self._apply(convert)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
param_applied = fn(param)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 3.81 GiB total capacity; 3.23 GiB already allocated; 4.44 MiB free; 3.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
** trying to reload previous model
>> Loading diffusers model from runwayml/stable-diffusion-v1-5
| Using more accurate float32 precision
| Loading diffusers VAE from stabilityai/sd-vae-ft-mse
| Using more accurate float32 precision
Fetching 15 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 42196.22it/s]
** An error occurred while attempting to initialize the model: "CUDA out of memory. Tried to allocate 146.00 MiB (GPU 0; 3.81 GiB total capacity; 3.23 GiB already allocated; 4.44 MiB free; 3.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF"
** This can be caused by a missing or corrupted models file, and can sometimes be fixed by (re)installing the models.
Do you want to run invokeai-configure script to select and/or reinstall models? [y] n
* Initialization done! Awaiting your command (-h for help, 'q' to quit)
(stable-diffusion-1.5) invoke> cat
>> Patchmatch initialized
>> Current VRAM usage: 3.47G
>> Loading diffusers model from runwayml/stable-diffusion-v1-5
| Using more accurate float32 precision
| Loading diffusers VAE from stabilityai/sd-vae-ft-mse
| Using more accurate float32 precision
Fetching 15 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 68015.74it/s]
** model stable-diffusion-1.5 could not be loaded: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 3.81 GiB total capacity; 3.23 GiB already allocated; 4.44 MiB free; 3.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/generate.py", line 889, in set_model
model_data = cache.get_model(model_name)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 106, in get_model
requested_model, width, height, hash = self._load_model(model_name)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 341, in _load_model
model, width, height, model_hash = self._load_diffusers_model(mconfig)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 532, in _load_diffusers_model
pipeline.to(self.device)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 322, in to
module.to(torch_device)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 989, in to
return self._apply(convert)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
param_applied = fn(param)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 3.81 GiB total capacity; 3.23 GiB already allocated; 4.44 MiB free; 3.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
** trying to reload previous model
>> Loading diffusers model from runwayml/stable-diffusion-v1-5
| Using more accurate float32 precision
| Loading diffusers VAE from stabilityai/sd-vae-ft-mse
| Using more accurate float32 precision
Fetching 15 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 37161.58it/s]
>> An error occurred:
Traceback (most recent call last):
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/generate.py", line 889, in set_model
model_data = cache.get_model(model_name)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 106, in get_model
requested_model, width, height, hash = self._load_model(model_name)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 341, in _load_model
model, width, height, model_hash = self._load_diffusers_model(mconfig)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 532, in _load_diffusers_model
pipeline.to(self.device)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 322, in to
module.to(torch_device)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 989, in to
return self._apply(convert)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
[Previous line repeated 2 more times]
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
param_applied = fn(param)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 MiB (GPU 0; 3.81 GiB total capacity; 3.23 GiB already allocated; 4.44 MiB free; 3.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/CLI.py", line 164, in main
main_loop(gen, opt)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/CLI.py", line 394, in main_loop
gen.prompt2image(
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/generate.py", line 412, in prompt2image
model = self.set_model(self.model_name)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/generate.py", line 896, in set_model
model_data = cache.get_model(previous_model_name) # load previous
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 106, in get_model
requested_model, width, height, hash = self._load_model(model_name)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 341, in _load_model
model, width, height, model_hash = self._load_diffusers_model(mconfig)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/ldm/invoke/model_manager.py", line 532, in _load_diffusers_model
pipeline.to(self.device)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/diffusers/pipelines/pipeline_utils.py", line 322, in to
module.to(torch_device)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1749, in to
return super().to(*args, **kwargs)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 989, in to
return self._apply(convert)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
module._apply(fn)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
param_applied = fn(param)
File "/home/piou/Documents/InvokeAI-Installer/.venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in convert
return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 146.00 MiB (GPU 0; 3.81 GiB total capacity; 3.23 GiB already allocated; 4.44 MiB free; 3.29 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
When asked:
Do you want to run invokeai-configure script to select and/or reinstall models?
I tried yes and then the recommended models, but it still throws the same error, and then it asks again to select/reinstall models.
Thank you for your help
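As an aside, the OOM message itself points at one mitigation that may be worth trying before reinstalling models: PYTORCH_CUDA_ALLOC_CONF is a standard PyTorch environment variable that controls allocator fragmentation. The value below is only an example, and on a 4GB card fp32 may simply not fit regardless:

PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 ./invoke.sh -s 10 --no-nsfw_checker --no_restore --no_upscale --precision=float32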
@Piioupiou Looks like you have 4GB VRAM. Is that accurate? What OS and video card?
Yes:
sudo lshw -C video
Place your right index finger on the fingerprint reader
*-display
description: VGA compatible controller
product: TigerLake-LP GT2 [Iris Xe Graphics]
vendor: Intel Corporation
physical id: 2
bus info: pci@0000:00:02.0
logical name: /dev/fb0
version: 01
width: 64 bits
clock: 33MHz
capabilities: pciexpress msi pm vga_controller bus_master cap_list rom fb
configuration: depth=32 driver=i915 latency=0 mode=1920x1080 resolution=1920,1080 visual=truecolor xres=1920 yres=1080
resources: iomemory:600-5ff iomemory:400-3ff irq:187 memory:6052000000-6052ffffff memory:4000000000-400fffffff ioport:4000(size=64) memory:c0000-dffff memory:4010000000-4016ffffff memory:4020000000-40ffffffff
*-display
description: 3D controller
product: TU117GLM [Quadro T500 Mobile]
vendor: NVIDIA Corporation
physical id: 0
bus info: pci@0000:01:00.0
version: a1
width: 64 bits
clock: 33MHz
capabilities: pm msi pciexpress bus_master cap_list rom
configuration: driver=nvidia latency=0
resources: iomemory:600-5ff iomemory:600-5ff irq:188 memory:bd000000-bdffffff memory:6040000000-604fffffff memory:6050000000-6051ffffff ioport:3000(size=128)
nvidia-smi
Sun Feb 26 22:45:11 2023
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 525.85.12 Driver Version: 525.85.12 CUDA Version: 12.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA T500 On | 00000000:01:00.0 Off | N/A |
| N/A 47C P0 N/A / N/A | 5MiB / 4096MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 2170 G /usr/lib/xorg/Xorg 4MiB |
+-----------------------------------------------------------------------------+
At 4GB you are right on the edge of what our implementation needs (probably below it; I'm sorry, I can't remember the exact memory characteristics at this point). When you forced fp32 you hit that limit and got an OOM error, which is expected.
Not clear what the issue with fp16 is. Looks like your card supports it per my brief search.
Could be xformers. There's a CLI flag for the invokeai command to disable it, iirc --no_xformers. Does that change anything (with fp16)?
Ok, good to know, thanks!
--no_xformers is not recognized:
./invoke.sh -s 10 --no-nsfw_checker --no_restore --no_upscale --no_xformers
Do you want to generate images using the
1. command-line
2. browser-based UI
3. run textual inversion training
4. merge models (diffusers type only)
5. open the developer console
6. re-run the configure script to download new models
7. command-line help
Please enter 1, 2, 3, 4, 5, 6 or 7: [2] 1
Starting the InvokeAI command-line...
usage: invokeai [-h] [--laion400m LAION400M] [--weights WEIGHTS] [--version] [--root_dir ROOT_DIR] [--config CONF] [--model MODEL] [--weight_dirs WEIGHT_DIRS [WEIGHT_DIRS ...]]
[--png_compression {0,1,2,3,4,5,6,7,8}] [-F] [--max_loaded_models MAX_LOADED_MODELS] [--free_gpu_mem] [--xformers | --no-xformers] [--always_use_cpu]
[--precision PRECISION] [--ckpt_convert | --no-ckpt_convert] [--internet | --no-internet]
[--nsfw_checker | --no-nsfw_checker | --safety_checker | --no-safety_checker] [--autoconvert AUTOCONVERT] [--patchmatch | --no-patchmatch] [--from_file INFILE]
[--outdir OUTDIR] [--prompt_as_dir] [--fnformat FNFORMAT] [-s STEPS] [-W WIDTH] [-H HEIGHT] [-C CFG_SCALE] [--sampler SAMPLER_NAME] [--log_tokenization]
[-f STRENGTH] [-T | -fit | --fit | --no-fit] [--grid | --no-grid | -g] [--embedding_directory EMBEDDING_PATH] [--embeddings | --no-embeddings]
[--enable_image_debugging] [--karras_max KARRAS_MAX] [--no_restore] [--no_upscale] [--esrgan_bg_tile ESRGAN_BG_TILE] [--gfpgan_model_path GFPGAN_MODEL_PATH] [--web]
[--web_develop] [--web_verbose] [--cors [CORS ...]] [--host HOST] [--port PORT] [--certfile CERTFILE] [--keyfile KEYFILE] [--gui]
invokeai: error: unrecognized arguments: --no_xformers
@Piioupiou Sorry, I'm AFK and wasn't sure. The correct flag appears to be --no-xformers, from the help text in the error you provided. Can you try that?
Sorry, I was in a rush and didn't look it up. Thanks for your help! Sadly, it didn't work either:
./invoke.sh -s 10 --no-nsfw_checker --no_restore --no_upscale --no-xformers
Do you want to generate images using the
1. command-line
2. browser-based UI
3. run textual inversion training
4. merge models (diffusers type only)
5. open the developer console
6. re-run the configure script to download new models
7. command-line help
Please enter 1, 2, 3, 4, 5, 6 or 7: [2] 1
Starting the InvokeAI command-line...
* Initializing, be patient...
>> Initialization file /home/piou/Documents/InvokeAI-Installer/invokeai.init found. Loading...
>> Internet connectivity is True
>> InvokeAI, version 2.3.0
>> InvokeAI runtime directory is "/home/piou/Documents/InvokeAI-Installer"
>> Face restoration and upscaling disabled
>> Using device_type cuda
>> xformers memory-efficient attention is available but disabled
>> Current VRAM usage: 0.00G
>> Loading diffusers model from runwayml/stable-diffusion-v1-5
| Using faster float16 precision
| Loading diffusers VAE from stabilityai/sd-vae-ft-mse
Fetching 15 files: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 295373.52it/s]
| Default image dimensions = 512 x 512
>> Model loaded in 6.30s
>> Max VRAM used to load the model: 2.16G
>> Current VRAM usage:2.16G
>> Textual inversions available:
>> Setting Sampler to k_lms (LMSDiscreteScheduler)
* Initialization done! Awaiting your command (-h for help, 'q' to quit)
(stable-diffusion-1.5) invoke> cat
>> Patchmatch initialized
100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 10/10 [00:46<00:00, 4.64s/it]
Generating: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:51<00:00, 51.13s/it]
>> Usage stats:
>> 1 image(s) generated in 52.09s
>> Max VRAM used for this generation: 2.80G. Current VRAM utilization: 2.16G
>> Max VRAM used since script start: 2.80G
Outputs:
[21] /home/piou/Documents/InvokeAI-Installer/outputs/000014.1904574997.png: cat -s 10 -S 1904574997 -W 512 -H 512 -C 7.5 -A k_lms
Producing:
@Piioupiou Well, shoot. I'm not sure what else to try. Have you successfully generated images using another SD project on this machine?
It's the first project I've ever tried, so nope :/ I'll try it on my desktop with a 1080 Ti (Windows, though); maybe I'll have more luck. Thank you for your time!
Sorry we couldn't help more in this case. Maybe the AUTOMATIC1111 project will work on this machine - if you try that out, please let us know if it works, because that indicates where the issue may be. Thanks for troubleshooting.
Regarding this GH issue, I suspect we have two different problem modes resulting in the same black output. When something goes wrong, we often get a black image. But this has not been an issue since the early days.
Ok, thank you, I'll try it and get back to you!
There has been no activity in this issue for 14 days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release.
@psychedelicious Hello, sorry I forgot to let you know ... AUTOMATIC1111 worked beautifully. I got some errors when I tried to generate big images (not enough VRAM); besides that, I managed to use it! Thanks for your help!
Hi, I have the exact same problem with the exact same graphics card as @Piioupiou. It works with other SD projects, but it seems impossible with InvokeAI...
There has been no activity in this issue for 14 days. If this issue is still being experienced, please reply with an updated confirmation that the issue is still being experienced with the latest release.
I have been experiencing what seems to be the same issue. I have generated ~75 images with no problems but suddenly all images generated are just black. Watching the diffusion in the web UI, I can see it goes from noise to all black on the second step (consistently). I quit and restart Invoke and it continues to only generate black squares, but restarting the computer got it to create images for a while. Unfortunately, after a few runs, it starts just making black squares again.
I'm using a Mac with an M2 Pro and 16 GB.
Not only does it generate all-black images, but apparently this one is nsfw... I generated 18 or so with this prompt and random seeds at -s35; I liked one, so I ran it at -s50 and it looked better; I ran it again at -s75 and got this. All the previous versions were NOT nsfw...
It seems like image generation works until I decide to go above -s50 or so, then (most of the time, but not all) I start getting just black. And once it starts giving me just black images, it never stops no matter what settings I use. Quitting and restarting invoke works sometimes, but about half the time even that doesn't break the all-black plague.
This issue is inactive, so if I don't get any action in a few days, I'm going to start a new thread.
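One thing worth ruling out in the Mac reports above: the NSFW checker intervenes on flagged images (depending on version it blurs or blanks them), so a safety-checker false positive can look a lot like this bug, and the -s75 run above did come back flagged. The earlier Linux runs in this thread already disabled it; the same launcher flag applies:

./invoke.sh --no-nsfw_checker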
@gogurtenjoyer see the last two comments from @minimart64 - does this issue ring a bell?
For the Mac-specific stuff, I'm not sure - it hasn't happened to me. HOWEVER, there's a fix that people can try, thanks to TimCabbage on Discord:
go to the diffusers model, then unet/config.json
at the bottom there's a
"upcast_attention": false
change it to
"upcast_attention": true
Just to reiterate: this is for the sporadic 'black image' issue, not the one on Mac that seems to happen every time(?), but maybe it's worth a shot there, too (despite Mac being FP32 always).
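If it's easier to script than to hand-edit, here is the same change as a minimal sketch; the model path is hypothetical and should point at whichever diffusers model directory your install actually loaded:

import json
from pathlib import Path

# Hypothetical location; adjust to the diffusers model your install uses.
cfg_path = Path("models/runwayml-stable-diffusion-v1-5/unet/config.json")
cfg = json.loads(cfg_path.read_text())
cfg["upcast_attention"] = True  # run attention in float32 to avoid fp16 overflow
cfg_path.write_text(json.dumps(cfg, indent=2))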
I've just installed InvokeAI 3 for the first time and tried SDXL - the image shows itself generating and then at the end suddenly turns black. I have a 16GB 4090 laptop GPU, with no indication of OOM at any point in Task Manager.
This only appears to be the case for SDXL, and not SD 1.5, for me.
Make sure to set the VAE decode to FP32, or else use the 'fixed' FP16 VAE that's available online (sorry, don't have a link).
I also got this issue. Is it already solved, or is there any solution? I'm using a 4080 and 16GB.
@MrFries1111 Yes, it is solved. As described elsewhere in this issue, make sure you select the FP16 VAE model, or just set VAE precision to FP32. The SDXL FP16 VAE is installable via the starter models tab in the model manager UI.
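For anyone hitting the same SDXL black-image behavior outside InvokeAI, the equivalent fix in plain diffusers looks roughly like the sketch below. This is not InvokeAI's code; it assumes the community fp16-safe SDXL VAE published on Hugging Face as madebyollin/sdxl-vae-fp16-fix, which appears to be the 'fixed' FP16 VAE referenced above:

import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# The stock SDXL VAE overflows in fp16 and decodes to black;
# this community VAE was finetuned to be fp16-safe.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")
pipe("cat", num_inference_steps=10).images[0].save("cat.png")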
Is there an existing issue for this?
OS
Linux
GPU
cuda
VRAM
24G
What happened?
@psychedelicious
When using the WebUI I'm starting to see images that have just completed generation suddenly turn black. I've tried to reproduce the behavior on the CLI with the same model, prompt, sampler and seed, but can't reproduce the issue. I'll drop an update if I discover that the problem is in the backend.
Screenshots
No response
Additional context
No response
Contact Details
No response