if-ai closed this issue 1 year ago.
Okay, sorry, it seems the next time I use it, it has to be in sequential offload mode again. The stage has picked up again.
Okay, stage 2 passed, but here is the report of the final crash:

venv "E:\Radiata\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Commit hash: 57edbd76d610efd03c79ff1c2eb1e5c638458e63
Installing requirements
[WARNING] A matching Triton is not available, some optimizations will not be enabled. Error caught was: No module named 'triton'
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
[INFO] HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
Downloading shards: 100%|███████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 999.48it/s]
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████| 2/2 [00:31<00:00, 15.91s/it]
watermarker\diffusion_pytorch_model.safetensors not found
The config attributes {'lambda_min_clipped': -5.1} were passed to DDPMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [00:10<00:00, 4.79it/s]
[INFO] HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
[INFO] HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
[INFO] HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
[INFO] HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
watermarker\diffusion_pytorch_model.safetensors not found
The config attributes {'lambda_min_clipped': -5.1} were passed to DDPMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
text_config_dict is provided which will be used to initialize CLIPTextConfig. The value text_config["id2label"] will be overriden.
The config attributes {'encoder_hid_dim_type': 'text_proj'} were passed to UNet2DConditionModel, but are not expected and will be ignored. Please verify your config.json configuration file.
100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [00:23<00:00, 2.12it/s]
[INFO] HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
[INFO] HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
[INFO] HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
[INFO] HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 200 OK"
text_encoder\model.safetensors not found
100%|██████████████████████████████████████████████████████████████████████████████████| 50/50 [00:28<00:00, 1.77it/s]
Traceback (most recent call last):
File "E:\Radiata\venv\lib\site-packages\gradio\routes.py", line 422, in run_predict
output = await app.get_blocks().process_api(
File "E:\Radiata\venv\lib\site-packages\gradio\blocks.py", line 1323, in process_api
result = await self.call_function(
File "E:\Radiata\venv\lib\site-packages\gradio\blocks.py", line 1067, in call_function
prediction = await utils.async_iteration(iterator)
File "E:\Radiata\venv\lib\site-packages\gradio\utils.py", line 336, in async_iteration
return await iterator.__anext__()
File "E:\Radiata\venv\lib\site-packages\gradio\utils.py", line 329, in __anext__
return await anyio.to_thread.run_sync(
File "E:\Radiata\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "E:\Radiata\venv\lib\site-packages\anyio_backends_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
File "E:\Radiata\venv\lib\site-packages\anyio_backends_asyncio.py", line 807, in run
result = context.run(func, args)
File "E:\Radiata\venv\lib\site-packages\gradio\utils.py", line 312, in run_sync_iterator_async
return next(iterator)
File "E:\Radiata\modules\tabs\deepfloyd_if.py", line 61, in generate_image
for data in fn(
File "E:\Radiata\modules\diffusion\pipelines\deepfloyd_if.py", line 243, in stage_III
images = self.IF_III(
File "E:\Radiata\venv\lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
return func(args, kwargs)
File "E:\Radiata\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_upscale.py", line 716, in call
image = self.decode_latents(latents)
File "E:\Radiata\venv\lib\site-packages\diffusers\pipelines\stable_diffusion\pipeline_stable_diffusion_upscale.py", line 376, in decode_latents
image = self.vae.decode(latents).sample
File "E:\Radiata\venv\lib\site-packages\diffusers\utils\accelerate_utils.py", line 46, in wrapper
return method(self, *args, **kwargs)
File "E:\Radiata\venv\lib\site-packages\diffusers\models\autoencoder_kl.py", line 191, in decode
decoded = self._decode(z).sample
File "E:\Radiata\venv\lib\site-packages\diffusers\models\autoencoder_kl.py", line 178, in _decode
dec = self.decoder(z)
File "E:\Radiata\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Radiata\venv\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "E:\Radiata\venv\lib\site-packages\diffusers\models\vae.py", line 233, in forward
sample = self.mid_block(sample)
File "E:\Radiata\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Radiata\venv\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "E:\Radiata\venv\lib\site-packages\diffusers\models\unet_2d_blocks.py", line 463, in forward
hidden_states = attn(hidden_states)
File "E:\Radiata\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "E:\Radiata\venv\lib\site-packages\accelerate\hooks.py", line 165, in new_forward
output = old_forward(*args, **kwargs)
File "E:\Radiata\venv\lib\site-packages\diffusers\models\attention.py", line 162, in forward
hidden_states = F.scaled_dot_product_attention(
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 16.00 GiB (GPU 0; 24.00 GiB total capacity; 18.87 GiB already allocated; 2.41 GiB free; 19.11 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
[INFO] HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error"
[INFO] HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
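The out-of-memory error above is raised during the stage III VAE decode, and the error message itself points at two mitigations. A minimal sketch of both, assuming the stage III pipeline is diffusers' StableDiffusionUpscalePipeline (as the traceback suggests) and that the allocator setting can be applied before the webui initializes CUDA; the exact integration point inside Radiata is an assumption:

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read when the CUDA caching allocator is first
# used, so it must be set before any CUDA allocation (e.g. before launch).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

import torch
from diffusers import StableDiffusionUpscalePipeline

# Hypothetical stand-in for how Radiata constructs its stage III pipeline.
pipe = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",
    torch_dtype=torch.float16,
).to("cuda")

# Decode the upscaled latents tile by tile instead of in one pass; the
# failed 16 GiB allocation happens in the VAE decoder's attention block,
# and tiling shrinks the attention sequence length per decode call.
pipe.vae.enable_tiling()
```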
Hey, I forgot to say, but I made a little tutorial on how to install and use Radiata TensorRT on Windows. I am waiting for the IF implementation to be fixed before I make the IF part. So far I love the webui, it is super useful. Hope IF gets sorted out, thanks: https://youtu.be/CPQLM4D8B2A
Sorry for leaving this for a while. I have given up on the DeepFloyd IF implementation; we are moving towards SDXL support instead.
Describe the bug
The IF implementation downloaded all the models correctly for each stage, but when I chose auto offload the process never finished, so I interrupted it manually. Now the blob symlinks are unusable: there are hash names instead of safetensors files. When I try to run again in auto mode there is no way to reach the files, since now there are only a bunch of blobs.
https://drive.google.com/file/d/1bcvX_Mwgshd56OnT2RTS_N4qOWTPD1X8/view https://drive.google.com/file/d/1spHkUM7a6BtAO4ifQZU4P1ycQXSiIAil/view https://drive.google.com/file/d/1a2paDwVbirb2r1-5ut829XsOPZPGNdOB/view
Did I lose 30 GB of files?
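For what it's worth, hash-named files are how the Hugging Face Hub cache normally stores downloads: the weights live as content-addressed blobs under blobs/, and snapshots/ holds symlinks with the human-readable names, so the data itself is likely intact and a re-run of the download should reuse it. A minimal sketch to inspect the cache, assuming the default cache location and a recent huggingface_hub:

```python
from huggingface_hub import scan_cache_dir

# Scans the default cache (~/.cache/huggingface/hub) and reports what is
# stored on disk, without downloading anything.
cache = scan_cache_dir()
print(f"Cache size on disk: {cache.size_on_disk / 1e9:.1f} GB")
for repo in cache.repos:
    print(repo.repo_id, f"{repo.size_on_disk / 1e9:.1f} GB")
```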
Reproduction
Input a prompt with sequential offload on the IF model; stage 3 fails. A sketch of the failing mode is shown below.
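For reference, a minimal sketch of what sequential offload looks like at the diffusers level; the model id and the wiring inside Radiata are assumptions:

```python
import torch
from diffusers import StableDiffusionUpscalePipeline

# Hypothetical stand-in for Radiata's stage III setup.
stage_3 = StableDiffusionUpscalePipeline.from_pretrained(
    "stabilityai/stable-diffusion-x4-upscaler",
    torch_dtype=torch.float16,
)

# Keeps submodules on the CPU and moves them to the GPU one at a time;
# the slowest but lowest-VRAM offload mode in diffusers.
stage_3.enable_sequential_cpu_offload()
```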
Expected behavior
Generates the final output image.
System Info
Python 3.10, Windows 10, RTX 3090
Additional context
No response