ddPn08 / Radiata

Stable diffusion webui based on diffusers.
https://ddpn08.github.io/Radiata/
Apache License 2.0

Weird torch error with IF #91

Closed · tildebyte closed this 1 year ago

tildebyte commented 1 year ago

Describe the bug

It's best to just paste the traceback:

Downloading shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 1998.72it/s]
Loading checkpoint shards: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:02<00:00,  1.36s/it]
Traceback (most recent call last):
  File "D:\IF\.venv\lib\site-packages\gradio\routes.py", line 399, in run_predict
    output = await app.get_blocks().process_api(
  File "D:\IF\.venv\lib\site-packages\gradio\blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "D:\IF\.venv\lib\site-packages\gradio\blocks.py", line 1036, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "D:\IF\.venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "D:\IF\.venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "D:\IF\.venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "D:\IF\.venv\lib\site-packages\gradio\utils.py", line 488, in async_iteration
    return next(iterator)
  File "D:\Programs\_GFX tools\Radiata\modules\tabs\deepfloyd_if.py", line 62, in generate_image
    for data in fn(
  File "D:\Programs\_GFX tools\Radiata\modules\diffusion\pipelines\deepfloyd_if.py", line 193, in stage_I
    prompt_embeds, negative_prompt_embeds = self._encode_prompt(
  File "D:\Programs\_GFX tools\Radiata\modules\diffusion\pipelines\deepfloyd_if.py", line 166, in _encode_prompt
    self.load_pipeline("I", "t5")
  File "D:\Programs\_GFX tools\Radiata\modules\diffusion\pipelines\deepfloyd_if.py", line 121, in load_pipeline
    ).to(self.device[0])
  File "D:\IF\.venv\lib\site-packages\transformers\modeling_utils.py", line 1896, in to
    return super().to(*args, **kwargs)
  File "D:\IF\.venv\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "D:\IF\.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\IF\.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "D:\IF\.venv\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 4 more times]
  File "D:\IF\.venv\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "D:\IF\.venv\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
[INFO] HTTP Request: POST http://127.0.0.1:7860/api/predict "HTTP/1.1 500 Internal Server Error"
[INFO] HTTP Request: POST http://127.0.0.1:7860/reset "HTTP/1.1 200 OK"
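
For context: this error usually means the model's weights were materialized on PyTorch's "meta" device (for example via accelerate's low-memory loading or a device_map), and meta tensors carry no data to copy, so a plain `.to(device)` fails. A minimal sketch that reproduces the failure outside of Radiata (requires torch >= 2.0 for the device context manager):

```python
import torch
import torch.nn as nn

# Modules built under the "meta" device get parameters with shape and
# dtype but no backing storage.
with torch.device("meta"):
    model = nn.Linear(4, 4)

try:
    # .to() copies parameter data, and meta tensors have none, so this
    # raises the same NotImplementedError as the traceback above.
    model.to("cpu")
except NotImplementedError as err:
    print(err)  # Cannot copy out of meta tensor; no data!

# to_empty() allocates fresh (uninitialized) storage on the target
# device instead of copying, which is the supported escape hatch.
model = model.to_empty(device="cpu")
```

If Radiata's `load_pipeline` loads the T5 encoder that way, calling `.to(self.device[0])` on it would hit exactly this.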

Reproduction

Try to run DeepFloyd IF Stage I inference with any prompt and any setting for "mode".

Expected behavior

Stage I generation succeeds without error

System Info

You should probably include a Python script to dump the desired info...
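
In the meantime, here's a minimal sketch of the kind of dump I mean (assumes nothing beyond a working torch install):

```python
import platform
import sys

import torch

# Interpreter, OS, and framework versions.
print(f"Python : {sys.version.split()[0]} on {platform.platform()}")
print(f"PyTorch: {torch.__version__}")
print(f"CUDA   : available={torch.cuda.is_available()}")

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # Total VRAM in GiB, the number that matters most for IF's T5 encoder.
    print(f"GPU    : {props.name}, {props.total_memory / 1024**3:.1f} GiB")
```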

Additional context

Commentary: I understand that 8 GB of VRAM is probably not enough to run inference with IF, but I'd expect an OOM, not... whatever it is that I got 😁
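
For what it's worth, the usual diffusers workaround for low VRAM is CPU offloading rather than moving the whole pipeline to the GPU. A sketch with plain diffusers (not Radiata's code path; the DeepFloyd repos are gated, so you need to accept the license and log in to the Hub first):

```python
import torch
from diffusers import DiffusionPipeline

# Stage I weights in fp16 to roughly halve the memory footprint.
pipe = DiffusionPipeline.from_pretrained(
    "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16
)

# Shuttle each submodule to the GPU only while it runs, instead of
# calling .to(device) on everything at once as the traceback does.
pipe.enable_model_cpu_offload()

image = pipe("a photo of a corgi").images[0]
```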


Jon-Zbw commented 1 year ago

I ran into the same problem. Did you solve it?

ddPn08 commented 1 year ago

Sorry for leaving this for a while. I've given up on the DeepFloyd IF implementation; we are moving towards SDXL support instead.