radames / Real-Time-Latent-Consistency-Model

App showcasing multiple real-time diffusion models pipelines with Diffusers
https://huggingface.co/spaces/radames/Real-Time-Latent-Consistency-Model
Apache License 2.0

torch.compile does not work under Windows? #8

Closed DavideAlidosi closed 1 year ago

DavideAlidosi commented 1 year ago

I'm trying to use the torch.compile option to improve performance, but the system gives me this error:

device: cuda
Loading pipeline components...: 100%|##########| 5/5 [00:00<00:00, 11.23it/s]
Process SpawnProcess-1:
Traceback (most recent call last):
  File "C:\Users\indrema\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 314, in _bootstrap
    self.run()
  File "C:\Users\indrema\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "C:\Users\indrema\AppData\Local\Programs\Python\Python310\lib\asyncio\runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "C:\Users\indrema\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 646, in run_until_complete
    return future.result()
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\server.py", line 68, in serve
    config.load()
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\config.py", line 467, in load
    self.loaded_app = import_from_string(self.app)
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\uvicorn\importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "C:\Users\indrema\AppData\Local\Programs\Python\Python310\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "L:\Real-Time-Latent-Consistency-Model\app-controlnet.py", line 110, in <module>
    pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\torch\__init__.py", line 1723, in compile
    return torch._dynamo.optimize(backend=backend, nopython=fullgraph, dynamic=dynamic, disable=disable)(model)
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 583, in optimize
    check_if_dynamo_supported()
  File "L:\Real-Time-Latent-Consistency-Model\venv\lib\site-packages\torch\_dynamo\eval_frame.py", line 535, in check_if_dynamo_supported
    raise RuntimeError("Windows not yet supported for torch.compile")
RuntimeError: Windows not yet supported for torch.compile

I'm on Windows 10 with an RTX 3090. Thanks for the support.

radames commented 1 year ago

Hi, are you using WSL to run it?

DavideAlidosi commented 1 year ago

I don’t think so.

radames commented 1 year ago

Sorry, I don't think torch.compile works on Windows yet: https://github.com/pytorch/pytorch/issues/90768
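As a workaround (not part of the original thread), the compile call at `app-controlnet.py` line 110 could be gated behind a platform check so the app still starts on Windows, just without the speedup. A minimal sketch, assuming a hypothetical helper name `supports_torch_compile`:

```python
import sys


def supports_torch_compile(platform: str = sys.platform) -> bool:
    """Return True when torch.compile can be used on this platform.

    TorchDynamo raises `RuntimeError("Windows not yet supported for
    torch.compile")` on Windows, so we skip compilation there and
    fall back to eager execution.
    """
    return platform != "win32"
```

In the app this could then guard the existing line, e.g. `pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True) if supports_torch_compile() else pipe.unet`. Running under WSL avoids the issue entirely, since WSL reports a Linux platform.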