PS F:\AI\LCM-Realtime\Real-Time-Latent-Consistency-Model> uvicorn "app-controlnet:app" --host 0.0.0.0 --port 7860 --reload
INFO: Will watch for changes in these directories: ['F:\AI\LCM-Realtime\Real-Time-Latent-Consistency-Model']
INFO: Uvicorn running on http://0.0.0.0:7860 (Press CTRL+C to quit)
INFO: Started reloader process [47332] using StatReload
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
PyTorch 2.1.0+cu121 with CUDA 1201 (you have 2.1.0+cpu)
Python 3.11.6 (you have 3.11.0)
Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
Memory-efficient attention, SwiGLU, sparse and more won't be available.
Set XFORMERS_MORE_DETAILS=1 for more details
TIMEOUT: 0.0
SAFETY_CHECKER: None
MAX_QUEUE_SIZE: 0
device: cpu
Loading pipeline components...: 100%|████████████████████████████████████████████████████| 5/5 [00:00<00:00, 24.82it/s]
Pipelines loaded with dtype=torch.float16 cannot run with cpu device. It is not recommended to move them to cpu as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for float16 operations on this device in PyTorch. Please, remove the torch_dtype=torch.float16 argument, or use another device for inference.
(the warning above is printed eight times in total)
INFO: Started server process [48392]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:65255 - "GET /queue_size HTTP/1.1" 200 OK
INFO: 127.0.0.1:65261 - "GET /queue_size HTTP/1.1" 200 OK
I get the above errors when trying to launch the ControlNet app. The installed PyTorch is the CPU-only build (2.1.0+cpu), so xFormers can't load its CUDA extensions and the float16 pipeline warns it will fail on the cpu device. How can I fix this? Two sketches follow: one to confirm the diagnosis, one for a CPU fallback.
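The key lines in the log are `device: cpu` and `(you have 2.1.0+cpu)`: the installed PyTorch is the CPU-only build, so xFormers has no CUDA extensions to load and the float16 pipeline has no accelerator to run on. A minimal diagnostic, assuming the machine has an NVIDIA GPU you intended to use:

```python
import torch

print(torch.__version__)          # "2.1.0+cpu" here; a CUDA build would read e.g. "2.1.0+cu121"
print(torch.cuda.is_available())  # False on the CPU-only build
```

If this prints a "+cpu" version, reinstalling the CUDA build, e.g. `pip install torch==2.1.0 --index-url https://download.pytorch.org/whl/cu121`, matches the build xFormers was compiled against (2.1.0+cu121) and gives the float16 pipeline a device it can actually run on.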
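If you instead want to keep the app running on CPU for a smoke test, the warning itself says what to change: load the pipeline in float32 rather than float16. A minimal sketch of that fallback using the standard diffusers API (the names below are illustrative; app-controlnet.py builds its own ControlNet pipeline, and real-time use on CPU will be far too slow):

```python
import torch
from diffusers import DiffusionPipeline

# float16 is only supported on an accelerator; fall back to float32 on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if device == "cuda" else torch.float32

pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",  # example LCM checkpoint; substitute the app's model
    torch_dtype=torch_dtype,
)
pipe.to(device)
```

Applying the same device/dtype guard wherever the app calls from_pretrained silences the repeated float16 warning; the xFormers warning is then harmless, since memory-efficient attention is simply skipped.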