Closed edwios closed 9 months ago
Instead of being hardcoded to use "cuda", it should either take the device from A1111 or use `torch.*.is_available()` to determine which GPU to use on the platform it is executing on.
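The availability-check approach could look like the minimal sketch below. The function name `pick_device` is hypothetical; the sketch assumes a recent PyTorch build and falls back to CPU when torch is absent or no accelerator is found.

```python
def pick_device() -> str:
    """Pick a torch device string instead of hardcoding "cuda".

    Hypothetical helper: checks CUDA first, then Apple's MPS backend,
    and falls back to "cpu" if torch is missing or no GPU is available.
    """
    try:
        import torch
    except ImportError:
        return "cpu"
    if torch.cuda.is_available():  # NVIDIA (and ROCm builds report here too)
        return "cuda"
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():  # Apple Silicon
        return "mps"
    return "cpu"
```

A call such as `model.to(pick_device())` would then work on CUDA, MPS, and CPU-only machines alike.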
`modules.devices` exposes the inference device; we should use that, IMO.
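Using the webui's own device selection might look like the sketch below. This assumes the code runs as an A1111 extension where `modules.devices` is importable and exposes a `device` attribute (as in the webui's `modules/devices.py`); the helper name `webui_device` and the CPU fallback are illustrative additions.

```python
def webui_device():
    """Return the inference device A1111 selected, if available.

    Hypothetical helper: inside the webui, modules.devices.device is the
    device the user configured; outside it, fall back to plain "cpu".
    """
    try:
        # Only importable when running as an A1111 extension.
        from modules import devices
        return devices.device
    except ImportError:
        return "cpu"
```

An extension could then write `model.to(webui_device())` instead of `model.to("cuda")`, so the platform choice stays in one place.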
Should be fixed in https://github.com/0xbitches/sd-webui-lcm/commit/bddc54285be81b0c45320d6ba9edc8d93fe39806.