Open duOCELOT opened 3 months ago
Thank you for using FastSD! LCM and LCM-LoRA are different: LCM-LoRA applies the LCM-LoRA adapter to any SD1.5 model to turn it into 4-step inference (at inference time), while LCM is a fused or trained LCM/ADD model. LCM models can be configured at https://github.com/rupeshs/fastsdcpu/blob/main/configs/lcm-models.txt. Check the LCM-LoRA model documentation.
Got it. I'm going to test LoRAs.
I've got nice results so far with this pipeline
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True
in launch()
Traceback (most recent call last):
File "/root/fastsdcpu/env/lib/python3.10/site-packages/gradio/queueing.py", line 501, in call_prediction
output = await route_utils.call_process_api(
File "/root/fastsdcpu/env/lib/python3.10/site-packages/gradio/route_utils.py", line 258, in call_process_api
output = await app.get_blocks().process_api(
File "/root/fastsdcpu/env/lib/python3.10/site-packages/gradio/blocks.py", line 1684, in process_api
result = await self.call_function(
File "/root/fastsdcpu/env/lib/python3.10/site-packages/gradio/blocks.py", line 1250, in call_function
prediction = await anyio.to_thread.run_sync(
File "/root/fastsdcpu/env/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/root/fastsdcpu/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "/root/fastsdcpu/env/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 859, in run
result = context.run(func, *args)
File "/root/fastsdcpu/env/lib/python3.10/site-packages/gradio/utils.py", line 750, in wrapper
response = f(*args, **kwargs)
File "/root/fastsdcpu/src/frontend/webui/lora_models_ui.py", line 57, in on_click_load_lora
settings.lora.path = lora_models_map[lora_name]
KeyError: ''
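The KeyError: '' happens when "Load LoRA" is clicked with no model selected, so the empty dropdown value is looked up in the model map. A hypothetical defensive sketch (names mirror the traceback; the real on_click_load_lora in lora_models_ui.py may differ):

```python
# Hypothetical sketch of guarding the lookup that raised KeyError: ''.
def on_click_load_lora(lora_name, lora_models_map, settings):
    path = lora_models_map.get(lora_name)
    if not path:
        # Empty selection -> report a clear error instead of crashing
        raise ValueError("Please select a LoRA model before loading.")
    settings["lora_path"] = path
    return settings

models = {"my-lora": "/root/fastsdcpu/lora_models/my-lora.safetensors"}
print(on_click_load_lora("my-lora", models, {})["lora_path"])
```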
Torch datatype : torch.float32
{'controlnet': None,
'diffusion_task': 'text_to_image',
'dirs': {'controlnet': '/root/fastsdcpu/controlnet_models',
'lora': '/root/fastsdcpu/lora_models'},
'guidance_scale': 8.0,
'image_height': 512,
'image_width': 512,
'inference_steps': 1,
'init_image': None,
'lcm_lora': {'base_model_id': 'Lykon/dreamshaper-8',
'lcm_lora_id': 'rupeshs/hypersd-sd1-5-1-step-lora'},
'lcm_model_id': 'rupeshs/hyper-sd-sdxl-1-step',
'lora': {'enabled': False,
'fuse': False,
'models_dir': '/root/fastsdcpu/lora_models',
'path': '',
'weight': 0.5},
'negative_prompt': '',
'number_of_images': 1,
'openvino_lcm_model_id': 'rupeshs/sd-turbo-openvino',
'prompt': 'An illustration os a comic books character, humanoid cyborg '
'female',
'rebuild_pipeline': False,
'seed': 810824741,
'strength': 0.6,
'use_lcm_lora': False,
'use_offline_model': False,
'use_openvino': False,
'use_safety_checker': False,
'use_seed': False,
'use_tiny_auto_encoder': False}
** Init LCM Model pipeline - rupeshs/hyper-sd-sdxl-1-step
Couldn't connect to the Hub: (MaxRetryError('HTTPSConnectionPool(host=\'huggingface.co\', port=443): Max retries exceeded with url: /api/models/rupeshs/hyper-sd-sdxl-1-step (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x74c56397b0>: Failed to resolve \'huggingface.co\' ([Errno -3] Temporary failure in name resolution)"))'), '(Request ID: a762819b-b13f-4ad4-9113-544844bac788)').
Will try to load from local cache.
Loading pipeline components...:  71%|▋| 5/7 [00:03<00:01
The config attributes {'interpolation_type': 'linear', 'skip_prk_steps': True, 'use_karras_sigmas': False, 'timesteps': 800} were passed to LCMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Loading pipeline components...: 100%|█| 7/7 [00:03<00:00
The config attributes {'interpolation_type': 'linear', 'skip_prk_steps': True, 'use_karras_sigmas': False, 'timesteps': 800} were passed to LCMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Pipeline : StableDiffusionXLPipeline {
"_class_name": "StableDiffusionXLPipeline",
"_diffusers_version": "0.30.0",
"_name_or_path": "rupeshs/hyper-sd-sdxl-1-step",
"feature_extractor": [
null,
null
],
"force_zeros_for_empty_prompt": true,
"image_encoder": [
null,
null
],
"scheduler": [
"diffusers",
"LCMScheduler"
],
"text_encoder": [
"transformers",
"CLIPTextModel"
],
"text_encoder_2": [
"transformers",
"CLIPTextModelWithProjection"
],
"tokenizer": [
"transformers",
"CLIPTokenizer"
],
"tokenizer_2": [
"transformers",
"CLIPTokenizer"
],
"unet": [
"diffusers",
"UNet2DConditionModel"
],
"vae": [
"diffusers",
"AutoencoderKL"
]
}
Active adapters : []
Not using LCM-LoRA so setting guidance_scale 1.0
The first timestep on the custom timestep schedule is 800, not self.config.num_train_timesteps - 1: 999. You may get unexpected results when using this timestep schedule.
100%|█████████████████████| 1/1 [00:28<00:00, 28.04s/it]
Latency : 70.62 seconds
My phone has 18GB RAM and 500GB storage, no root: an Asus ROG Phone 6 Pro.
I can't generate beyond 512 x 512, and only one image at a time; otherwise it crashes. I'm going to write a script to capture a crash log from Termux.
Points so far: the generated images are difficult to access through the UI, and a file naming scheme would make them easier to organize, instead of downloading each image as download.png, download(1).png, and so on.
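A naming scheme like the one being asked for could be sketched like this (function and folder names are made up for illustration, not FastSD's actual code):

```python
# Hypothetical naming scheme so outputs sort chronologically instead of
# piling up as download.png, download(1).png, ...
from datetime import datetime
from pathlib import Path

def make_image_name(prompt: str, seed: int, out_dir: str = "results") -> Path:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    # First few prompt words as a slug, e.g. "an-illustration-of"
    slug = "-".join(prompt.lower().split()[:3]) or "image"
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    return Path(out_dir) / f"{stamp}_seed{seed}_{slug}.png"

print(make_image_name("An illustration of a comic book character", 810824741))
```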
It sometimes went rogue and changed the image size by itself: I set 512x512 and occasionally got 512x256.
Torch datatype : torch.float32
edsr_onnxsim_2x.onnx: 100%|█| 5.50M/5.50M [00:02<00:00
2024-08-20 23:09:14.896289509 [E:onnxruntime:Default, env.cc:254 ThreadMain] pthread_setaffinity_np failed for thread: 19126, index: 1, mask: {5, }, error code: 22 error msg: Invalid argument. Specify the number of threads explicitly so the affinity is not set.
Got this when upscaling.
Hey, congrats on the project. I've been able to run it on an ASUS ROG 6 PRO.
Still tuning, but it worked.
When I changed from LCM-LoRA to LCM, it started downloading the model and the diffusion PyTorch weights again. Are they different, or is the path set to different folders?
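Each Hugging Face Hub repo id is cached in its own folder (by default under ~/.cache/huggingface/hub), so switching from an LCM-LoRA base model to a different fused LCM model does trigger a fresh download. A small sketch to inspect what is already cached, assuming the huggingface_hub package:

```python
from huggingface_hub import scan_cache_dir

def list_cached_repos():
    # Scans the default cache (~/.cache/huggingface/hub) and returns
    # repo id -> size on disk in bytes.
    report = scan_cache_dir()
    return {repo.repo_id: repo.size_on_disk for repo in report.repos}
```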