milen-prg closed this issue 1 year ago.
@milen-prg Seems like the latest diffusers release is breaking it. Will fix it.
This helped me: pip install --upgrade diffusers typing-extensions==4.8.0
But then I got another error (macOS, Intel machine):
source env/bin/activate
python3 src/app.py --gui
usage: app.py [-h] [-s] [-g | -w | -v] [--lcm_model_id LCM_MODEL_ID] [--prompt PROMPT] [--image_height IMAGE_HEIGHT] [--image_width IMAGE_WIDTH] [--inference_steps INFERENCE_STEPS]
[--guidance_scale GUIDANCE_SCALE] [--number_of_images NUMBER_OF_IMAGES] [--seed SEED] [--use_openvino] [--use_offline_model] [--use_safety_checker] [-i]
[--use_tiny_auto_encoder]
FAST SD CPU v1.0.0 beta 9
options:
-h, --help show this help message and exit
-s, --share Create sharable link(Web UI)
-g, --gui Start desktop GUI
-w, --webui Start Web UI
-v, --version Version
--lcm_model_id LCM_MODEL_ID
Model ID or path,Default SimianLuo/LCM_Dreamshaper_v7
--prompt PROMPT Describe the image you want to generate
--image_height IMAGE_HEIGHT
Height of the image
--image_width IMAGE_WIDTH
Width of the image
--inference_steps INFERENCE_STEPS
Number of steps,default : 4
--guidance_scale GUIDANCE_SCALE
Guidance scale,default : 8.0
--number_of_images NUMBER_OF_IMAGES
Number of images to generate ,default : 1
--seed SEED Seed,default : -1 (disabled)
--use_openvino Use OpenVINO model
--use_offline_model Use offline model
--use_safety_checker Use safety checker
-i, --interactive Interactive CLI mode
--use_tiny_auto_encoder
Use tiny auto encoder for SD (TAESD)
Running on Darwin platform
OS: macOS-11.7.10-x86_64-i386-64bit
Processor: i386
Using device : cpu
Starting desktop GUI mode(Qt)
2023-11-06 18:50:10.941 Python[39045:9872999] ApplePersistenceIgnoreState: Existing state will not be touched. New state will be written to /var/folders/wb/4kwkhlwd6ms6s4vjq8qpdw8h0000gn/T/org.python.python.savedState
Output path : /Users/dwolf/D/P/NN/vendor/fastsdcpu/results
{'guidance_scale': 8.7,
'image_height': 512,
'image_width': 512,
'inference_steps': 6,
'lcm_model_id': 'SimianLuo/LCM_Dreamshaper_v7',
'number_of_images': 1,
'prompt': 'Robert Sapolsky',
'seed': -1,
'use_offline_model': False,
'use_openvino': False,
'use_safety_checker': False,
'use_seed': False,
'use_tiny_auto_encoder': False}
Loading pipeline components...: 43%|██████████████████████████████████████████████████████ | 3/7 [00:01<00:01, 2.90it/s]
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Loading pipeline components...: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00, 4.10it/s]
/Users/dwolf/D/P/NN/vendor/fastsdcpu/env/lib/python3.11/site-packages/diffusers/pipelines/pipeline_utils.py:750: FutureWarning: `torch_dtype` is deprecated and will be removed in version 0.25.0.
deprecate("torch_dtype", "0.25.0", "")
/Users/dwolf/D/P/NN/vendor/fastsdcpu/env/lib/python3.11/site-packages/diffusers/pipelines/pipeline_utils.py:753: FutureWarning: `torch_device` is deprecated and will be removed in version 0.25.0.
deprecate("torch_device", "0.25.0", "")
Traceback (most recent call last):
File "/Users/dwolf/D/P/NN/vendor/fastsdcpu/src/frontend/gui/image_generator_worker.py", line 29, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dwolf/D/P/NN/vendor/fastsdcpu/src/frontend/gui/app_window.py", line 419, in generate_image
images = self.context.generate_text_to_image(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dwolf/D/P/NN/vendor/fastsdcpu/src/context.py", line 34, in generate_text_to_image
images = self.lcm_text_to_image.generate(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dwolf/D/P/NN/vendor/fastsdcpu/src/backend/lcm_text_to_image.py", line 156, in generate
result_images = self.pipeline(
^^^^^^^^^^^^^^
File "/Users/dwolf/D/P/NN/vendor/fastsdcpu/env/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/dwolf/.cache/huggingface/modules/diffusers_modules/local/latent_consistency_txt2img.py", line 238, in __call__
self.scheduler.set_timesteps(num_inference_steps, lcm_origin_steps)
File "/Users/dwolf/D/P/NN/vendor/fastsdcpu/env/lib/python3.11/site-packages/diffusers/schedulers/scheduling_lcm.py", line 379, in set_timesteps
self.timesteps = torch.from_numpy(timesteps.copy()).to(device=device, dtype=torch.long)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/dwolf/D/P/NN/vendor/fastsdcpu/env/lib/python3.11/site-packages/torch/cuda/__init__.py", line 239, in _lazy_init
raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
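Reading the traceback, the CUDA assertion on a CPU-only machine looks like an argument mismatch rather than anything CUDA-specific (this is my interpretation, not confirmed against the diffusers source): newer diffusers releases apparently changed `LCMScheduler.set_timesteps` so that its second positional parameter is the target device, while the cached community pipeline still calls `set_timesteps(num_inference_steps, lcm_origin_steps)` positionally. The integer then lands in the device slot, and torch treats an integer device as a CUDA device index, which triggers CUDA initialization. A minimal sketch of this failure mode, with simplified, hypothetical function names (not the real diffusers API):

```python
# Hypothetical sketch of a signature change breaking positional callers.
# Names are simplified and do not reproduce the real diffusers API.

def set_timesteps_old(num_inference_steps, lcm_origin_steps):
    """Old signature: the second positional argument is lcm_origin_steps."""
    return {"steps": num_inference_steps, "origin_steps": lcm_origin_steps}

def set_timesteps_new(num_inference_steps, device=None, original_inference_steps=None):
    """New signature: the second positional argument is now device."""
    return {"steps": num_inference_steps, "device": device}

# A caller written against the old signature still passes 50 positionally...
result = set_timesteps_new(4, 50)

# ...so 50 is silently interpreted as a device. In torch, an integer
# device means "CUDA device 50", which fails on a CPU-only build.
print(result["device"])  # → 50
```

The fix in the release mentioned below presumably brings the caller and the scheduler signature back in line.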
@milen-prg, @diimdeep Fixed in the latest release v1.0.0-beta.11: https://github.com/rupeshs/fastsdcpu/releases/tag/v1.0.0-beta.11
Thanks! By the way, OpenVINO works for me on an Intel Mac.
@rupeshs Thank you very much for the fast support! Everything works properly now. May I ask 3 extra questions here? (I'm a beginner with AI art.)
For me, the most important is question 2. I would be very thankful for your reply.
Hey, how do I update to the latest release?
Windows 10 Pro 64-bit, Python 3.11. The install.bat completed successfully, but when I start the GUI (start.bat) and try to run a simple prompt at the default settings, after long (successful) downloads this error appears in the console:
Starting fastsdcpu...
Python command check :OK
Python version: 3.11.6
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling transformers.utils.move_cache().
0it [00:00, ?it/s]
usage: app.py [-h] [-s] [-g | -w | -v] [--lcm_model_id LCM_MODEL_ID] [--prompt PROMPT] [--image_height IMAGE_HEIGHT] [--image_width IMAGE_WIDTH] [--inference_steps INFERENCE_STEPS] [--guidance_scale GUIDANCE_SCALE] [--number_of_images NUMBER_OF_IMAGES] [--seed SEED] [--use_openvino] [--use_offline_model] [--use_safety_checker] [-i] [--use_tiny_auto_encoder]
FAST SD CPU v1.0.0 beta 9
Running on Windows platform
OS: Windows-10-10.0.19045-SP0
Processor: Intel64 Family 6 Model 69 Stepping 1, GenuineIntel
Using device : cpu
Starting desktop GUI mode(Qt)
Output path : C:\fastsdcpu-main\results
{'guidance_scale': 8.0,
 'image_height': 512,
 'image_width': 512,
 'inference_steps': 4,
 'lcm_model_id': 'SimianLuo/LCM_Dreamshaper_v7',
 'number_of_images': 1,
 'prompt': 'forest',
 'seed': -1,
 'use_offline_model': False,
 'use_openvino': False,
 'use_safety_checker': False,
 'use_seed': False,
 'use_tiny_auto_encoder': False}
Downloading (…)ain/model_index.json: 100% 588/588 [00:00<?, ?B/s]
Downloading (…)cheduler_config.json: 100% 539/539 [00:00<?, ?B/s]
Downloading (…)_encoder/config.json: 100% 610/610 [00:00<?, ?B/s]
Downloading (…)_checker/config.json: 100% 726/726 [00:00<?, ?B/s]
Downloading (…)tokenizer/merges.txt: 100% 525k/525k [00:00<00:00, 1.98MB/s]
Downloading (…)rocessor_config.json: 100% 518/518 [00:00<?, ?B/s]
Downloading (…)902/unet/config.json: 100% 1.73k/1.73k [00:00<?, ?B/s]
Downloading (…)cial_tokens_map.json: 100% 133/133 [00:00<?, ?B/s]
Downloading (…)okenizer_config.json: 100% 765/765 [00:00<00:00, 49.0kB/s]
Downloading (…)4902/vae/config.json: 100% 651/651 [00:00<?, ?B/s]
Downloading (…)tokenizer/vocab.json: 100% 1.06M/1.06M [00:00<00:00, 1.70MB/s]
Downloading (…)ch_model.safetensors: 100% 335M/335M [01:56<00:00, 2.86MB/s]
Downloading model.safetensors: 100% 492M/492M [02:27<00:00, 3.35MB/s]
Downloading model.safetensors: 100% 1.22G/1.22G [04:45<00:00, 4.26MB/s]
Downloading (…)ch_model.safetensors: 100% 3.44G/3.44G [08:08<00:00, 7.04MB/s]
Fetching 15 files: 100% 15/15 [08:11<00:00, 32.76s/it]
Loading pipeline components...: 0% 0/7 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "C:\fastsdcpu-main\src\frontend\gui\image_generator_worker.py", line 29, in run
    result = self.fn(*self.args, **self.kwargs)
  File "C:\fastsdcpu-main\src\frontend\gui\app_window.py", line 419, in generate_image
    images = self.context.generate_text_to_image(
  File "C:\fastsdcpu-main\src\context.py", line 27, in generate_text_to_image
    self.lcm_text_to_image.init(
  File "C:\fastsdcpu-main\src\backend\lcm_text_to_image.py", line 106, in init
    self.pipeline = DiffusionPipeline.from_pretrained(
  File "C:\fastsdcpu-main\env\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 1105, in from_pretrained
    loaded_sub_model = load_sub_model(
  File "C:\fastsdcpu-main\env\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 391, in load_sub_model
    class_obj, class_candidates = get_class_obj_and_candidates(
  File "C:\fastsdcpu-main\env\Lib\site-packages\diffusers\pipelines\pipeline_utils.py", line 319, in get_class_obj_and_candidates
    class_obj = getattr(library, class_name)
  File "C:\fastsdcpu-main\env\Lib\site-packages\diffusers\utils\import_utils.py", line 677, in __getattr__
    raise AttributeError(f"module {self.__name__} has no attribute {name}")
AttributeError: module diffusers has no attribute LCMScheduler
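This AttributeError is the opposite problem from the macOS one above: here the installed diffusers is too old to export LCMScheduler (the class only appeared in relatively recent releases, around v0.22 if I recall correctly), so upgrading diffusers inside the project's env should make the name resolvable. diffusers looks up its classes through a lazy module, which is why the failure surfaces as a module-level __getattr__ error. A simplified sketch of that pattern (hypothetical class, not diffusers' actual implementation):

```python
# Simplified sketch of a lazy module: attribute lookup fails for names
# that the installed version does not export. Hypothetical code, not
# diffusers' real import machinery.

class LazyModule:
    def __init__(self, name, exported_names):
        self._name = name
        self._exported = set(exported_names)

    def __getattr__(self, attr):
        # Called only when normal attribute lookup fails; unknown names
        # produce the same error message seen in the traceback above.
        if attr not in self._exported:
            raise AttributeError(f"module {self._name} has no attribute {attr}")
        return f"<class {self._name}.{attr}>"

# An older release that predates LCMScheduler:
old_diffusers = LazyModule("diffusers", {"DDIMScheduler", "DiffusionPipeline"})

old_diffusers.DDIMScheduler        # resolves fine
try:
    old_diffusers.LCMScheduler     # not exported by this version
except AttributeError as exc:
    print(exc)  # → module diffusers has no attribute LCMScheduler
```

Upgrading the package (or installing the release linked above, which pins compatible versions) adds LCMScheduler to the exported names and the lookup succeeds.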