rupeshs / fastsdcpu

Fast stable diffusion on CPU
MIT License
1.51k stars · 123 forks

v1.0.0 beta 16: realtime page encounters OpenVINO GPU error #73

Closed taotaow closed 1 year ago

taotaow commented 1 year ago

Running on Windows platform
OS: Windows-10-10.0.22621-SP0
Processor: Intel64 Family 6 Model 165 Stepping 3, GenuineIntel
Using device : gpu
Found 7 stable diffusion models in config/stable-diffusion-models.txt
Found 3 LCM-LoRA models in config/lcm-lora-models.txt
Found 2 OpenVINO LCM models in config/openvino-lcm-models.txt
Torch datatype : torch.float32
Starting realtime text to image(EXPERIMENTAL)
Running on local URL: http://127.0.0.1:7860

To create a public link, set share=True in launch().
E:\fastsdcpu\env\lib\site-packages\transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead.
  warnings.warn(
The config attributes {'algorithm_type': 'deis', 'lower_order_final': True, 'skip_prk_steps': True, 'solver_order': 2, 'solver_type': 'logrho', 'use_karras_sigmas': False} were passed to LCMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Compiling the vae_decoder to GPU ...
Compiling the unet to GPU ...
Compiling the vae_encoder to GPU ...
Compiling the text_encoder to GPU ...
Model : rupeshs/LCM-dreamshaper-v7-openvino
Pipeline : OVStableDiffusionPipeline {
  "_class_name": "OVStableDiffusionPipeline",
  "_diffusers_version": "0.23.0",
  "_name_or_path": "rupeshs/LCM-dreamshaper-7",
  "feature_extractor": ["transformers", "CLIPFeatureExtractor"],
  "requires_safety_checker": true,
  "safety_checker": ["stable_diffusion", "StableDiffusionSafetyChecker"],
  "scheduler": ["diffusers", "LCMScheduler"],
  "text_encoder": ["optimum", "OVModelTextEncoder"],
  "text_encoder_2": [null, null],
  "tokenizer": ["transformers", "CLIPTokenizer"],
  "unet": ["optimum", "OVModelUnet"],
  "vae_decoder": ["optimum", "OVModelVaeDecoder"],
  "vae_encoder": ["optimum", "OVModelVaeEncoder"]
}

The config attributes {'algorithm_type': 'deis', 'lower_order_final': True, 'skip_prk_steps': True, 'solver_order': 2, 'solver_type': 'logrho', 'use_karras_sigmas': False} were passed to LCMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Using OpenVINO
E:\fastsdcpu\env\lib\site-packages\optimum\intel\openvino\modeling_diffusion.py:565: FutureWarning: `shared_memory` is deprecated and will be removed in 2024.0. Value of `shared_memory` is going to override `share_inputs` value. Please use only `share_inputs` explicitly.
  outputs = self.request(inputs, shared_memory=True)
  0%|          | 0/4 [00:00<?, ?it/s]
E:\fastsdcpu\env\lib\site-packages\optimum\intel\openvino\modeling_diffusion.py:599: FutureWarning: `shared_memory` is deprecated and will be removed in 2024.0. Value of `shared_memory` is going to override `share_inputs` value. Please use only `share_inputs` explicitly.
  outputs = self.request(inputs, shared_memory=True)
  0%|          | 0/4 [00:21<?, ?it/s]
Traceback (most recent call last):
  File "E:\fastsdcpu\env\lib\site-packages\gradio\routes.py", line 442, in run_predict
    output = await app.get_blocks().process_api(
  File "E:\fastsdcpu\env\lib\site-packages\gradio\blocks.py", line 1392, in process_api
    result = await self.call_function(
  File "E:\fastsdcpu\env\lib\site-packages\gradio\blocks.py", line 1097, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "E:\fastsdcpu\env\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "E:\fastsdcpu\env\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "E:\fastsdcpu\env\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "E:\fastsdcpu\env\lib\site-packages\gradio\utils.py", line 703, in wrapper
    response = f(*args, **kwargs)
  File "E:\fastsdcpu\src\frontend\webui\realtime_ui.py", line 55, in predict
    images = lcm_text_to_image.generate(lcm_diffusion_setting)
  File "E:\fastsdcpu\src\backend\lcm_text_to_image.py", line 307, in generate
    result_images = self.pipeline(
  File "E:\fastsdcpu\env\lib\site-packages\optimum\intel\openvino\modeling_diffusion.py", line 687, in __call__
    return StableDiffusionPipelineMixin.__call__(
  File "E:\fastsdcpu\env\lib\site-packages\optimum\pipelines\diffusers\pipeline_stable_diffusion.py", line 357, in __call__
    noise_pred = self.unet(sample=latent_model_input, timestep=timestep, encoder_hidden_states=prompt_embeds)
  File "E:\fastsdcpu\env\lib\site-packages\optimum\intel\openvino\modeling_diffusion.py", line 599, in __call__
    outputs = self.request(inputs, shared_memory=True)
  File "E:\fastsdcpu\env\lib\site-packages\openvino\runtime\ie_api.py", line 384, in __call__
    return self._infer_request.infer(
  File "E:\fastsdcpu\env\lib\site-packages\openvino\runtime\ie_api.py", line 143, in infer
    return OVDict(super().infer(_data_dispatch(
RuntimeError: Exception from src\inference\src\infer_request.cpp:231:
[GPU] clEnqueueNDRangeKernel, error code: -54
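For context on the final line: the `-54` returned by `clEnqueueNDRangeKernel` is a raw OpenCL status code, which maps to `CL_INVALID_WORK_GROUP_SIZE` in the Khronos `CL/cl.h` header, i.e. the GPU plugin asked the driver for a work-group size the device cannot launch. A small helper (hypothetical, not part of fastsdcpu) to decode such codes when they appear in logs:

```python
# Map a few common OpenCL status codes (values from the Khronos CL/cl.h header)
# to their symbolic names, so log lines like "error code: -54" can be decoded.
CL_ERROR_NAMES = {
    0: "CL_SUCCESS",
    -5: "CL_OUT_OF_RESOURCES",
    -6: "CL_OUT_OF_HOST_MEMORY",
    -54: "CL_INVALID_WORK_GROUP_SIZE",
}

def decode_cl_error(code: int) -> str:
    """Return the symbolic OpenCL name for a status code, if known."""
    return CL_ERROR_NAMES.get(code, f"unknown OpenCL error ({code})")

print(decode_cl_error(-54))  # CL_INVALID_WORK_GROUP_SIZE
```

This only names the failure; whether it is fixable on this iGPU (e.g. via a driver update) or inherent to the compiled kernels is not established in the thread.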

rupeshs commented 1 year ago

Please use CPU for realtime.