kalechinees opened this issue 1 year ago
How did you manage to start the image generation? When I try, it returns:
```
eprint(line:60) :: Error when calling Cognitive Face API:
status_code: 401
code: 401
message: Access denied due to invalid subscription key or wrong API endpoint. Make sure to provide a valid key for an active subscription and use a correct regional API endpoint for your resource.
```
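For context, a 401 from this service means the request's key or regional endpoint is wrong, not that the code path is broken. Below is a minimal sketch of the underlying Face API request, showing where the key and regional endpoint go; the region name and key here are placeholders, not values from this thread.

```python
def face_detect_request(region, subscription_key, image_url):
    """Build URL, headers, and body for a Face API 'detect' call.

    A 401 like the one above means the key is invalid/expired or the
    region in the URL does not match the region the Azure resource
    was created in.
    """
    url = f"https://{region}.api.cognitive.microsoft.com/face/v1.0/detect"
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,  # key from the Azure portal
        "Content-Type": "application/json",
    }
    body = {"url": image_url}
    return url, headers, body
```

The three return values can then be passed to an HTTP client, e.g. `requests.post(url, headers=headers, json=body)`.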
I assume the original issue is due to a library version incompatibility. Make sure you have the same versions as specified in requirements.txt.
> How did you manage to start the image generation? When I try, it returns:
You need to enter your Hugging Face token when asked (after accepting the license agreement on the Stable Diffusion model page).
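For reference, here is a sketch of how the token typically ends up in the pipeline load call. The model id and the `use_auth_token` keyword match diffusers releases of that era, but treat the exact signature as an assumption, not a quote from this project's code.

```python
def build_load_args(token, model_id="CompVis/stable-diffusion-v1-4"):
    """Assemble the arguments for a gated model download.

    The token gates access to the licensed weights; without it (or with
    a token whose account has not accepted the license) the download
    fails.
    """
    return (model_id,), {"use_auth_token": token}

# later, roughly: StableDiffusionPipeline.from_pretrained(*args, **kwargs)
```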
This can still be reproduced by installing from requirements.txt into a venv; tested on Python 3.9.0 and Python 3.11.0, with the diffusers package exactly as specified in the file.
Installing `transformers==4.22.1`, which is the version specified in UnstableFusionServer.ipynb (requirements.txt only suggests a newer version via `>=`), seems to solve the issue.
Not sure why this is happening. Everything installed accordingly, but "Generate" fetches 15 files, the GPU spins up, and then I get the log below. Both stable-diffusion-v1-4 and v1-5 have been cloned from huggingface.co, and the User Access Token is pasted into the application. Do I need to edit something to point it at the .ckpt model of Stable Diffusion 1.4?
```
Fetching 15 files: 100%|█████████████████████████████████████████████████████████████| 15/15 [00:00<00:00, 7505.91it/s]
The config attributes {'clip_sample': False} were passed to PNDMScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
Traceback (most recent call last):
  File "C:\AI\UnstableFusion\unstablefusion.py", line 897, in handle_generate_button
    if type(self.get_handler()) == ServerStableDiffusionHandler:
  File "C:\AI\UnstableFusion\unstablefusion.py", line 460, in get_handler
    return self.stable_diffusion_manager.get_handler()
  File "C:\AI\UnstableFusion\unstablefusion.py", line 329, in get_handler
    return self.get_local_handler(self.get_huggingface_token())
  File "C:\AI\UnstableFusion\unstablefusion.py", line 312, in get_local_handler
    self.cached_local_handler = StableDiffusionHandler(token)
  File "C:\AI\UnstableFusion\diffusionserver.py", line 36, in __init__
    self.text2img = StableDiffusionPipeline.from_pretrained(
  File "C:\Users\Jeff\AppData\Local\Programs\Python\Python310\lib\site-packages\diffusers\pipeline_utils.py", line 516, in from_pretrained
    raise ValueError(
ValueError: The component <class 'transformers.models.clip.image_processing_clip.CLIPImageProcessor'> of <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> cannot be loaded as it does not seem to have any of the loading methods defined in {'ModelMixin': ['save_pretrained', 'from_pretrained'], 'SchedulerMixin': ['save_config', 'from_config'], 'DiffusionPipeline': ['save_pretrained', 'from_pretrained'], 'OnnxRuntimeModel': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizer': ['save_pretrained', 'from_pretrained'], 'PreTrainedTokenizerFast': ['save_pretrained', 'from_pretrained'], 'PreTrainedModel': ['save_pretrained', 'from_pretrained'], 'FeatureExtractionMixin': ['save_pretrained', 'from_pretrained']}.
```
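A plausible reading of this traceback, consistent with the `transformers==4.22.1` downgrade working: this diffusers release accepts a pipeline component only if it subclasses one of a fixed list of base classes (the dict in the ValueError), and in newer transformers releases the CLIP feature extractor was reparented under `CLIPImageProcessor`, which is not on that list even though it does have `from_pretrained`. The loader's check amounts to roughly the following (the class names below are toy stand-ins, not the real classes):

```python
def component_is_loadable(component_cls, known_bases):
    """Mimic the loader's check: a component passes only if it
    subclasses one of the whitelisted base classes; merely having
    a from_pretrained method is not enough."""
    return any(issubclass(component_cls, base) for base in known_bases)

# Toy illustration with stand-in classes:
class FeatureExtractionMixin:        # stands in for a whitelisted base
    pass

class OldFeatureExtractor(FeatureExtractionMixin):
    pass

class NewImageProcessor:             # not derived from any whitelisted base
    def from_pretrained(self):       # has the method, still rejected
        pass

print(component_is_loadable(OldFeatureExtractor, [FeatureExtractionMixin]))  # True
print(component_is_loadable(NewImageProcessor, [FeatureExtractionMixin]))    # False
```

So downgrading transformers restores a feature extractor class that still subclasses `FeatureExtractionMixin`, and the check passes again.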