vladmandic / automatic

SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
https://github.com/vladmandic/automatic
GNU Affero General Public License v3.0
5.72k stars · 425 forks

[Issue]: running SD3 #3260

Closed: kalle07 closed this issue 5 months ago

kalle07 commented 5 months ago

Issue Description

I've read the wiki, logged into HuggingFace, accepted the license, and downloaded sd3_medium.safetensors. Under Text encoder I chose the T5 FP8 model, got a HuggingFace token, and pasted it in.

Then I chose the model.

error: " OSError: We couldn't connect to 'https://huggingface.co/' to load this file, couldn't find it in the cached files and it looks like stabilityai/stable-diffusion-3-medium-diffusers is not the path to a directory containing a file named text_encoder\config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'. 18:06:59-161452 INFO Startup time: 3898.51 ldm=3893.33 extensions=0.10 ui-en=0.16 ui-txt2img=0.05 ui-img2img=0.08 ui-control=0.12 ui-settings=0.23 ui-extensions=0.89 ui-defaults=0.05 launch=0.33 app-started=0.17 checkpoint=2.82 18:06:59-260188 INFO MOTD: N/A 18:07:04-012481 INFO Browser session: user=None client=127.0.0.1 agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:127.0) Gecko/20100101 Firefox/127.0 18:07:16-253721 INFO Select: model="sd3_medium" 18:07:16-269371 INFO Autodetect: model="Stable Diffusion 3" class=StableDiffusion3Pipeline file="e:\automatic\models\Stable-diffusion\sd3_medium.safetensors" size=4137MB 18:07:16-870056 ERROR Diffusers failed loading: model=e:\automatic\models\Stable-diffusion\sd3_medium.safetensors pipeline=Autodetect/NoneType config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'safety_checker': None, 'requires_safety_checker': False, 'local_files_only': False, 'extract_ema': False, 'config': 'configs/sd3'} We couldn't connect to 'https://huggingface.co/' to load this file, couldn't find it in the cached files and it looks like stabilityai/stable-diffusion-3-medium-diffusers is not the path to a directory containing a file named text_encoder\config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'. 18:07:16-885672 ERROR loading model=e:\automatic\models\Stable-diffusion\sd3_medium.safetensors pipeline=Autodetect/NoneType: OSError ┌────────────────────────────────────────────────────────────────── Traceback (most recent call last) ───────────────────────────────────────────────────────────────────┐ │ e:\automatic\venv\lib\site-packages\huggingface_hub\utils_errors.py:304 in hf_raise_for_status │ │ │ │ 303 │ try: │ │ > 304 │ │ response.raise_for_status() │ │ 305 │ except HTTPError as e: │ │ │ │ e:\automatic\venv\lib\site-packages\requests\models.py:1024 in raise_for_status │ │ │ │ 1023 │ def close(self): │ │ > 1024 │ │ """Releases the connection back to the pool. Once this method has been │ │ 1025 │ │ called the underlying raw object must not be accessed again. │ └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ HTTPError: 403 Client Error: Forbidden for url: https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/resolve/main/text_encoder/config.json

The above exception was the direct cause of the following exception:

┌────────────────────────────────────────────────────────────────── Traceback (most recent call last) ───────────────────────────────────────────────────────────────────┐ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:1722 in _get_metadata_or_catch_error │ │ │ │ 1721 │ │ │ try: │ │ > 1722 │ │ │ │ metadata = get_hf_file_metadata(url=url, proxies=proxies, timeout=etag_timeout, headers=headers) │ │ 1723 │ │ │ except EntryNotFoundError as http_error: │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\utils_validators.py:114 in _inner_fn │ │ │ │ 113 │ │ │ │ > 114 │ │ return fn(*args, kwargs) │ │ 115 │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:1645 in get_hf_file_metadata │ │ │ │ 1644 │ # Retrieve metadata │ │ > 1645 │ r = _request_wrapper( │ │ 1646 │ │ method="HEAD", │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:372 in _request_wrapper │ │ │ │ 371 │ if follow_relative_redirects: │ │ > 372 │ │ response = _request_wrapper( │ │ 373 │ │ │ method=method, │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:396 in _request_wrapper │ │ │ │ 395 │ response = get_session().request(method=method, url=url, params) │ │ > 396 │ hf_raise_for_status(response) │ │ 397 │ return response │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\utils_errors.py:367 in hf_raise_for_status │ │ │ │ 366 │ │ │ ) │ │ > 367 │ │ │ raise HfHubHTTPError(message, response=response) from e │ │ 368 │ └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ HfHubHTTPError: (Request ID: Root=1-66705ef2-478a446f48b4506e3c82049c;6639e7fa-1b88-4845-9615-8fb99337ab16)

403 Forbidden: Please enable access to public gated repositories in your fine-grained token settings to view this repository.. Cannot access content at: https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/resolve/main/text_encoder/config.json. If you are trying to create or update content,make sure you have a token with the write role.

The above exception was the direct cause of the following exception:

┌────────────────────────────────────────────────────────────────── Traceback (most recent call last) ───────────────────────────────────────────────────────────────────┐ │ e:\automatic\venv\lib\site-packages\transformers\utils\hub.py:399 in cached_file │ │ │ │ 398 │ │ # Load from URL or cache if already cached │ │ > 399 │ │ resolved_file = hf_hub_download( │ │ 400 │ │ │ path_or_repo_id, │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\utils_validators.py:114 in _inner_fn │ │ │ │ 113 │ │ │ │ > 114 │ │ return fn(*args, **kwargs) │ │ 115 │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:1221 in hf_hub_download │ │ │ │ 1220 │ else: │ │ > 1221 │ │ return _hf_hub_download_to_cache_dir( │ │ 1222 │ │ │ # Destination │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:1325 in _hf_hub_download_to_cache_dir │ │ │ │ 1324 │ │ # Otherwise, raise appropriate error │ │ > 1325 │ │ _raise_on_head_call_error(head_call_error, force_download, local_files_only) │ │ 1326 │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:1826 in _raise_on_head_call_error │ │ │ │ 1825 │ │ # Otherwise: most likely a connection issue or Hub downtime => let's warn the user │ │ > 1826 │ │ raise LocalEntryNotFoundError( │ │ 1827 │ │ │ "An error happened while trying to locate the file on the Hub and we cannot find the requested files" │ └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.

The above exception was the direct cause of the following exception:

┌────────────────────────────────────────────────────────────────── Traceback (most recent call last) ───────────────────────────────────────────────────────────────────┐ │ E:\automatic\modules\sd_models.py:1088 in load_diffuser │ │ │ │ 1087 │ │ │ │ │ from modules.model_sd3 import load_sd3 │ │ > 1088 │ │ │ │ │ sd_model = load_sd3(fn=checkpoint_info.path, cache_dir=shared.opts.diffusers_dir, config=diffusers_load_config.get('config', None)) │ │ 1089 │ │ │ │ elif hasattr(pipeline, 'from_single_file'): │ │ │ │ E:\automatic\modules\model_sd3.py:41 in load_sd3 │ │ │ │ 40 │ │ kwargs = { │ │ > 41 │ │ │ 'text_encoder': transformers.CLIPTextModelWithProjection.from_pretrained( │ │ 42 │ │ │ │ repo_id, │ │ │ │ e:\automatic\venv\lib\site-packages\transformers\modeling_utils.py:3158 in from_pretrained │ │ │ │ 3157 │ │ │ config_path = config if config is not None else pretrained_model_name_or_path │ │ > 3158 │ │ │ config, model_kwargs = cls.config_class.from_pretrained( │ │ 3159 │ │ │ │ config_path, │ │ │ │ e:\automatic\venv\lib\site-packages\transformers\models\clip\configuration_clip.py:137 in from_pretrained │ │ │ │ 136 │ │ │ │ > 137 │ │ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, kwargs) │ │ 138 │ │ │ │ e:\automatic\venv\lib\site-packages\transformers\configuration_utils.py:632 in get_config_dict │ │ │ │ 631 │ │ # Get config dict associated with the base config file │ │ > 632 │ │ config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, kwargs) │ │ 633 │ │ if "_commit_hash" in config_dict: │ │ │ │ e:\automatic\venv\lib\site-packages\transformers\configuration_utils.py:689 in _get_config_dict │ │ │ │ 688 │ │ │ │ # Load from local folder or from cache or download from model Hub and cache │ │ > 689 │ │ │ │ resolved_config_file = cached_file( │ │ 690 │ │ │ │ │ pretrained_model_name_or_path, │ │ │ │ e:\automatic\venv\lib\site-packages\transformers\utils\hub.py:442 in cached_file │ │ │ │ 441 │ │ │ return resolved_file │ │ > 442 │ │ raise EnvironmentError( │ │ 443 │ │ │ f"We couldn't connect to '{HUGGINGFACE_CO_RESOLVE_ENDPOINT}' to load this file, couldn't find it in the" │ └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ OSError: We couldn't connect to 'https://huggingface.co/' to load this file, couldn't find it in the cached files and it looks like stabilityai/stable-diffusion-3-medium-diffusers is not the path to a directory containing a file named text_encoder\config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'. 18:07:18-011922 WARNING Model not loaded "

Version Platform Description

Windows 10, RTX 4060

09:39:25-425086 INFO Starting SD.Next
09:39:25-425086 INFO Logger: file="e:\automatic\sdnext.log" level=INFO size=163750 mode=append
09:39:25-425086 INFO Python version=3.10.11 platform=Windows bin="e:\automatic\venv\Scripts\python.exe" venv="e:\automatic\venv"
09:39:25-864012 INFO Version: app=sd.next updated=2024-06-13 hash=a3ffd478 branch=master url=https://github.com/vladmandic/automatic/tree/master ui=main
09:39:26-797171 INFO Platform: arch=AMD64 cpu=Intel64 Family 6 Model 165 Stepping 5, GenuineIntel system=Windows release=Windows-10-10.0.19045-SP0 python=3.10.11

Relevant log output

No response

Backend

Diffusers

Branch

Master

Model

Other

Acknowledgements

vladmandic commented 5 months ago

403 Forbidden: Please enable access to public gated repositories in your fine-grained token settings to view this repository..

what are the permissions you assigned to your token when creating it on huggingface? it should NOT be a granular token, but a simple token with read permissions.

kalle07 commented 5 months ago

Thanks for the fast answer... nothing, I left everything at the defaults. What should I enable? And just to be sure, what exactly is this "access", i.e. what is transmitted when I use the SD3 model?

vladmandic commented 5 months ago

create new token and select "type -> read". sd3_medium.safetensors does not have text encoders at all, so it has to load TE1 and TE2 as mandatory and TE3 as optional.
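For reference, a minimal verification sketch (not SD.Next code) of checking that a read-type token can actually reach the gated SD3 diffusers repo that hosts the text encoders; the token value is a placeholder:

```python
# Minimal sketch: verify a HuggingFace read token can access the gated
# SD3 diffusers repo. The "hf_..." token value is a placeholder.
from huggingface_hub import login, hf_hub_download

login(token="hf_...")  # cache the read token for subsequent hub calls

# If this download succeeds, SD.Next should be able to fetch TE1/TE2 as well.
path = hf_hub_download(
    repo_id="stabilityai/stable-diffusion-3-medium-diffusers",
    filename="text_encoder/config.json",
)
print("token OK, config cached at:", path)
```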

kalle07 commented 5 months ago

Done... do I just need to copy/paste the token and press Apply?

(screenshot)

Seems it's not working.

" OSError: You are trying to access a gated repo. Make sure to have access to it at https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers. 401 Client Error. (Request ID: Root=1-66719c3e-044189a4615348a300d356a7;11cb5d64-f51c-40f6-a4a6-aa18db09a8ca)

Cannot access gated repo for url https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/resolve/main/text_encoder/config.json. Access to model stabilityai/stable-diffusion-3-medium-diffusers is restricted. You must be authenticated to access it. 16:41:07-126593 WARNING Model not loaded 16:43:09-692696 INFO Select: model="sd3_medium" 16:43:09-708290 INFO Autodetect: model="Stable Diffusion 3" class=StableDiffusion3Pipeline file="e:\automatic\models\Stable-diffusion\sd3_medium.safetensors" size=4137MB 16:43:10-293653 ERROR Diffusers failed loading: model=e:\automatic\models\Stable-diffusion\sd3_medium.safetensors pipeline=Autodetect/NoneType config={'low_cpu_mem_usage': True, 'torch_dtype': torch.float16, 'load_connected_pipeline': True, 'safety_checker': None, 'requires_safety_checker': False, 'local_files_only': False, 'extract_ema': False, 'config': 'configs/sd3'} You are trying to access a gated repo. Make sure to have access to it at https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers. 401 Client Error. (Request ID: Root=1-66719cba-204c5645210040ec578b81be;237c61e4-525a-4d63-8685-c663f5e7c0e9)

Cannot access gated repo for url https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/resolve/main/text_encoder/config.json. Access to model stabilityai/stable-diffusion-3-medium-diffusers is restricted. You must be authenticated to access it.

16:43:10-309282 ERROR loading model=e:\automatic\models\Stable-diffusion\sd3_medium.safetensors pipeline=Autodetect/NoneType: OSError ┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐ │ e:\automatic\venv\lib\site-packages\huggingface_hub\utils_errors.py:304 in hf_raise_for_status │ │ │ │ 303 │ try: │ │ > 304 │ │ response.raise_for_status() │ │ 305 │ except HTTPError as e: │ │ │ │ e:\automatic\venv\lib\site-packages\requests\models.py:1021 in raise_for_status │ │ │ │ 1020 │ │ if http_error_msg: │ │ > 1021 │ │ │ raise HTTPError(http_error_msg, response=self) │ │ 1022 │ └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ HTTPError: 401 Client Error: Unauthorized for url: https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/resolve/main/text_encoder/config.json

The above exception was the direct cause of the following exception:

┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐ │ e:\automatic\venv\lib\site-packages\transformers\utils\hub.py:399 in cached_file │ │ │ │ 398 │ │ # Load from URL or cache if already cached │ │ > 399 │ │ resolved_file = hf_hub_download( │ │ 400 │ │ │ path_or_repo_id, │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\utils_validators.py:114 in _inner_fn │ │ │ │ 113 │ │ │ │ > 114 │ │ return fn(*args, *kwargs) │ │ 115 │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:1221 in hf_hub_download │ │ │ │ 1220 │ else: │ │ > 1221 │ │ return _hf_hub_download_to_cache_dir( │ │ 1222 │ │ │ # Destination │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:1325 in _hf_hub_download_to_cache_dir │ │ │ │ 1324 │ │ # Otherwise, raise appropriate error │ │ > 1325 │ │ _raise_on_head_call_error(head_call_error, force_download, local_files_only) │ │ 1326 │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:1823 in _raise_on_head_call_error │ │ │ │ 1822 │ │ # Repo not found or gated => let's raise the actual error │ │ > 1823 │ │ raise head_call_error │ │ 1824 │ else: │ │ │ │ ... 1 frames hidden ... │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\utils_validators.py:114 in _inner_fn │ │ │ │ 113 │ │ │ │ > 114 │ │ return fn(args, kwargs) │ │ 115 │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:1645 in get_hf_file_metadata │ │ │ │ 1644 │ # Retrieve metadata │ │ > 1645 │ r = _request_wrapper( │ │ 1646 │ │ method="HEAD", │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:372 in _request_wrapper │ │ │ │ 371 │ if follow_relative_redirects: │ │ > 372 │ │ response = _request_wrapper( │ │ 373 │ │ │ method=method, │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\file_download.py:396 in _request_wrapper │ │ │ │ 395 │ response = get_session().request(method=method, url=url, params) │ │ > 396 │ hf_raise_for_status(response) │ │ 397 │ return response │ │ │ │ e:\automatic\venv\lib\site-packages\huggingface_hub\utils_errors.py:321 in hf_raise_for_status │ │ │ │ 320 │ │ │ ) │ │ > 321 │ │ │ raise GatedRepoError(message, response) from e │ │ 322 │ └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ GatedRepoError: 401 Client Error. (Request ID: Root=1-66719cba-204c5645210040ec578b81be;237c61e4-525a-4d63-8685-c663f5e7c0e9)

Cannot access gated repo for url https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/resolve/main/text_encoder/config.json. Access to model stabilityai/stable-diffusion-3-medium-diffusers is restricted. You must be authenticated to access it.

The above exception was the direct cause of the following exception:

┌───────────────────────────────────────── Traceback (most recent call last) ─────────────────────────────────────────┐ │ E:\automatic\modules\sd_models.py:1088 in load_diffuser │ │ │ │ 1087 │ │ │ │ │ from modules.model_sd3 import load_sd3 │ │ > 1088 │ │ │ │ │ sd_model = load_sd3(fn=checkpoint_info.path, cache_dir=shared.opts.diffusers_dir, config │ │ 1089 │ │ │ │ elif hasattr(pipeline, 'from_single_file'): │ │ │ │ E:\automatic\modules\model_sd3.py:41 in load_sd3 │ │ │ │ 40 │ │ kwargs = { │ │ > 41 │ │ │ 'text_encoder': transformers.CLIPTextModelWithProjection.from_pretrained( │ │ 42 │ │ │ │ repo_id, │ │ │ │ e:\automatic\venv\lib\site-packages\transformers\modeling_utils.py:3158 in from_pretrained │ │ │ │ 3157 │ │ │ config_path = config if config is not None else pretrained_model_name_or_path │ │ > 3158 │ │ │ config, model_kwargs = cls.config_class.from_pretrained( │ │ 3159 │ │ │ │ config_path, │ │ │ │ e:\automatic\venv\lib\site-packages\transformers\models\clip\configuration_clip.py:137 in from_pretrained │ │ │ │ 136 │ │ │ │ > 137 │ │ config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, kwargs) │ │ 138 │ │ │ │ e:\automatic\venv\lib\site-packages\transformers\configuration_utils.py:632 in get_config_dict │ │ │ │ 631 │ │ # Get config dict associated with the base config file │ │ > 632 │ │ config_dict, kwargs = cls._get_config_dict(pretrained_model_name_or_path, kwargs) │ │ 633 │ │ if "_commit_hash" in config_dict: │ │ │ │ e:\automatic\venv\lib\site-packages\transformers\configuration_utils.py:689 in _get_config_dict │ │ │ │ 688 │ │ │ │ # Load from local folder or from cache or download from model Hub and cache │ │ > 689 │ │ │ │ resolved_config_file = cached_file( │ │ 690 │ │ │ │ │ pretrained_model_name_or_path, │ │ │ │ e:\automatic\venv\lib\site-packages\transformers\utils\hub.py:417 in cached_file │ │ │ │ 416 │ │ │ return resolved_file │ │ > 417 │ │ raise EnvironmentError( │ │ 418 │ │ │ "You are trying to access a gated repo.\nMake sure to have access to it at " │ └─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ OSError: You are trying to access a gated repo. Make sure to have access to it at https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers. 401 Client Error. (Request ID: Root=1-66719cba-204c5645210040ec578b81be;237c61e4-525a-4d63-8685-c663f5e7c0e9)

Cannot access gated repo for url https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers/resolve/main/text_encoder/config.json. Access to model stabilityai/stable-diffusion-3-medium-diffusers is restricted. You must be authenticated to access it. 16:43:11-158396 WARNING Model not loaded "

vladmandic commented 5 months ago

you're already logged in using the old key; sdnext will not try to use a new key on every run if you're already logged in. restart sdnext.
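As context, huggingface_hub caches the login token on disk, which is why the old key keeps being used until a restart. A small sketch (outside SD.Next) of forcing a fresh login with the new token; "hf_..." is a placeholder:

```python
# Sketch: drop the cached token and log in again so the new key takes effect.
from huggingface_hub import logout, login, whoami

logout()                 # remove the previously cached token
login(token="hf_...")    # cache the newly created read token (placeholder value)
print(whoami()["name"])  # confirm which account the token resolves to
```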

kalle07 commented 5 months ago

I see, that could be a feature... apply a new key/token and have SD.Next use it instantly ;) I will try tomorrow.

kalle07 commented 4 months ago

OK, now for the first time SD.Next starts loading some data...

In the webui I can generate an image (more or less) without an error (with SD3 I only get pixels/noise), but I do get this error: 17:48:26-206443 WARNING Sampler: invalid ...

When I started SD.Next for the first time, this error did not happen (I only chose some samplers in the settings). It does not depend on SD3; all models show this error. I chose DPM, Euler a... every time the same error. What could that be?

vladmandic commented 4 months ago

that's a silly warning about the hr sampler not being set, and you're not even using hr, so it can be ignored. it's been fixed in the dev branch.

kalle07 commented 4 months ago

I see... (but the warning did not appear the first time... maybe some settings I made were wrong?)

OK, back to SD3...

Is that a new error?

" 18:19:54-966330 INFO High memory utilization: GPU=90% RAM=4% {'ram': {'used': 2.35, 'total': 63.89}, 'gpu': {'used': 14.43, 'total': 16.0}, 'retries': 0, 'oom': 0} 18:19:55-150454 WARNING Sampler: invalid 18:19:55-158497 INFO High memory utilization: GPU=90% RAM=4% {'ram': {'used': 2.35, 'total': 63.89}, 'gpu': {'used': 14.44, 'total': 16.0}, 'retries': 0, 'oom': 0} 18:19:55-346883 INFO High memory utilization: GPU=92% RAM=4% {'ram': {'used': 2.35, 'total': 63.89}, 'gpu': {'used': 14.65, 'total': 16.0}, 'retries': 0, 'oom': 0} 18:19:55-531804 INFO Base: class=StableDiffusion3Pipeline 18:19:55-547398 INFO High memory utilization: GPU=90% RAM=4% {'ram': {'used': 2.35, 'total': 63.89}, 'gpu': {'used': 14.43, 'total': 16.0}, 'retries': 0, 'oom': 0} Progress 2.47it/s █████████████████████████████████ 100% 45/45 00:18 00:00 Base 18:20:15-250831 INFO Downloading TAESD decoder: models\TAESD\taesd_decoder.pth 100%|██████████████████████████████████████████████████████████████████████████████████████████████| 4.69M/4.69M [00:00<00:00, 7.16MB/s] 18:20:16-917547 ERROR VAE decode taesd: Given groups=1, weight of size [64, 4, 3, 3], expected input[1, 16, 128, 96] to have 4 channels, but got 16 channels instead 18:20:16-933168 ERROR Exception: The expanded size of the tensor (768) must match the existing size (96) at non-singleton dimension

  1. Target sizes: [3, 1024, 768]. Tensor sizes: [16, 128, 96] 18:20:16-933168 ERROR Arguments: args=('task(0oaenyo4nr6fcgb)', 'portrait photo of a woman', '', [], 45, 2, None, False, False, False, False, 1, 1, 6, 6, 0.7, 0, 0.5, 1, 1, -1.0, -1.0, 0, 0, 0, 1024, 768, False, 0.3, 2, 'None', False, 20, 0, 0, 10, 0, '', '', 0, 0, 0, 0, False, 4, 0.95, False, 0.6, 1, '#000000', 0, [], 0, 1, 'None', 'None', 'None', 'None', 0.5, 0.5, 0.5, 0.5, None, None, None, None, 0, 0, 0, 0, 1, 1, 1, 1, None, None, None, None, False, '', 'None', 16, 'None', 1, True, 'None', 2, True, 1, 0, True, 'none', 3, 4, 0.25, 0.25, False, False, {'ad_model': 'face_yolov8n.pt', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Default', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, {'ad_model': 'None', 'ad_model_classes': '', 'ad_tab_enable': True, 'ad_prompt': '', 'ad_negative_prompt': '', 'ad_confidence': 0.3, 'ad_mask_k_largest': 0, 'ad_mask_min_ratio': 0, 'ad_mask_max_ratio': 1, 'ad_x_offset': 0, 'ad_y_offset': 0, 'ad_dilate_erode': 4, 'ad_mask_merge_invert': 'None', 'ad_mask_blur': 4, 'ad_denoising_strength': 0.4, 'ad_inpaint_only_masked': True, 'ad_inpaint_only_masked_padding': 32, 'ad_use_inpaint_width_height': False, 'ad_inpaint_width': 512, 'ad_inpaint_height': 512, 'ad_use_steps': False, 'ad_steps': 28, 'ad_use_cfg_scale': False, 'ad_cfg_scale': 7, 'ad_use_checkpoint': False, 'ad_checkpoint': 'Use same checkpoint', 'ad_use_vae': False, 'ad_vae': 'Use same VAE', 'ad_use_sampler': False, 'ad_sampler': 'Default', 'ad_scheduler': 'Use same scheduler', 'ad_use_noise_multiplier': False, 'ad_noise_multiplier': 1, 'ad_use_clip_skip': False, 'ad_clip_skip': 1, 'ad_restore_face': False, 'ad_controlnet_model': 'None', 'ad_controlnet_module': 'None', 'ad_controlnet_weight': 1, 'ad_controlnet_guidance_start': 0, 'ad_controlnet_guidance_end': 1, 'is_api': ()}, 3, 1, 1, 0.8, 8, 64, True, True, 0.5, 600.0, 1.0, 1, 1, 0.5, 0.5, 'OpenGVLab/InternVL-14B-224px', False, False, 'positive', 'comma', 0, False, False, '', 'None', '', 1, '', 'None', 1, True, 10, 'None', True, 0, 'None', 2, True, 1, 0, 0, '', [], 0, '', [], 0, '', [], False, True, False, False, False, False, 0, 'None', [], 'FaceID Base', True, True, 1, 1, 1, 0.5, False, 'person', 1, 0.5, True) kwargs={} 18:20:16-970970 ERROR gradio call: RuntimeError ┌───────────────────────────────────────────────── Traceback (most recent call last) ──────────────────────────────────────────────────┐ │ E:\automatic\modules\call_queue.py:31 in f │ │ │ │ 30 │ │ │ try: │ │ > 31 │ │ │ │ res = func(*args, **kwargs) │ │ 32 │ │ │ │ 
progress.record_results(id_task, res) │ │ │ │ E:\automatic\modules\txt2img.py:92 in txt2img │ │ │ │ 91 │ if processed is None: │ │ > 92 │ │ processed = processing.process_images(p) │ │ 93 │ p.close() │ │ │ │ E:\automatic\modules\processing.py:192 in process_images │ │ │ │ 191 │ │ │ with context_hypertile_vae(p), context_hypertile_unet(p): │ │ > 192 │ │ │ │ processed = process_images_inner(p) │ │ 193 │ │ │ │ E:\automatic\modules\processing.py:312 in process_images_inner │ │ │ │ 311 │ │ │ │ │ from modules.processing_diffusers import process_diffusers │ │ > 312 │ │ │ │ │ x_samples_ddim = process_diffusers(p) │ │ 313 │ │ │ │ else: │ │ │ │ E:\automatic\modules\processing_diffusers.py:316 in process_diffusers │ │ │ │ 315 │ │ │ elif hasattr(shared.sd_model, "vae") and output.images is not None and len(output.images) > 0: │ │ > 316 │ │ │ │ results = processing_vae.vae_decode(latents=output.images, model=shared.sd_model, full_quality=p.full_quality) │ │ 317 │ │ │ elif hasattr(output, 'images'): │ │ │ │ E:\automatic\modules\processing_vae.py:130 in vae_decode │ │ │ │ 129 │ else: │ │ > 130 │ │ decoded = taesd_vae_decode(latents=latents) │ │ 131 │ # TODO validate decoded sample diffusers │ │ │ │ E:\automatic\modules\processing_vae.py:97 in taesd_vae_decode │ │ │ │ 96 │ │ for i in range(latents.shape[0]): │ │ > 97 │ │ │ decoded[i] = sd_vae_taesd.decode(latents[i]) │ │ 98 │ else: │ └──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ RuntimeError: The expanded size of the tensor (768) must match the existing size (96) at non-singleton dimension 2. Target sizes: [3, 1024, 768]. Tensor sizes: [16, 128, 96] 18:20:17-133775 INFO High memory utilization: GPU=92% RAM=4% {'ram': {'used': 2.3, 'total': 63.89}, 'gpu': {'used': 14.67, 'total': 16.0}, 'retries': 0, 'oom': 0} "
vladmandic commented 4 months ago

support for taesd live preview for sd3 was added in dev branch. either disable preview until dev is merged to master (soon) or switch to dev branch.
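For background, the mismatch in the log comes from the classic TAESD decoder expecting 4-channel SD/SDXL latents while SD3 produces 16-channel latents. The sketch below is purely illustrative, not SD.Next's actual implementation; `taesd_decode` and `full_vae_decode` are hypothetical callables standing in for the preview and full decode paths:

```python
# Illustrative guard: skip the TAESD preview and fall back to the full VAE
# when the latent channel count doesn't match what the TAESD weights expect.
import torch

TAESD_LATENT_CHANNELS = 4  # assumption: channels the classic TAESD decoder expects

def preview_decode(latents: torch.Tensor, taesd_decode, full_vae_decode):
    channels = latents.shape[-3]  # latents are (B, C, H, W) or (C, H, W)
    if channels == TAESD_LATENT_CHANNELS:
        return taesd_decode(latents)
    # 16-channel SD3 latents need an SD3-aware TAESD or the full VAE
    return full_vae_decode(latents)
```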

kalle07 commented 4 months ago

But then I'll never get an image in the end...

Dev, OK... will it be added to main in maybe a month?

Thanks

vladmandic commented 4 months ago

if it hard-fails, it hard fails everything. disable preview when using sd3 or switch to dev. merge and release is likely within a week.

haldi4803 commented 4 months ago

Okay.... Read the Wiki: https://github.com/vladmandic/automatic/wiki/Diffusers

Note that access to some models is gated, in which case you need to accept the model EULA and provide your huggingface token.

Am I stupid? Where do I accept the EULA, and where do I enter the token? I only found Models -> Huggingface -> Huggingface Token, but I can't download anything there with my read token generated by Huggingface...

File "C:\SDNext\automatic\venv\lib\site-packages\huggingface_hub\_login.py", line 307, in _login raise ValueError("Invalid token passed!") ValueError: Invalid token passed!

Edit: -.- found it, it's under System -> Settings -> Diffuser Settings -> HuggingFace token.
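As an aside, a hedged sketch of checking a token outside the UI: recent huggingface_hub versions can also read a token from the HF_TOKEN environment variable, and whoami() accepts a token directly. The "hf_..." value is a placeholder:

```python
# Sketch: resolve a token from the environment (or a placeholder) and confirm
# it maps to your HuggingFace account before pointing SD.Next at it.
import os
from huggingface_hub import whoami

token = os.environ.get("HF_TOKEN", "hf_...")
print(whoami(token=token)["name"])  # prints the account name the token belongs to
```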