painebenjamin / app.enfugue.ai

ENFUGUE is an open-source web app for making studio-grade images and video using generative AI.
GNU General Public License v3.0

RuntimeError: "LayerNormKernelImpl" #110

Open · AKDigitalAgency opened this issue 10 months ago

AKDigitalAgency commented 10 months ago

Had the same problem on v2.5; now I've upgraded to the latest v3 but still keep getting this error:

Received error builtins.RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

In settings, only "Full" precision is selected, so why is it even trying to use "Half"? I tried it with different models, both SD1.5 & SDXL, and get the same error :(
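For reference, the failure itself comes from PyTorch: on this build, the CPU has no fp16 ("Half") kernel for LayerNorm, so any module that ends up on the CPU in half precision dies exactly like this. A minimal stand-alone repro (plain PyTorch, nothing Enfugue-specific; the shapes are arbitrary):

```python
import torch

# fp16 ("Half") LayerNorm on the CPU: the exact failure from the log below.
ln_fp16 = torch.nn.LayerNorm(768).half()
x_fp16 = torch.randn(1, 77, 768, dtype=torch.float16)  # CLIP-like shape, arbitrary
try:
    ln_fp16(x_fp16)
except RuntimeError as err:
    print(err)  # "LayerNormKernelImpl" not implemented for 'Half'

# The same op is fine on the CPU in float32 or bfloat16:
ln_fp32 = torch.nn.LayerNorm(768)
print(ln_fp32(x_fp16.float()).dtype)  # torch.float32

ln_bf16 = torch.nn.LayerNorm(768).to(torch.bfloat16)
print(ln_bf16(x_fp16.to(torch.bfloat16)).dtype)  # torch.bfloat16
```

So on CPU the whole pipeline needs to stay in float32 or bfloat16, which is what the `Inferencing on cpu, must use dtype bfloat16` line in the log below is attempting.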

Here is the debug log:

stderr (may not be an error:)
DEBUG:root:Initializing MLIR with module: _site_initialize_0
DEBUG:root:Registering dialects from initializer

DEBUG:jax._src.path:etils.epath was not found. Using pathlib for file I/O.

DEBUG:enfugue:Changing to scheduler EulerDiscreteScheduler (eds)

DEBUG:enfugue:Instruction 2 beginning task “Preparing Inference Pipeline”

DEBUG:enfugue:Inferencing on cpu, must use dtype bfloat16

DEBUG:enfugue:Initializing pipeline from checkpoint at D:\AIs\Stability Matrix\Data\Models\StableDiffusion\deliberate_v3.safetensors. Arguments are {'cache_dir': 'D:\AIs\Stability Matrix\Data\.cache\enfugue\cache', 'engine_size': '512', 'tiling_stride': '0', 'requires_safety_checker': 'False', 'torch_dtype': 'dtype', 'force_full_precision_vae': 'False', 'controlnets': {}, 'ip_adapter': 'IPAdapter', 'task_callback': 'function', 'offload_models': 'True', 'load_safety_checker': 'False'}

torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: '' If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?

torch\_jit_internal.py:857: UserWarning: Unable to retrieve source for @torch.jit._overload function: .

warnings.warn(

torch\_jit_internal.py:857: UserWarning: Unable to retrieve source for @torch.jit._overload function: .

warnings.warn(

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Pipeline still initializing. Please wait.

WARNING:tensorflow:AutoGraph is not available in this environment: functions lack code information. This is typical of some environments like the interactive Python shell. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md#access-to-source-code for more information.

DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.

DEBUG:h5py._conv:Creating converter from 7 to 5

DEBUG:h5py._conv:Creating converter from 5 to 7

DEBUG:h5py._conv:Creating converter from 7 to 5

DEBUG:h5py._conv:Creating converter from 5 to 7

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Instruction 2 beginning task “Loading checkpoint file deliberate_v3.safetensors”

INFO:enfugue:No configuration file found for checkpoint deliberate_v3, using Stable Diffusion 1.5

INFO:enfugue:Checkpoint has 4 input channels

DEBUG:enfugue:Instruction 2 beginning task “Loading UNet”

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Loading 686 keys into UNet state dict (non-strict)

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Offloading enabled; sending UNet to CPU

DEBUG:enfugue:Instruction 2 beginning task “Loading Default VAE”

DEBUG:enfugue:Loading 248 keys into Autoencoder state dict (strict). Autoencoder scale is 0.18215

DEBUG:enfugue:Offloading enabled; sending VAE to CPU

DEBUG:enfugue:Instruction 2 beginning task “Loading preview VAE madebyollin/taesd”

DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443

DEBUG:http.client:send: b'HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.0; python/3.10.11; torch/2.1.0+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nauthorization: Bearer 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\r\nX-Amzn-Trace-Id: 0776431b-6e6e-4fb1-bb49-27e380bf6b2d\r\n\r\n'

DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'

DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8

DEBUG:http.client:header: Content-Length: 551

DEBUG:http.client:header: Connection: keep-alive

DEBUG:http.client:header: Date: Fri, 17 Nov 2023 17:23:32 GMT

DEBUG:http.client:header: X-Powered-By: huggingface-moon

DEBUG:http.client:header: X-Request-Id: Root=1-6557a194-25b783783848cf9673df5f53;0776431b-6e6e-4fb1-bb49-27e380bf6b2d

DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co

DEBUG:http.client:header: Vary: Origin

DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range

DEBUG:http.client:header: X-Robots-Tag: none

DEBUG:http.client:header: X-Repo-Commit: 73b7d1c8836d16997316fb94c4b36a9095442c1d

DEBUG:http.client:header: Accept-Ranges: bytes

DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox

DEBUG:http.client:header: ETag: "62f01c3eb447730d88eb61aca75331a564301e5a"

DEBUG:http.client:header: X-Cache: Miss from cloudfront

DEBUG:http.client:header: Via: 1.1 876bec0443fc8f764d98d36e203f84e0.cloudfront.net (CloudFront)

DEBUG:http.client:header: X-Amz-Cf-Pop: JFK52-P3

DEBUG:http.client:header: X-Amz-Cf-Id: jdg7EdWh3AxR7ezET2ljKF2p6YpxuZ9hdkWvEbZXRxC-3RNJPiejkw==

DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1" 200 0

DEBUG:http.client:send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.0; python/3.10.11; torch/2.1.0+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nauthorization: Bearer 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\r\nX-Amzn-Trace-Id: bdb63051-af58-4de0-9fdf-a44cf00f1074\r\n\r\n'

DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'

DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8

DEBUG:http.client:header: Content-Length: 4519

DEBUG:http.client:header: Connection: keep-alive

DEBUG:http.client:header: Date: Fri, 17 Nov 2023 17:23:32 GMT

DEBUG:http.client:header: X-Powered-By: huggingface-moon

DEBUG:http.client:header: X-Request-Id: Root=1-6557a194-6376dd3244dbc09173a97734;bdb63051-af58-4de0-9fdf-a44cf00f1074

DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co

DEBUG:http.client:header: Vary: Origin

DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range

DEBUG:http.client:header: X-Repo-Commit: 32bd64288804d66eefd0ccbe215aa642df71cc41

DEBUG:http.client:header: Accept-Ranges: bytes

DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox

DEBUG:http.client:header: ETag: "2c19f6666e0e163c7954df66cb901353fcad088e"

DEBUG:http.client:header: X-Cache: Miss from cloudfront

DEBUG:http.client:header: Via: 1.1 876bec0443fc8f764d98d36e203f84e0.cloudfront.net (CloudFront)

DEBUG:http.client:header: X-Amz-Cf-Pop: JFK52-P3

DEBUG:http.client:header: X-Amz-Cf-Id: 5erFe9aUVOvy9x2QD3069ngaMI6mZlnHUl0F73OPuXEEjljTcNO5rQ==

DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1" 200 0

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Offloading enabled; sending text encoder to CPU

DEBUG:enfugue:Instruction 2 beginning task “Loading tokenizer openai/clip-vit-large-patch14”

DEBUG:http.client:send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.0; python/3.10.11; torch/2.1.0+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nauthorization: Bearer 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\r\nX-Amzn-Trace-Id: 55f50cd2-704c-4a18-b6b8-ac9c9abd67f6\r\n\r\n'

DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'

DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8

DEBUG:http.client:header: Content-Length: 961143

DEBUG:http.client:header: Connection: keep-alive

DEBUG:http.client:header: Date: Fri, 17 Nov 2023 17:23:40 GMT

DEBUG:http.client:header: X-Powered-By: huggingface-moon

DEBUG:http.client:header: X-Request-Id: Root=1-6557a19c-1518ba5a0e642834504b84de;55f50cd2-704c-4a18-b6b8-ac9c9abd67f6

DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co

DEBUG:http.client:header: Vary: Origin

DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range

DEBUG:http.client:header: X-Repo-Commit: 32bd64288804d66eefd0ccbe215aa642df71cc41

DEBUG:http.client:header: Accept-Ranges: bytes

DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox

DEBUG:http.client:header: ETag: "4297ea6a8d2bae1fea8f48b45e257814dcb11f69"

DEBUG:http.client:header: X-Cache: Miss from cloudfront

DEBUG:http.client:header: Via: 1.1 876bec0443fc8f764d98d36e203f84e0.cloudfront.net (CloudFront)

DEBUG:http.client:header: X-Amz-Cf-Pop: JFK52-P3

DEBUG:http.client:header: X-Amz-Cf-Id: MOmuoWkpr2pn5cQzOqL-PLzgFoldPlfBdxmv-r2okM1vLz4xK9CHTg==

DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1" 200 0

diffusers\pipelines\pipeline_utils.py:749: FutureWarning: `torch_dtype` is deprecated and will be removed in version 0.25.0.

deprecate("torch_dtype", "0.25.0", "")

DEBUG:enfugue:Pipeline still initializing. Please wait.

DEBUG:enfugue:Setting scheduler to EulerDiscreteScheduler

DEBUG:enfugue:Instruction 2 beginning task “Executing Inference”

DEBUG:enfugue:Calling pipeline with arguments {'latent_callback': 'function', 'width': '512', 'height': '512', 'strength': '0.99', 'tile': '(False, False)', 'freeu_factors': '[1.2, 1.4, 0.9, 0.2]', 'num_inference_steps': '13', 'num_results_per_prompt': '1', 'tiling_stride': '0', 'guidance_scale': '7', 'prompts': '[Prompt]', 'progress_callback': 'function', 'latent_callback_steps': '0', 'latent_callback_type': 'pil'}

DEBUG:enfugue:Calculated overall steps to be 14 - 0 image prompt embedding probe(s) + [1 chunk(s) * (0 encoding step(s) + (1 decoding step(s) * 1 frame(s)) + (1 temporal chunk(s) * 13 inference step(s))]

ERROR:enfugue:Received error builtins.RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

DEBUG:enfugue:Traceback (most recent call last):
  File "enfugue\diffusion\process.py", line 185, in run
    response["result"] = self.handle(instruction_id, instruction_action, instruction_payload)
  File "enfugue\diffusion\process.py", line 250, in handle
    return self.execute_diffusion_plan(
  File "enfugue\diffusion\process.py", line 284, in execute_diffusion_plan
    return plan.execute(
  File "enfugue\diffusion\invocation.py", line 1219, in execute
    images, nsfw = self.execute_inference(
  File "enfugue\diffusion\invocation.py", line 1375, in execute_inference
    result = pipeline(
  File "enfugue\diffusion\manager.py", line 4472, in __call__
    result = pipe( # type: ignore[assignment]
  File "torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "enfugue\diffusion\pipeline.py", line 3455, in __call__
    these_prompt_embeds = self.encode_prompt(
  File "enfugue\diffusion\pipeline.py", line 1036, in encode_prompt
    prompt_embeds = text_encoder(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 745, in forward
    encoder_outputs = self.encoder(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 656, in forward
    layer_outputs = encoder_layer(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 385, in forward
    hidden_states = self.layer_norm1(hidden_states)
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch\nn\modules\normalization.py", line 196, in forward
    return F.layer_norm(
  File "torch\nn\functional.py", line 2543, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'

INFO:enfugue:Reached maximum idle time after 15.2 seconds, exiting engine process

Can you fix this?

painebenjamin commented 10 months ago

Hello!

Very sorry you're experiencing issues. In my experience this particular error message is usually a red herring, and that appears to be the case here: the message near the top, DEBUG:enfugue:Inferencing on cpu, must use dtype bfloat16, tells me that Enfugue is having trouble communicating with your GPU. Can you please tell me what GPU you're using? There could be a number of different steps to take depending on your hardware setup.
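In the meantime, a quick way to check what your PyTorch build actually sees (plain torch calls, and a rough sketch of the fallback logic rather than Enfugue's exact code):

```python
import torch

print(torch.__version__)          # e.g. "2.1.0+cu118" vs. a CPU-only build
print(torch.cuda.is_available())  # False -> everything falls back to the CPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))

# Roughly the decision the log line above is describing:
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.bfloat16  # fp16 is unsafe on CPU
print(device, dtype)
```

If `torch.cuda.is_available()` prints False on a machine that should have a working NVIDIA GPU, the usual culprits are a CPU-only torch build or a driver problem.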

Thanks, and again sorry for the trouble!

AKDigitalAgency commented 10 months ago

Win11, Intel Core i7 with HD520 integrated GPU. On ComfyUI & A1111 it runs only on the CPU.

TheDevilsKnock commented 9 months ago


Same error on a full AMD build, 7950X3D & 6900XT. Any particular steps to try?
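For anyone stuck on a CPU fallback in the meantime, the general workaround is to keep every component in float32 so no fp16 kernel is ever requested on the CPU. A sketch with plain diffusers (not Enfugue's own pipeline; the model ID and prompt are placeholders):

```python
import torch
from diffusers import StableDiffusionPipeline

# CPU-only sketch: load the whole pipeline in float32.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float32,  # never half precision on CPU
).to("cpu")

image = pipe("a test prompt", num_inference_steps=20).images[0]
image.save("out.png")
```

Slower than half precision, but it avoids the missing Half kernel entirely.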

Timestamp   Logger  Level   Content
2023-11-26 00:38:44 enfugue DEBUG   Traceback (most recent call last):
  File "enfugue\diffusion\process.py", line 185, in run
    response["result"] = self.handle(instruction_id, instruction_action, instruction_payload)
  File "enfugue\diffusion\process.py", line 250, in handle
    return self.execute_diffusion_plan(
  File "enfugue\diffusion\process.py", line 284, in execute_diffusion_plan
    return plan.execute(
  File "enfugue\diffusion\invocation.py", line 1256, in execute
    images, nsfw = self.execute_inference(
  File "enfugue\diffusion\invocation.py", line 1439, in execute_inference
    result = pipeline(
  File "enfugue\diffusion\manager.py", line 4620, in __call__
    result = pipe( # type: ignore[assignment]
  File "torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "enfugue\diffusion\pipeline.py", line 3495, in __call__
    these_prompt_embeds = self.encode_prompt(
  File "enfugue\diffusion\pipeline.py", line 1028, in encode_prompt
    prompt_embeds = text_encoder(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 745, in forward
    encoder_outputs = self.encoder(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 656, in forward
    layer_outputs = encoder_layer(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 385, in forward
    hidden_states = self.layer_norm1(hidden_states)
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch\nn\modules\normalization.py", line 196, in forward
    return F.layer_norm(
  File "torch\nn\functional.py", line 2543, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
2023-11-26 00:38:44 enfugue ERROR   Received error builtins.RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
2023-11-26 00:38:44 enfugue DEBUG   Deleting pipeline.
2023-11-26 00:38:44 enfugue DEBUG   Unloading pipeline for reason "invocation error"
2023-11-26 00:38:44 enfugue INFO    IP adapter None
2023-11-26 00:38:44 enfugue DEBUG   Calculated overall steps to be 21 - 0 image prompt embedding probe(s) + [1 chunk(s) * (0 encoding step(s) + (1 decoding step(s) * 1 frame(s)) + (1 temporal chunk(s) * 20 inference step(s))]
2023-11-26 00:38:44 enfugue DEBUG   Calling pipeline with arguments {'latent_callback': 'function', 'width': '3840', 'height': '2160', 'strength': '1', 'tile': '(False, False)', 'num_inference_steps': '20', 'num_results_per_prompt': '1', 'tiling_stride': '0', 'guidance_scale': '6.5', 'prompts': '[Prompt]', 'progress_callback': 'function', 'latent_callback_steps': '5', 'latent_callback_type': 'pil'}
2023-11-26 00:38:44 enfugue DEBUG   Instruction 9 beginning task “Executing Inference”
2023-11-26 00:38:43 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1" 200 0
2023-11-26 00:38:43 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1" 200 0
2023-11-26 00:38:43 enfugue DEBUG   Instruction 9 beginning task “Loading tokenizer openai/clip-vit-large-patch14”
2023-11-26 00:38:43 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:38:43 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:38:42 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/preprocessor_config.json HTTP/1.1" 200 0
2023-11-26 00:38:42 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/preprocessor_config.json HTTP/1.1" 200 0
2023-11-26 00:38:42 enfugue DEBUG   Instruction 9 beginning task “Initializing feature extractor from repository CompVis/stable-diffusion-safety-checker”
2023-11-26 00:38:40 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:38:40 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:38:40 enfugue DEBUG   Instruction 9 beginning task “Loading safety checker CompVis/stable-diffusion-safety-checker”
2023-11-26 00:38:40 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:38:40 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:38:39 urllib3.connectionpool  DEBUG   Starting new HTTPS connection (1): huggingface.co:443
2023-11-26 00:38:39 urllib3.connectionpool  DEBUG   Starting new HTTPS connection (1): huggingface.co:443
2023-11-26 00:38:39 enfugue DEBUG   Instruction 9 beginning task “Loading preview VAE madebyollin/taesd”
2023-11-26 00:38:39 enfugue DEBUG   Loading 248 keys into Autoencoder state dict (strict). Autoencoder scale is 0.18215
2023-11-26 00:38:39 enfugue DEBUG   Instruction 9 beginning task “Loading Default VAE”
2023-11-26 00:38:39 enfugue DEBUG   Loading 686 keys into UNet state dict (non-strict)
2023-11-26 00:38:35 enfugue DEBUG   Instruction 9 beginning task “Loading UNet”
2023-11-26 00:38:35 enfugue INFO    Checkpoint has 4 input channels
2023-11-26 00:38:35 enfugue INFO    No configuration file found for checkpoint v1-5-pruned, using Stable Diffusion 1.5
2023-11-26 00:38:32 enfugue DEBUG   Instruction 9 beginning task “Loading checkpoint file v1-5-pruned.ckpt”
2023-11-26 00:38:32 enfugue DEBUG   Instruction 9 beginning task “Loading checkpoint file v1-5-pruned.ckpt”
2023-11-26 00:38:30 enfugue DEBUG   Initializing pipeline from checkpoint at C:\Users\ukdic\.cache\enfugue\checkpoint\v1-5-pruned.ckpt. Arguments are {'cache_dir': 'C:\\Users\\ukdic\\.cache\\enfugue\\cache', 'engine_size': '512', 'tiling_stride': '0', 'requires_safety_checker': 'True', 'torch_dtype': 'dtype', 'force_full_precision_vae': 'False', 'controlnets': {}, 'ip_adapter': 'IPAdapter', 'task_callback': 'function', 'offload_models': 'False', 'load_safety_checker': 'True'}
2023-11-26 00:38:30 enfugue DEBUG   Inferencing on cpu, must use dtype bfloat16
2023-11-26 00:38:30 enfugue INFO    Dimensions do not match size of engine and chunking is disabled, disabling TensorRT
2023-11-26 00:38:30 enfugue DEBUG   Instruction 9 beginning task “Preparing Inference Pipeline”
2023-11-26 00:38:28 enfugue WARNING Ignored keyword arguments: {'intermediate_dir', 'intermediate_steps'}
2023-11-26 00:38:28 enfugue DEBUG   Received invocation payload, constructing plan.
2023-11-26 00:36:13 enfugue INFO    stderr (may not be an error:) DEBUG:root:Initializing MLIR with module: _site_initialize_0 DEBUG:root:Registering dialects from initializer DEBUG:jax._src.path:etils.epath was not found. Using pathlib for file I/O. DEBUG:enfugue:Instruction 1 beginning task “Preparing Inference Pipeline” INFO:enfugue:Dimensions do not match size of engine and chunking is disabled, disabling TensorRT DEBUG:enfugue:Inferencing on cpu, must use dtype bfloat16 DEBUG:enfugue:Initializing pipeline from checkpoint at C:\Users\ukdic\.cache\enfugue\checkpoint\v1-5-pruned.ckpt. Arguments are {'cache_dir': 'C:\\Users\\ukdic\\.cache\\enfugue\\cache', 'engine_size': '512', 'tiling_stride': '0', 'requires_safety_checker': 'True', 'torch_dtype': 'dtype', 'force_full_precision_vae': 'False', 'controlnets': {}, 'ip_adapter': 'IPAdapter', 'task_callback': 'function', 'offload_models': 'False', 'load_safety_checker': 'True'} torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source? torch\_jit_internal.py:857: UserWarning: Unable to retrieve source for @torch.jit._overload function: . warnings.warn( torch\_jit_internal.py:857: UserWarning: Unable to retrieve source for @torch.jit._overload function: . warnings.warn( WARNING:tensorflow:AutoGraph is not available in this environment: functions lack code information. This is typical of some environments like the interactive Python shell. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md#access-to-source-code for more information. DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client. DEBUG:h5py._conv:Creating converter from 7 to 5 DEBUG:h5py._conv:Creating converter from 5 to 7 DEBUG:h5py._conv:Creating converter from 7 to 5 DEBUG:h5py._conv:Creating converter from 5 to 7 DEBUG:enfugue:Instruction 1 beginning task “Loading checkpoint file v1-5-pruned.ckpt” DEBUG:matplotlib:CACHEDIR=C:\Users\ukdic\AppData\Local\Temp\tmpx8nybg02 DEBUG:matplotlib.font_manager:font search path [WindowsPath('C:/Users/ukdic/OneDrive/Documenti/enfugue-server/matplotlib/mpl-data/fonts/ttf'), WindowsPath('C:/Users/ukdic/OneDrive/Documenti/enfugue-server/matplotlib/mpl-data/fonts/afm'), WindowsPath('C:/Users/ukdic/OneDrive/Documenti/enfugue-server/matplotlib/mpl-data/fonts/pdfcorefonts')] INFO:matplotlib.font_manager:generated new fontManager INFO:enfugue:No configuration file found for checkpoint v1-5-pruned, using Stable Diffusion 1.5 INFO:enfugue:Checkpoint has 4 input channels DEBUG:enfugue:Instruction 1 beginning task “Loading UNet” In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA weights (usually better for inference), please make sure to add the `--extract_ema` flag. DEBUG:enfugue:Loading 686 keys into UNet state dict (non-strict) DEBUG:enfugue:Instruction 1 beginning task “Loading Default VAE” DEBUG:enfugue:Loading 248 keys into Autoencoder state dict (strict). 
Autoencoder scale is 0.18215 DEBUG:enfugue:Instruction 1 beginning task “Loading preview VAE madebyollin/taesd” DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443 DEBUG:http.client:send: b'HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.4; python/3.10.11; torch/2.1.1+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nX-Amzn-Trace-Id: c34a5b96-79ee-480d-877a-52efe13edcc9\r\n\r\n' DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n' DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8 DEBUG:http.client:header: Content-Length: 551 DEBUG:http.client:header: Connection: keep-alive DEBUG:http.client:header: Date: Sat, 25 Nov 2023 23:34:49 GMT DEBUG:http.client:header: X-Powered-By: huggingface-moon DEBUG:http.client:header: X-Request-Id: Root=1-65628499-1439808d0cc6b6e22c1bce51;c34a5b96-79ee-480d-877a-52efe13edcc9 DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co DEBUG:http.client:header: Vary: Origin DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range DEBUG:http.client:header: X-Robots-Tag: none DEBUG:http.client:header: X-Repo-Commit: 73b7d1c8836d16997316fb94c4b36a9095442c1d DEBUG:http.client:header: Accept-Ranges: bytes DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox DEBUG:http.client:header: ETag: "62f01c3eb447730d88eb61aca75331a564301e5a" DEBUG:http.client:header: X-Cache: Miss from cloudfront DEBUG:http.client:header: Via: 1.1 ac2d8660937db7980b895314178ccc8a.cloudfront.net (CloudFront) DEBUG:http.client:header: X-Amz-Cf-Pop: FCO50-C2 DEBUG:http.client:header: X-Amz-Cf-Id: 1bW6qgW12ERS0kGmbueORksZQhbLOq1x1gy3olf3NpxOvobEBpYPRA== DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1" 200 0 DEBUG:enfugue:Instruction 1 beginning task “Loading safety checker CompVis/stable-diffusion-safety-checker” DEBUG:http.client:send: b'HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.4; python/3.10.11; torch/2.1.1+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nX-Amzn-Trace-Id: da9fffa2-dca7-4f21-80a7-a26bd7d11a46\r\n\r\n' DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n' DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8 DEBUG:http.client:header: Content-Length: 4549 DEBUG:http.client:header: Connection: keep-alive DEBUG:http.client:header: Date: Sat, 25 Nov 2023 23:34:50 GMT DEBUG:http.client:header: X-Powered-By: huggingface-moon DEBUG:http.client:header: X-Request-Id: Root=1-6562849a-371eb5ec7de212455d943331;da9fffa2-dca7-4f21-80a7-a26bd7d11a46 DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co DEBUG:http.client:header: Vary: Origin DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range DEBUG:http.client:header: X-Repo-Commit: cb41f3a270d63d454d385fc2e4f571c487c253c5 DEBUG:http.client:header: Accept-Ranges: bytes DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox DEBUG:http.client:header: ETag: "aa454d222558d7e095fc9343266efd9512ad2f19" DEBUG:http.client:header: X-Cache: Miss from cloudfront DEBUG:http.client:header: Via: 1.1 
ac2d8660937db7980b895314178ccc8a.cloudfront.net (CloudFront) DEBUG:http.client:header: X-Amz-Cf-Pop: FCO50-C2 DEBUG:http.client:header: X-Amz-Cf-Id: P8YsObFtRvUCYWmBxVovxkZfHqQS1SMdVxDkXB-5AhP0TarHL1fKIg== DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/config.json HTTP/1.1" 200 0 `text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["id2label"]` will be overriden. `text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["bos_token_id"]` will be overriden. `text_config_dict` is provided which will be used to initialize `CLIPTextConfig`. The value `text_config["eos_token_id"]` will be overriden. DEBUG:enfugue:Instruction 1 beginning task “Initializing feature extractor from repository CompVis/stable-diffusion-safety-checker” DEBUG:http.client:send: b'HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/preprocessor_config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.4; python/3.10.11; torch/2.1.1+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nX-Amzn-Trace-Id: e8e32227-7ac0-4d04-a6d5-1cb671e9d934\r\n\r\n' DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n' DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8 DEBUG:http.client:header: Content-Length: 342 DEBUG:http.client:header: Connection: keep-alive DEBUG:http.client:header: Date: Sat, 25 Nov 2023 23:34:52 GMT DEBUG:http.client:header: X-Powered-By: huggingface-moon DEBUG:http.client:header: X-Request-Id: Root=1-6562849c-33fcdc6062d0e3507ddb757f;e8e32227-7ac0-4d04-a6d5-1cb671e9d934 DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co DEBUG:http.client:header: Vary: Origin DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range DEBUG:http.client:header: X-Repo-Commit: cb41f3a270d63d454d385fc2e4f571c487c253c5 DEBUG:http.client:header: Accept-Ranges: bytes DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox DEBUG:http.client:header: ETag: "5294955ff7801083f720b34b55d0f1f51313c5c5" DEBUG:http.client:header: X-Cache: Miss from cloudfront DEBUG:http.client:header: Via: 1.1 ac2d8660937db7980b895314178ccc8a.cloudfront.net (CloudFront) DEBUG:http.client:header: X-Amz-Cf-Pop: FCO50-C2 DEBUG:http.client:header: X-Amz-Cf-Id: aLUCig2c0VRg0o5j2BPIXxitduaQkuVhHcx1Okp_dUh8LYQ7pKQVvQ== DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/preprocessor_config.json HTTP/1.1" 200 0 transformers\models\clip\feature_extraction_clip.py:28: FutureWarning: The class CLIPFeatureExtractor is deprecated and will be removed in version 5 of Transformers. Please use CLIPImageProcessor instead. 
warnings.warn( DEBUG:http.client:send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.4; python/3.10.11; torch/2.1.1+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nX-Amzn-Trace-Id: d39c5496-52e0-4530-a065-282ea2018363\r\n\r\n' DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n' DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8 DEBUG:http.client:header: Content-Length: 4519 DEBUG:http.client:header: Connection: keep-alive DEBUG:http.client:header: Date: Sat, 25 Nov 2023 23:34:52 GMT DEBUG:http.client:header: X-Powered-By: huggingface-moon DEBUG:http.client:header: X-Request-Id: Root=1-6562849c-30f9f13504476dc4394772d9;d39c5496-52e0-4530-a065-282ea2018363 DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co DEBUG:http.client:header: Vary: Origin DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range DEBUG:http.client:header: X-Repo-Commit: 32bd64288804d66eefd0ccbe215aa642df71cc41 DEBUG:http.client:header: Accept-Ranges: bytes DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox DEBUG:http.client:header: ETag: "2c19f6666e0e163c7954df66cb901353fcad088e" DEBUG:http.client:header: X-Cache: Miss from cloudfront DEBUG:http.client:header: Via: 1.1 ac2d8660937db7980b895314178ccc8a.cloudfront.net (CloudFront) DEBUG:http.client:header: X-Amz-Cf-Pop: FCO50-C2 DEBUG:http.client:header: X-Amz-Cf-Id: Ya0CgXh0XoQZV04XjhudAjficdxRttlxweawA2e9FR4lIOtSKl9RzA== DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1" 200 0 DEBUG:enfugue:Instruction 1 beginning task “Loading tokenizer openai/clip-vit-large-patch14” DEBUG:http.client:send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.4; python/3.10.11; torch/2.1.1+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nX-Amzn-Trace-Id: b7485885-8cec-422d-8f0e-1fe424009e75\r\n\r\n' DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n' DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8 DEBUG:http.client:header: Content-Length: 961143 DEBUG:http.client:header: Connection: keep-alive DEBUG:http.client:header: Date: Sat, 25 Nov 2023 23:34:53 GMT DEBUG:http.client:header: X-Powered-By: huggingface-moon DEBUG:http.client:header: X-Request-Id: Root=1-6562849d-0fdb4bca174b698514333833;b7485885-8cec-422d-8f0e-1fe424009e75 DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co DEBUG:http.client:header: Vary: Origin DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range DEBUG:http.client:header: X-Repo-Commit: 32bd64288804d66eefd0ccbe215aa642df71cc41 DEBUG:http.client:header: Accept-Ranges: bytes DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox DEBUG:http.client:header: ETag: "4297ea6a8d2bae1fea8f48b45e257814dcb11f69" DEBUG:http.client:header: X-Cache: Miss from cloudfront DEBUG:http.client:header: Via: 1.1 ac2d8660937db7980b895314178ccc8a.cloudfront.net (CloudFront) DEBUG:http.client:header: X-Amz-Cf-Pop: FCO50-C2 DEBUG:http.client:header: X-Amz-Cf-Id: IyC3tKAgnNkU6X6zq4VbrlyCeILxfNvX6b6QYnchgOEkZemsvp9gpQ== DEBUG:urllib3.connectionpool:https://huggingface.co:443 
"HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1" 200 0 diffusers\pipelines\pipeline_utils.py:761: FutureWarning: `torch_dtype` is deprecated and will be removed in version 0.25.0. deprecate("torch_dtype", "0.25.0", "") DEBUG:enfugue:Instruction 1 beginning task “Executing Inference” DEBUG:enfugue:Calling pipeline with arguments {'latent_callback': 'function', 'width': '3840', 'height': '2160', 'strength': '1', 'tile': '(False, False)', 'num_inference_steps': '20', 'num_results_per_prompt': '1', 'tiling_stride': '0', 'guidance_scale': '6.5', 'prompts': '[Prompt]', 'progress_callback': 'function', 'latent_callback_steps': '5', 'latent_callback_type': 'pil'} DEBUG:enfugue:Calculated overall steps to be 21 - 0 image prompt embedding probe(s) + [1 chunk(s) * (0 encoding step(s) + (1 decoding step(s) * 1 frame(s)) + (1 temporal chunk(s) * 20 inference step(s))] INFO:enfugue:IP adapter None DEBUG:enfugue:Unloading pipeline for reason "invocation error" DEBUG:enfugue:Deleting pipeline. ERROR:enfugue:Received error builtins.RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' DEBUG:enfugue:Traceback (most recent call last): File "enfugue\diffusion\process.py", line 185, in run response["result"] = self.handle(instruction_id, instruction_action, instruction_payload) File "enfugue\diffusion\process.py", line 250, in handle return self.execute_diffusion_plan( File "enfugue\diffusion\process.py", line 284, in execute_diffusion_plan return plan.execute( File "enfugue\diffusion\invocation.py", line 1256, in execute images, nsfw = self.execute_inference( File "enfugue\diffusion\invocation.py", line 1439, in execute_inference result = pipeline( File "enfugue\diffusion\manager.py", line 4620, in __call__ result = pipe( # type: ignore[assignment] File "torch\utils\_contextlib.py", line 115, in decorate_context return func(*args, **kwargs) File "enfugue\diffusion\pipeline.py", line 3495, in __call__ these_prompt_embeds = self.encode_prompt( File "enfugue\diffusion\pipeline.py", line 1028, in encode_prompt prompt_embeds = text_encoder( File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "transformers\models\clip\modeling_clip.py", line 840, in forward return self.text_model( File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "transformers\models\clip\modeling_clip.py", line 745, in forward encoder_outputs = self.encoder( File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "transformers\models\clip\modeling_clip.py", line 656, in forward layer_outputs = encoder_layer( File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File "transformers\models\clip\modeling_clip.py", line 385, in forward hidden_states = self.layer_norm1(hidden_states) File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl return self._call_impl(*args, **kwargs) File "torch\nn\modules\module.py", line 1527, in _call_impl return forward_call(*args, **kwargs) File 
"torch\nn\modules\normalization.py", line 196, in forward return F.layer_norm( File "torch\nn\functional.py", line 2543, in layer_norm return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled) RuntimeError: "LayerNormKernelImpl" not implemented for 'Half' INFO:enfugue:Reached maximum idle time after 15.1 seconds, exiting engine process
2023-11-26 00:36:13 enfugue INFO    Reached maximum idle time after 15.1 seconds, exiting engine process
2023-11-26 00:34:54 enfugue DEBUG   Traceback (most recent call last):
  File "enfugue\diffusion\process.py", line 185, in run
    response["result"] = self.handle(instruction_id, instruction_action, instruction_payload)
  File "enfugue\diffusion\process.py", line 250, in handle
    return self.execute_diffusion_plan(
  File "enfugue\diffusion\process.py", line 284, in execute_diffusion_plan
    return plan.execute(
  File "enfugue\diffusion\invocation.py", line 1256, in execute
    images, nsfw = self.execute_inference(
  File "enfugue\diffusion\invocation.py", line 1439, in execute_inference
    result = pipeline(
  File "enfugue\diffusion\manager.py", line 4620, in __call__
    result = pipe( # type: ignore[assignment]
  File "torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "enfugue\diffusion\pipeline.py", line 3495, in __call__
    these_prompt_embeds = self.encode_prompt(
  File "enfugue\diffusion\pipeline.py", line 1028, in encode_prompt
    prompt_embeds = text_encoder(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 745, in forward
    encoder_outputs = self.encoder(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 656, in forward
    layer_outputs = encoder_layer(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 385, in forward
    hidden_states = self.layer_norm1(hidden_states)
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch\nn\modules\normalization.py", line 196, in forward
    return F.layer_norm(
  File "torch\nn\functional.py", line 2543, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
2023-11-26 00:34:54 enfugue ERROR   Received error builtins.RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
2023-11-26 00:34:54 enfugue DEBUG   Deleting pipeline.
2023-11-26 00:34:54 enfugue DEBUG   Unloading pipeline for reason "invocation error"
2023-11-26 00:34:53 enfugue INFO    IP adapter None
2023-11-26 00:34:53 enfugue DEBUG   Calculated overall steps to be 21 - 0 image prompt embedding probe(s) + [1 chunk(s) * (0 encoding step(s) + (1 decoding step(s) * 1 frame(s)) + (1 temporal chunk(s) * 20 inference step(s))]
2023-11-26 00:34:53 enfugue DEBUG   Calling pipeline with arguments {'latent_callback': 'function', 'width': '3840', 'height': '2160', 'strength': '1', 'tile': '(False, False)', 'num_inference_steps': '20', 'num_results_per_prompt': '1', 'tiling_stride': '0', 'guidance_scale': '6.5', 'prompts': '[Prompt]', 'progress_callback': 'function', 'latent_callback_steps': '5', 'latent_callback_type': 'pil'}
2023-11-26 00:34:53 enfugue DEBUG   Instruction 1 beginning task “Executing Inference”
2023-11-26 00:34:52 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1" 200 0
2023-11-26 00:34:52 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1" 200 0
2023-11-26 00:34:52 enfugue DEBUG   Instruction 1 beginning task “Loading tokenizer openai/clip-vit-large-patch14”
2023-11-26 00:34:52 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:34:52 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:34:52 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/preprocessor_config.json HTTP/1.1" 200 0
2023-11-26 00:34:52 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/preprocessor_config.json HTTP/1.1" 200 0
2023-11-26 00:34:51 enfugue DEBUG   Instruction 1 beginning task “Initializing feature extractor from repository CompVis/stable-diffusion-safety-checker”
2023-11-26 00:34:49 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:34:49 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:34:49 enfugue DEBUG   Instruction 1 beginning task “Loading safety checker CompVis/stable-diffusion-safety-checker”
2023-11-26 00:34:49 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:34:49 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1" 200 0
2023-11-26 00:34:48 urllib3.connectionpool  DEBUG   Starting new HTTPS connection (1): huggingface.co:443
2023-11-26 00:34:48 urllib3.connectionpool  DEBUG   Starting new HTTPS connection (1): huggingface.co:443
2023-11-26 00:34:48 enfugue DEBUG   Instruction 1 beginning task “Loading preview VAE madebyollin/taesd”
2023-11-26 00:34:48 enfugue DEBUG   Loading 248 keys into Autoencoder state dict (strict). Autoencoder scale is 0.18215
2023-11-26 00:34:48 enfugue DEBUG   Instruction 1 beginning task “Loading Default VAE”
2023-11-26 00:34:48 enfugue DEBUG   Loading 686 keys into UNet state dict (non-strict)
2023-11-26 00:34:45 enfugue DEBUG   Instruction 1 beginning task “Loading UNet”
2023-11-26 00:34:45 enfugue INFO    Checkpoint has 4 input channels
2023-11-26 00:34:45 enfugue INFO    No configuration file found for checkpoint v1-5-pruned, using Stable Diffusion 1.5
2023-11-26 00:34:40 enfugue DEBUG   Instruction 1 beginning task “Loading checkpoint file v1-5-pruned.ckpt”
2023-11-26 00:34:39 enfugue DEBUG   Initializing pipeline from checkpoint at C:\Users\ukdic\.cache\enfugue\checkpoint\v1-5-pruned.ckpt. Arguments are {'cache_dir': 'C:\\Users\\ukdic\\.cache\\enfugue\\cache', 'engine_size': '512', 'tiling_stride': '0', 'requires_safety_checker': 'True', 'torch_dtype': 'dtype', 'force_full_precision_vae': 'False', 'controlnets': {}, 'ip_adapter': 'IPAdapter', 'task_callback': 'function', 'offload_models': 'False', 'load_safety_checker': 'True'}
2023-11-26 00:34:39 enfugue DEBUG   Inferencing on cpu, must use dtype bfloat16
2023-11-26 00:34:39 enfugue INFO    Dimensions do not match size of engine and chunking is disabled, disabling TensorRT
2023-11-26 00:34:39 enfugue DEBUG   Instruction 1 beginning task “Preparing Inference Pipeline”
2023-11-26 00:34:37 enfugue WARNING Ignored keyword arguments: {'intermediate_steps', 'intermediate_dir'}
2023-11-26 00:34:37 enfugue DEBUG   Received invocation payload, constructing plan.
2023-11-25 23:58:00 enfugue INFO    stderr (may not be an error:) DEBUG:root:Initializing MLIR with module: _site_initialize_0 DEBUG:root:Registering dialects from initializer DEBUG:jax._src.path:etils.epath was not found. Using pathlib for file I/O. The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`. 0it [00:00, ?it/s] 0it [00:00, ?it/s] DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt” INFO:enfugue:Downloading file from https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt. Will write to C:\Users\ukdic\.cache\enfugue\checkpoint\v1-5-pruned.ckpt DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443 DEBUG:http.client:send: b'GET /runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt HTTP/1.1\r\nHost: huggingface.co\r\nUser-Agent: python-requests/2.31.0\r\nAccept-Encoding: gzip, deflate, br, zstd\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n' DEBUG:http.client:reply: 'HTTP/1.1 302 Found\r\n' DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8 DEBUG:http.client:header: Content-Length: 1125 DEBUG:http.client:header: Connection: keep-alive DEBUG:http.client:header: Date: Sat, 25 Nov 2023 22:54:10 GMT DEBUG:http.client:header: X-Powered-By: huggingface-moon DEBUG:http.client:header: X-Request-Id: Root=1-65627b12-57236349563c830541727113 DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co DEBUG:http.client:header: Vary: Origin, Accept DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range DEBUG:http.client:header: X-Repo-Commit: 1d0c4ebf6ff58a5caecab40fa1406526bca4b5b9 DEBUG:http.client:header: Accept-Ranges: bytes DEBUG:http.client:header: X-Linked-Size: 7703807346 DEBUG:http.client:header: X-Linked-ETag: "e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053" DEBUG:http.client:header: Location: https://cdn-lfs.huggingface.co/repos/6b/20/6b201da5f0f5c60524535ebb7deac2eef68605655d3bbacfee9cce0087f3b3f5/e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27v1-5-pruned.ckpt%3B+filename%3D%22v1-5-pruned.ckpt%22%3B&Expires=1701212050&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcwMTIxMjA1MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy82Yi8yMC82YjIwMWRhNWYwZjVjNjA1MjQ1MzVlYmI3ZGVhYzJlZWY2ODYwNTY1NWQzYmJhY2ZlZTljY2UwMDg3ZjNiM2Y1L2UxNDQxNTg5YTZmM2M1YTUzZjVmNTRkMDk3NWExOGE3ZmViN2NkZjBiMGRlZTI3NmRmYzMzMzFhZTM3NmEwNTM%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qIn1dfQ__&Signature=k-nx5MiqgV%7ELORccI19XofW426iFbvukQW4f-3UgB3sTXB4qlOkvSZ-fe7MIPbrRpxUayxJisASEQUIGMGT1WzOTCdyjgt3lPqIyzG82u4Xr0951uzStjhEVEqnoebf-gUSZKx08kPJQj8qRVt5tAyUGqvpvJmlNaqBG7Xf-uWWqT7GwwAmGjvpJwN0NTwiusl%7EQnNcCRJ4w6TxMDTOiLj2bTmuuoyLl%7E65kcjvSQkS0Jie16N8scbrdfNujlgBMCP6du-LQtqBpIKzb57w-0BEFLyXi5PGnn43c9B0OeyT2IXOjWaocBR0oJPVtXIOmqarWlLCjOppmB6KqtuvHgQ__&Key-Pair-Id=KVTP0A1DKRTAX DEBUG:http.client:header: X-Cache: Miss from cloudfront DEBUG:http.client:header: Via: 1.1 579fb5fb59c39183ae29e5b1ad2abbbe.cloudfront.net (CloudFront) DEBUG:http.client:header: X-Amz-Cf-Pop: FCO50-C2 DEBUG:http.client:header: X-Amz-Cf-Id: 
He3KNVePMWinhkBGkO8oMfK5k4nj_-Bzsdp7bLepRhqjXWbJ9CvK-Q==
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "GET /runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt HTTP/1.1" 302 1125
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): cdn-lfs.huggingface.co:443
DEBUG:http.client:send: b'GET /repos/6b/20/6b201da5f0f5c60524535ebb7deac2eef68605655d3bbacfee9cce0087f3b3f5/e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27v1-5-pruned.ckpt%3B+filename%3D%22v1-5-pruned.ckpt%22%3B&Expires=1701212050&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcwMTIxMjA1MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy82Yi8yMC82YjIwMWRhNWYwZjVjNjA1MjQ1MzVlYmI3ZGVhYzJlZWY2ODYwNTY1NWQzYmJhY2ZlZTljY2UwMDg3ZjNiM2Y1L2UxNDQxNTg5YTZmM2M1YTUzZjVmNTRkMDk3NWExOGE3ZmViN2NkZjBiMGRlZTI3NmRmYzMzMzFhZTM3NmEwNTM~cmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qIn1dfQ__&Signature=k-nx5MiqgV~LORccI19XofW426iFbvukQW4f-3UgB3sTXB4qlOkvSZ-fe7MIPbrRpxUayxJisASEQUIGMGT1WzOTCdyjgt3lPqIyzG82u4Xr0951uzStjhEVEqnoebf-gUSZKx08kPJQj8qRVt5tAyUGqvpvJmlNaqBG7Xf-uWWqT7GwwAmGjvpJwN0NTwiusl~QnNcCRJ4w6TxMDTOiLj2bTmuuoyLl~65kcjvSQkS0Jie16N8scbrdfNujlgBMCP6du-LQtqBpIKzb57w-0BEFLyXi5PGnn43c9B0OeyT2IXOjWaocBR0oJPVtXIOmqarWlLCjOppmB6KqtuvHgQ__&Key-Pair-Id=KVTP0A1DKRTAX HTTP/1.1\r\nHost: cdn-lfs.huggingface.co\r\nUser-Agent: python-requests/2.31.0\r\nAccept-Encoding: gzip, deflate, br, zstd\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'
DEBUG:http.client:header: Content-Type: binary/octet-stream
DEBUG:http.client:header: Content-Length: 7703807346
DEBUG:http.client:header: Connection: keep-alive
DEBUG:http.client:header: Last-Modified: Thu, 20 Oct 2022 12:04:48 GMT
DEBUG:http.client:header: x-amz-storage-class: INTELLIGENT_TIERING
DEBUG:http.client:header: x-amz-server-side-encryption: AES256
DEBUG:http.client:header: x-amz-version-id: BFBjjeCwpKzphP69jHCsu0tXSVXyZiD0
DEBUG:http.client:header: Content-Disposition: attachment; filename*=UTF-8''v1-5-pruned.ckpt; filename="v1-5-pruned.ckpt";
DEBUG:http.client:header: Accept-Ranges: bytes
DEBUG:http.client:header: Server: AmazonS3
DEBUG:http.client:header: Date: Sat, 25 Nov 2023 12:37:58 GMT
DEBUG:http.client:header: ETag: "37c7380e5122b52e5a82912076eff236-2"
DEBUG:http.client:header: X-Cache: Hit from cloudfront
DEBUG:http.client:header: Via: 1.1 4112796e19d8f7c1c420cb3d9f689a00.cloudfront.net (CloudFront)
DEBUG:http.client:header: X-Amz-Cf-Pop: FCO50-P2
DEBUG:http.client:header: X-Amz-Cf-Id: wmiplcefI7gwWKgcRyMcfP73C8Ntsq3Ptj4PmoB9hFTf-LpIM3dVqA==
DEBUG:http.client:header: Age: 36973
DEBUG:http.client:header: cache-control: public, max-age=604800, immutable, s-maxage=604800
DEBUG:http.client:header: Vary: Origin
DEBUG:urllib3.connectionpool:https://cdn-lfs.huggingface.co:443 "GET /repos/6b/20/6b201da5f0f5c60524535ebb7deac2eef68605655d3bbacfee9cce0087f3b3f5/e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27v1-5-pruned.ckpt%3B+filename%3D%22v1-5-pruned.ckpt%22%3B&Expires=1701212050&Policy=eyJTdGF0ZW1lbnQiOlt7IkNvbmRpdGlvbiI6eyJEYXRlTGVzc1RoYW4iOnsiQVdTOkVwb2NoVGltZSI6MTcwMTIxMjA1MH19LCJSZXNvdXJjZSI6Imh0dHBzOi8vY2RuLWxmcy5odWdnaW5nZmFjZS5jby9yZXBvcy82Yi8yMC82YjIwMWRhNWYwZjVjNjA1MjQ1MzVlYmI3ZGVhYzJlZWY2ODYwNTY1NWQzYmJhY2ZlZTljY2UwMDg3ZjNiM2Y1L2UxNDQxNTg5YTZmM2M1YTUzZjVmNTRkMDk3NWExOGE3ZmViN2NkZjBiMGRlZTI3NmRmYzMzMzFhZTM3NmEwNTM~cmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qIn1dfQ__&Signature=k-nx5MiqgV~LORccI19XofW426iFbvukQW4f-3UgB3sTXB4qlOkvSZ-fe7MIPbrRpxUayxJisASEQUIGMGT1WzOTCdyjgt3lPqIyzG82u4Xr0951uzStjhEVEqnoebf-gUSZKx08kPJQj8qRVt5tAyUGqvpvJmlNaqBG7Xf-uWWqT7GwwAmGjvpJwN0NTwiusl~QnNcCRJ4w6TxMDTOiLj2bTmuuoyLl~65kcjvSQkS0Jie16N8scbrdfNujlgBMCP6du-LQtqBpIKzb57w-0BEFLyXi5PGnn43c9B0OeyT2IXOjWaocBR0oJPVtXIOmqarWlLCjOppmB6KqtuvHgQ__&Key-Pair-Id=KVTP0A1DKRTAX HTTP/1.1" 200 7703807346
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 5.5% (425.20 MB/7.70 GB)”
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 12.7% (976.31 MB/7.70 GB)”
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 19.1% (1.47 GB/7.70 GB)”
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 26.0% (2.00 GB/7.70 GB)”
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 31.9% (2.46 GB/7.70 GB)”
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 39.1% (3.01 GB/7.70 GB)”
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 46.6% (3.59 GB/7.70 GB)”
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 53.2% (4.10 GB/7.70 GB)”
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 60.6% (4.67 GB/7.70 GB)”
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 66.7% (5.14 GB/7.70 GB)”
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 74.2% (5.72 GB/7.70 GB)”
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 81.2% (6.26 GB/7.70 GB)”
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 87.5% (6.74 GB/7.70 GB)”
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt: 94.5% (7.28 GB/7.70 GB)”
DEBUG:enfugue:Instruction 1 beginning task “Preparing Inference Pipeline”
DEBUG:enfugue:Inferencing on cpu, must use dtype bfloat16
DEBUG:enfugue:Initializing pipeline from checkpoint at C:\Users\ukdic\.cache\enfugue\checkpoint\v1-5-pruned.ckpt. Arguments are {'cache_dir': 'C:\\Users\\ukdic\\.cache\\enfugue\\cache', 'engine_size': '512', 'tiling_stride': '0', 'requires_safety_checker': 'True', 'torch_dtype': 'dtype', 'force_full_precision_vae': 'False', 'controlnets': {}, 'ip_adapter': 'IPAdapter', 'task_callback': 'function', 'offload_models': 'False', 'load_safety_checker': 'True'}
torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: ''If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
torch\_jit_internal.py:857: UserWarning: Unable to retrieve source for @torch.jit._overload function: .
warnings.warn(
torch\_jit_internal.py:857: UserWarning: Unable to retrieve source for @torch.jit._overload function: .
warnings.warn(
WARNING:tensorflow:AutoGraph is not available in this environment: functions lack code information. This is typical of some environments like the interactive Python shell. See https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/autograph/g3doc/reference/limitations.md#access-to-source-code for more information.
DEBUG:tensorflow:Falling back to TensorFlow client; we recommended you install the Cloud TPU client directly with pip install cloud-tpu-client.
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:h5py._conv:Creating converter from 7 to 5
DEBUG:h5py._conv:Creating converter from 5 to 7
DEBUG:enfugue:Instruction 1 beginning task “Loading checkpoint file v1-5-pruned.ckpt”
DEBUG:enfugue:Pipeline still initializing. Please wait.
DEBUG:matplotlib:CACHEDIR=C:\Users\ukdic\AppData\Local\Temp\tmpc5agm6d5
DEBUG:matplotlib.font_manager:font search path [WindowsPath('C:/Users/ukdic/Downloads/enfugue-server-0.3.1-win-cuda-x86_64.zip/enfugue-server/matplotlib/mpl-data/fonts/ttf'), WindowsPath('C:/Users/ukdic/Downloads/enfugue-server-0.3.1-win-cuda-x86_64.zip/enfugue-server/matplotlib/mpl-data/fonts/afm'), WindowsPath('C:/Users/ukdic/Downloads/enfugue-server-0.3.1-win-cuda-x86_64.zip/enfugue-server/matplotlib/mpl-data/fonts/pdfcorefonts')]
INFO:matplotlib.font_manager:generated new fontManager
INFO:enfugue:No configuration file found for checkpoint v1-5-pruned, using Stable Diffusion 1.5
INFO:enfugue:Downloading file from https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml. Will write to C:\Users\ukdic\.cache\enfugue\cache\v1-inference.yaml
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): raw.githubusercontent.com:443
DEBUG:http.client:send: b'GET /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml HTTP/1.1\r\nHost: raw.githubusercontent.com\r\nUser-Agent: python-requests/2.31.0\r\nAccept-Encoding: gzip, deflate, br, zstd\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'
DEBUG:http.client:header: Connection: keep-alive
DEBUG:http.client:header: Content-Length: 775
DEBUG:http.client:header: Cache-Control: max-age=300
DEBUG:http.client:header: Content-Security-Policy: default-src 'none'; style-src 'unsafe-inline'; sandbox
DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8
DEBUG:http.client:header: ETag: W/"ee35c7ff75c5a3446ebe091268fc05096716edde5cbb0510410c11d006b8b99c"
DEBUG:http.client:header: Strict-Transport-Security: max-age=31536000
DEBUG:http.client:header: X-Content-Type-Options: nosniff
DEBUG:http.client:header: X-Frame-Options: deny
DEBUG:http.client:header: X-XSS-Protection: 1; mode=block
DEBUG:http.client:header: X-GitHub-Request-Id: A342:9A7F:17492AE:185C4CA:65627A76
DEBUG:http.client:header: Content-Encoding: gzip
DEBUG:http.client:header: Accept-Ranges: bytes
DEBUG:http.client:header: Date: Sat, 25 Nov 2023 22:55:28 GMT
DEBUG:http.client:header: Via: 1.1 varnish
DEBUG:http.client:header: X-Served-By: cache-lin2290024-LIN
DEBUG:http.client:header: X-Cache: HIT
DEBUG:http.client:header: X-Cache-Hits: 1
DEBUG:http.client:header: X-Timer: S1700952929.835787,VS0,VE1
DEBUG:http.client:header: Vary: Authorization,Accept-Encoding,Origin
DEBUG:http.client:header: Access-Control-Allow-Origin: *
DEBUG:http.client:header: Cross-Origin-Resource-Policy: cross-origin
DEBUG:http.client:header: X-Fastly-Request-ID: ea4a6d08848eca0e618126e0ecd997afa2310fad
DEBUG:http.client:header: Expires: Sat, 25 Nov 2023 23:00:28 GMT
DEBUG:http.client:header: Source-Age: 234
DEBUG:urllib3.connectionpool:https://raw.githubusercontent.com:443 "GET /CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml HTTP/1.1" 200 775
INFO:enfugue:Checkpoint has 4 input channels
DEBUG:enfugue:Instruction 1 beginning task “Loading UNet”
In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA weights (usually better for inference), please make sure to add the `--extract_ema` flag.
DEBUG:enfugue:Loading 686 keys into UNet state dict (non-strict)
DEBUG:enfugue:Instruction 1 beginning task “Loading Default VAE”
DEBUG:enfugue:Loading 248 keys into Autoencoder state dict (strict). Autoencoder scale is 0.18215
DEBUG:enfugue:Instruction 1 beginning task “Loading preview VAE madebyollin/taesd”
DEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): huggingface.co:443
DEBUG:http.client:send: b'HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.4; python/3.10.11; torch/2.1.1+cu118\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\nX-Amzn-Trace-Id: cb83618f-fd18-4707-9d8d-8cb8967b484d\r\n\r\n'
DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'
DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8
DEBUG:http.client:header: Content-Length: 551
DEBUG:http.client:header: Connection: keep-alive
DEBUG:http.client:header: Date: Sat, 25 Nov 2023 22:55:33 GMT
DEBUG:http.client:header: X-Powered-By: huggingface-moon
DEBUG:http.client:header: X-Request-Id: Root=1-65627b65-319f61d964f2358122907150;cb83618f-fd18-4707-9d8d-8cb8967b484d
DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co
DEBUG:http.client:header: Vary: Origin
DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
DEBUG:http.client:header: X-Robots-Tag: none
DEBUG:http.client:header: X-Repo-Commit: 73b7d1c8836d16997316fb94c4b36a9095442c1d
DEBUG:http.client:header: Accept-Ranges: bytes
DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox
DEBUG:http.client:header: ETag: "62f01c3eb447730d88eb61aca75331a564301e5a"
DEBUG:http.client:header: X-Cache: Miss from cloudfront
DEBUG:http.client:header: Via: 1.1 bd0862f3780a5b6b00eb4b20a831751c.cloudfront.net (CloudFront)
DEBUG:http.client:header: X-Amz-Cf-Pop: FCO50-C2
DEBUG:http.client:header: X-Amz-Cf-Id: OsWMfYY09XXeD4YtVes0uJi8OWCeptmzZWBYq5W0dbxIUv8ihFhMHA==
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1" 200 0
DEBUG:filelock:Attempting to acquire lock 3019213851136 on C:\Users\ukdic\.cache\enfugue\cache\.locks\models--madebyollin--taesd\62f01c3eb447730d88eb61aca75331a564301e5a.lock
DEBUG:filelock:Lock 3019213851136 acquired on C:\Users\ukdic\.cache\enfugue\cache\.locks\models--madebyollin--taesd\62f01c3eb447730d88eb61aca75331a564301e5a.lock
DEBUG:enfugue:Instruction 1 beginning task “Downloading https://huggingface.co/madebyollin/taesd/resolve/main/config.json”
DEBUG:http.client:send: b'GET /madebyollin/taesd/resolve/main/config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.19.4; python/3.10.11; torch/2.1.1+cu118; diffusers/0.24.0.dev0; session_id/57310c50b2ab4df9b8ff81657a422b6f; jax/0.3.25; flax/0.5.3; file_type/config; framework/pytorch\r\nAccept-Encoding: gzip, deflate, br, zstd\r\nAccept: */*\r\nConnection: keep-alive\r\nX-Amzn-Trace-Id: 7279866a-166a-4828-a447-573a5cbd8401\r\n\r\n'
DEBUG:http.client:reply: 'HTTP/1.1 200 OK\r\n'
DEBUG:http.client:header: Content-Type: text/plain; charset=utf-8
DEBUG:http.client:header: Content-Length: 551
DEBUG:http.client:header: Connection: keep-alive
DEBUG:http.client:header: Date: Sat, 25 Nov 2023 22:55:33 GMT
DEBUG:http.client:header: X-Powered-By: huggingface-moon
DEBUG:http.client:header: X-Request-Id: Root=1-65627b65-7fdd33141aa815e1224c772b;7279866a-166a-4828-a447-573a5cbd8401
DEBUG:http.client:header: Access-Control-Allow-Origin: https://huggingface.co
DEBUG:http.client:header: Vary: Origin
DEBUG:http.client:header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
DEBUG:http.client:header: X-Robots-Tag: none
DEBUG:http.client:header: X-Repo-Commit: 73b7d1c8836d16997316fb94c4b36a9095442c1d
DEBUG:http.client:header: Accept-Ranges: bytes
DEBUG:http.client:header: Content-Disposition: inline; filename*=UTF-8''config.json; filename="config.json";
DEBUG:http.client:header: Content-Security-Policy: default-src none; sandbox
DEBUG:http.client:header: ETag: "62f01c3eb447730d88eb61aca75331a564301e5a"
DEBUG:http.client:header: X-Cache: Miss from cloudfront
DEBUG:http.client:header: Via: 1.1 bd0862f3780a5b6b00eb4b20a831751c.cloudfront.net (CloudFront)
DEBUG:http.client:header: X-Amz-Cf-Pop: FCO50-C2
DEBUG:http.client:header: X-Amz-Cf-Id: ojGDg00jAq5wf6SMXl2NOHqwUrbQtl-waD8ReG_ESDZX6T9KwRHHfw==
DEBUG:urllib3.connectionpool:https://huggingface.co:443 "GET /madebyollin/taesd/resolve/main/config.json HTTP/1.1" 200 551
config.json: 0%| | 0.00/551 [00:00
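A side note on the "non-EMA weights" message in the UNet-loading step above: it comes from diffusers' legacy checkpoint conversion, and the EMA weights can be requested explicitly when converting by hand. Below is a minimal sketch assuming the `download_from_original_stable_diffusion_ckpt` helper from diffusers ~0.24 (the version visible in the log's user-agent); it illustrates the `--extract_ema` hint and is not enfugue's own loading code.

```python
# Minimal sketch, assuming diffusers ~0.24 as reported in the log above.
# Converts a legacy .ckpt the way the step above does, but keeps the EMA
# weights that the "non-EMA" warning says are otherwise discarded.
from diffusers.pipelines.stable_diffusion.convert_from_ckpt import (
    download_from_original_stable_diffusion_ckpt,
)

pipe = download_from_original_stable_diffusion_ckpt(
    "v1-5-pruned.ckpt",                        # the checkpoint downloaded above
    original_config_file="v1-inference.yaml",  # the config fetched above
    extract_ema=True,                          # EMA weights, usually better for inference
)
```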
2023-11-25 23:58:00 enfugue INFO    Reached maximum idle time after 15.0 seconds, exiting engine process
2023-11-25 23:57:08 enfugue DEBUG   Traceback (most recent call last):
  File "enfugue\diffusion\process.py", line 185, in run
    response["result"] = self.handle(instruction_id, instruction_action, instruction_payload)
  File "enfugue\diffusion\process.py", line 250, in handle
    return self.execute_diffusion_plan(
  File "enfugue\diffusion\process.py", line 284, in execute_diffusion_plan
    return plan.execute(
  File "enfugue\diffusion\invocation.py", line 1256, in execute
    images, nsfw = self.execute_inference(
  File "enfugue\diffusion\invocation.py", line 1439, in execute_inference
    result = pipeline(
  File "enfugue\diffusion\manager.py", line 4620, in __call__
    result = pipe(  # type: ignore[assignment]
  File "torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "enfugue\diffusion\pipeline.py", line 3495, in __call__
    these_prompt_embeds = self.encode_prompt(
  File "enfugue\diffusion\pipeline.py", line 1028, in encode_prompt
    prompt_embeds = text_encoder(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 840, in forward
    return self.text_model(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 745, in forward
    encoder_outputs = self.encoder(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 656, in forward
    layer_outputs = encoder_layer(
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "transformers\models\clip\modeling_clip.py", line 385, in forward
    hidden_states = self.layer_norm1(hidden_states)
  File "torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "torch\nn\modules\module.py", line 1527, in _call_impl
    return forward_call(*args, **kwargs)
  File "torch\nn\modules\normalization.py", line 196, in forward
    return F.layer_norm(
  File "torch\nn\functional.py", line 2543, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
2023-11-25 23:57:08 enfugue ERROR   Received error builtins.RuntimeError: "LayerNormKernelImpl" not implemented for 'Half'
2023-11-25 23:57:08 enfugue DEBUG   Deleting pipeline.
2023-11-25 23:57:08 enfugue DEBUG   Unloading pipeline for reason "invocation error"
2023-11-25 23:57:08 enfugue INFO    IP adapter None
2023-11-25 23:57:08 enfugue DEBUG   Calculated overall steps to be 41 - 0 image prompt embedding probe(s) + [1 chunk(s) * (0 encoding step(s) + (1 decoding step(s) * 1 frame(s)) + (1 temporal chunk(s) * 40 inference step(s))]
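As a quick sanity check, the step arithmetic in the line above works out to 0 probes + 1 chunk × (0 encoding + 1 decoding × 1 frame + 1 temporal chunk × 40 inference steps) = 41. A one-liner check (the variable names here are descriptive, not enfugue's internals):

```python
# Reproduces the overall-step count from the log line above.
probes, chunks = 0, 1
encoding, decoding, frames = 0, 1, 1
temporal_chunks, inference_steps = 1, 40
overall = probes + chunks * (encoding + decoding * frames + temporal_chunks * inference_steps)
print(overall)  # 41
```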
2023-11-25 23:57:08 enfugue DEBUG   Calling pipeline with arguments {'latent_callback': 'function', 'width': '3840', 'height': '2160', 'tile': '(False, False)', 'num_results_per_prompt': '1', 'tiling_stride': '0', 'prompts': '[Prompt]', 'progress_callback': 'function', 'latent_callback_steps': '5', 'latent_callback_type': 'pil'}
2023-11-25 23:57:08 enfugue DEBUG   Instruction 9 beginning task “Executing Inference”
2023-11-25 23:57:07 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1" 200 0
2023-11-25 23:57:07 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1" 200 0
2023-11-25 23:57:07 enfugue DEBUG   Instruction 9 beginning task “Loading tokenizer openai/clip-vit-large-patch14”
2023-11-25 23:57:07 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1" 200 0
2023-11-25 23:57:07 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1" 200 0
2023-11-25 23:57:07 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/preprocessor_config.json HTTP/1.1" 200 0
2023-11-25 23:57:07 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/preprocessor_config.json HTTP/1.1" 200 0
2023-11-25 23:57:07 enfugue DEBUG   Instruction 9 beginning task “Initializing feature extractor from repository CompVis/stable-diffusion-safety-checker”
2023-11-25 23:57:06 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/config.json HTTP/1.1" 200 0
2023-11-25 23:57:06 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /CompVis/stable-diffusion-safety-checker/resolve/main/config.json HTTP/1.1" 200 0
2023-11-25 23:57:05 enfugue DEBUG   Instruction 9 beginning task “Loading safety checker CompVis/stable-diffusion-safety-checker”
2023-11-25 23:57:05 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1" 200 0
2023-11-25 23:57:05 urllib3.connectionpool  DEBUG   https://huggingface.co:443 "HEAD /madebyollin/taesd/resolve/main/config.json HTTP/1.1" 200 0
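For reference, the failure at the bottom of the traceback above is reproducible outside enfugue: on this build (torch 2.1.1), `torch.nn.functional.layer_norm` has no float16 ("Half") kernel on CPU, while float32 and bfloat16 both work. That is consistent with the "Inferencing on cpu, must use dtype bfloat16" log line: the error suggests the CLIP text encoder was still carrying half-precision weights when the pipeline ran on CPU. A minimal sketch in plain PyTorch (not enfugue code) showing the same error and the dtype workaround:

```python
import torch
import torch.nn.functional as F

# Shape mirrors CLIP text-encoder hidden states (batch, tokens, width);
# any shape reproduces the kernel gap.
x = torch.randn(1, 77, 768, dtype=torch.float16, device="cpu")

try:
    F.layer_norm(x, (768,))
except RuntimeError as err:
    # On torch 2.1 CPU this prints the exact error from the log:
    # "LayerNormKernelImpl" not implemented for 'Half'
    print(err)

# Workaround: keep CPU tensors/modules out of float16; float32 and
# bfloat16 both have CPU LayerNorm kernels.
for dtype in (torch.float32, torch.bfloat16):
    out = F.layer_norm(x.to(dtype), (768,))
    print(dtype, tuple(out.shape))
```

If the server keeps ending up in half precision despite the "Full" setting, casting the offending submodule up front (e.g. `text_encoder.to(torch.float32)`) is the usual workaround; the names in that call are generic PyTorch, not a documented enfugue API.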