painebenjamin / app.enfugue.ai

ENFUGUE is an open-source web app for making studio-grade images and video using generative AI.
GNU General Public License v3.0

ROCm Support Needed #28

Open lennartbrandin opened 1 year ago

lennartbrandin commented 1 year ago

Issue

On clicking Generate, the request loads for a while and then times out.

Expected behaviour

It generates a picture from the prompt.

Details

Enfugue v0.1.2 (Linux), installed using the archive method.

Engine Logs

$ tail -f ../.cache/enfugue-engine.log 
2023-06-30 12:46:43,276 [enfugue] DEBUG (process.py:315) Received instruction 1, action plan
2023-06-30 12:46:44,440 [urllib3.connectionpool] DEBUG (connectionpool.py:1003) Starting new HTTPS connection (1): huggingface.co:443
2023-06-30 12:46:44,499 [http.client] DEBUG (log.py:118) send: b'HEAD /runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt HTTP/1.1\r\nHost: huggingface.co\r\nUser-Agent: python-requests/2.30.0\r\nAccept-Encoding: gzip, deflate, br\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 302 Found\r\n'
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: Content-Length: 1129
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:46:44 GMT
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:46:44,823 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb294-2707d14c7d5d6c591b57f687
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Vary: Origin, Accept
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: aa9ba505e1973ae5cd05f5aedd345178f52f8e6a
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Linked-Size: 7703807346
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Linked-ETag: "e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053"
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Location: https://cdn-lfs.huggingface.co/repos/6b/20/6b201da5f0f5c60524535ebb7deac2eef68605655d3bbacfee9cce0087f3b3f5/e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27v1-5-pruned.ckpt%3B+filename%3D%22v1-5-pruned.ckpt%22%3B&Expires=1688381205&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzZiLzIwLzZiMjAxZGE1ZjBmNWM2MDUyNDUzNWViYjdkZWFjMmVlZjY4NjA1NjU1ZDNiYmFjZmVlOWNjZTAwODdmM2IzZjUvZTE0NDE1ODlhNmYzYzVhNTNmNWY1NGQwOTc1YTE4YTdmZWI3Y2RmMGIwZGVlMjc2ZGZjMzMzMWFlMzc2YTA1Mz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2ODgzODEyMDV9fX1dfQ__&Signature=R700%7Ec4gBFjk6HhFAIxjwIUkFO0iVdNcwJH3EJAcYaNFW2f4VAGkOST-3Em2fAjd41hd1zz3PLI4L%7EDaXhcqQJCY15xQdkCouVGz7SsEmovXRJv9a-4Xclc58H2S1jkvd8IZiR69dX0MnRaJcuYmSFDrGa0mLSot8ESy1skNNq6DdZo295aMRvK134gdYNaLXxJEjv%7E1GuTo8ABg2jUPn73sWS9pkNHUqDOGcuqfPtbfOaJw3bmQUnYMa6jiWOPf1Tk95JZNpQwrs3NK06YPTMYCF5Syfp5YSFI07BpmIv%7EGqq-L0XHsZXpTLF2JYt9A%7E%7EIPlpWwuIMAjOfg9LnPdw__&Key-Pair-Id=KVTP0A1DKRTAX
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Cache: Miss from cloudfront
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: Via: 1.1 cfd67353680316557643ad146b46d046.cloudfront.net (CloudFront)
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:46:44,824 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: gNAd_wCBeGd4og3NlQaxdH9niQilqMdc6GeujUlXfJw2v097KyCO1A==
2023-06-30 12:46:44,824 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /runwayml/stable-diffusion-v1-5/resolve/main/v1-5-pruned.ckpt HTTP/1.1" 302 0
2023-06-30 12:46:44,826 [urllib3.connectionpool] DEBUG (connectionpool.py:1003) Starting new HTTPS connection (1): cdn-lfs.huggingface.co:443
2023-06-30 12:46:44,869 [http.client] DEBUG (log.py:118) send: b'HEAD /repos/6b/20/6b201da5f0f5c60524535ebb7deac2eef68605655d3bbacfee9cce0087f3b3f5/e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27v1-5-pruned.ckpt%3B+filename%3D%22v1-5-pruned.ckpt%22%3B&Expires=1688381205&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzZiLzIwLzZiMjAxZGE1ZjBmNWM2MDUyNDUzNWViYjdkZWFjMmVlZjY4NjA1NjU1ZDNiYmFjZmVlOWNjZTAwODdmM2IzZjUvZTE0NDE1ODlhNmYzYzVhNTNmNWY1NGQwOTc1YTE4YTdmZWI3Y2RmMGIwZGVlMjc2ZGZjMzMzMWFlMzc2YTA1Mz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2ODgzODEyMDV9fX1dfQ__&Signature=R700~c4gBFjk6HhFAIxjwIUkFO0iVdNcwJH3EJAcYaNFW2f4VAGkOST-3Em2fAjd41hd1zz3PLI4L~DaXhcqQJCY15xQdkCouVGz7SsEmovXRJv9a-4Xclc58H2S1jkvd8IZiR69dX0MnRaJcuYmSFDrGa0mLSot8ESy1skNNq6DdZo295aMRvK134gdYNaLXxJEjv~1GuTo8ABg2jUPn73sWS9pkNHUqDOGcuqfPtbfOaJw3bmQUnYMa6jiWOPf1Tk95JZNpQwrs3NK06YPTMYCF5Syfp5YSFI07BpmIv~Gqq-L0XHsZXpTLF2JYt9A~~IPlpWwuIMAjOfg9LnPdw__&Key-Pair-Id=KVTP0A1DKRTAX HTTP/1.1\r\nHost: cdn-lfs.huggingface.co\r\nUser-Agent: python-requests/2.30.0\r\nAccept-Encoding: gzip, deflate, br\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:46:44,883 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 200 OK\r\n'
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Content-Type: binary/octet-stream
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Content-Length: 7703807346
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Last-Modified: Thu, 20 Oct 2022 12:04:48 GMT
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: x-amz-storage-class: INTELLIGENT_TIERING
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: x-amz-server-side-encryption: AES256
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: x-amz-version-id: BFBjjeCwpKzphP69jHCsu0tXSVXyZiD0
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Content-Disposition: attachment; filename*=UTF-8''v1-5-pruned.ckpt; filename="v1-5-pruned.ckpt";
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Server: AmazonS3
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Date: Thu, 29 Jun 2023 15:06:04 GMT
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: ETag: "37c7380e5122b52e5a82912076eff236-2"
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: X-Cache: Hit from cloudfront
2023-06-30 12:46:44,884 [http.client] DEBUG (log.py:118) header: Via: 1.1 1599881f4fb8a11206232254d6f4ccb6.cloudfront.net (CloudFront)
2023-06-30 12:46:44,885 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-P1
2023-06-30 12:46:44,885 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: 6UWW96thEzzke2v2ep8WHSu0eut4ecIL2Y5CCW0lKKNkSX63ysHuJw==
2023-06-30 12:46:44,885 [http.client] DEBUG (log.py:118) header: Age: 70841
2023-06-30 12:46:44,885 [http.client] DEBUG (log.py:118) header: Vary: Origin
2023-06-30 12:46:44,885 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://cdn-lfs.huggingface.co:443 "HEAD /repos/6b/20/6b201da5f0f5c60524535ebb7deac2eef68605655d3bbacfee9cce0087f3b3f5/e1441589a6f3c5a53f5f54d0975a18a7feb7cdf0b0dee276dfc3331ae376a053?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27v1-5-pruned.ckpt%3B+filename%3D%22v1-5-pruned.ckpt%22%3B&Expires=1688381205&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL3JlcG9zLzZiLzIwLzZiMjAxZGE1ZjBmNWM2MDUyNDUzNWViYjdkZWFjMmVlZjY4NjA1NjU1ZDNiYmFjZmVlOWNjZTAwODdmM2IzZjUvZTE0NDE1ODlhNmYzYzVhNTNmNWY1NGQwOTc1YTE4YTdmZWI3Y2RmMGIwZGVlMjc2ZGZjMzMzMWFlMzc2YTA1Mz9yZXNwb25zZS1jb250ZW50LWRpc3Bvc2l0aW9uPSoiLCJDb25kaXRpb24iOnsiRGF0ZUxlc3NUaGFuIjp7IkFXUzpFcG9jaFRpbWUiOjE2ODgzODEyMDV9fX1dfQ__&Signature=R700~c4gBFjk6HhFAIxjwIUkFO0iVdNcwJH3EJAcYaNFW2f4VAGkOST-3Em2fAjd41hd1zz3PLI4L~DaXhcqQJCY15xQdkCouVGz7SsEmovXRJv9a-4Xclc58H2S1jkvd8IZiR69dX0MnRaJcuYmSFDrGa0mLSot8ESy1skNNq6DdZo295aMRvK134gdYNaLXxJEjv~1GuTo8ABg2jUPn73sWS9pkNHUqDOGcuqfPtbfOaJw3bmQUnYMa6jiWOPf1Tk95JZNpQwrs3NK06YPTMYCF5Syfp5YSFI07BpmIv~Gqq-L0XHsZXpTLF2JYt9A~~IPlpWwuIMAjOfg9LnPdw__&Key-Pair-Id=KVTP0A1DKRTAX HTTP/1.1" 200 0
2023-06-30 12:46:44,944 [enfugue] DEBUG (manager.py:1145) Calling pipeline with arguments {'latent_callback': <function DiffusionPlan.execute_nodes.<locals>.node_image_callback at 0x7f960477d2d0>, 'width': 512, 'height': 512, 'chunking_size': 64, 'chunking_blur': 64, 'num_images_per_prompt': 1, 'progress_callback': <function DiffusionEngineProcess.create_progress_callback.<locals>.callback at 0x7f9615f53a30>, 'latent_callback_steps': 10, 'latent_callback_type': 'pil', 'prompt': 'Cat', 'negative_prompt': '', 'image': None, 'control_image': None, 'conditioning_scale': 1.0, 'strength': 0.8, 'num_inference_steps': 50, 'guidance_scale': 7.5}
2023-06-30 12:46:44,944 [enfugue] DEBUG (manager.py:711) Inferencing on CPU, using BFloat
2023-06-30 12:46:45,211 [enfugue] DEBUG (manager.py:970) Initializing pipeline from checkpoint at /home/lennart/.cache/enfugue/checkpoint/v1-5-pruned.ckpt. Arguments are {'cache_dir': '/home/lennart/.cache/enfugue/cache', 'engine_size': 512, 'chunking_size': 64, 'requires_safety_checker': False, 'controlnet': None, 'torch_dtype': torch.float32, 'load_safety_checker': False}
2023-06-30 12:47:01,341 [torch.distributed.nn.jit.instantiator] INFO (instantiator.py:21) Created a temporary directory at /tmp/tmpy4z3612b
2023-06-30 12:47:01,343 [torch.distributed.nn.jit.instantiator] INFO (instantiator.py:76) Writing /tmp/tmpy4z3612b/_remote_module_non_scriptable.py
2023-06-30 12:47:14,043 [urllib3.connectionpool] DEBUG (connectionpool.py:1003) Starting new HTTPS connection (1): huggingface.co:443
2023-06-30 12:47:14,095 [http.client] DEBUG (log.py:118) send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.14.1; python/3.10.9; torch/1.13.1+cu117\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:47:14,411 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 200 OK\r\n'
2023-06-30 12:47:14,412 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:47:14,412 [http.client] DEBUG (log.py:118) header: Content-Length: 4519
2023-06-30 12:47:14,412 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:47:14 GMT
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb2b2-674a2f6c559f85cc44a0c399
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: Vary: Origin
2023-06-30 12:47:14,413 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: 8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: Content-Security-Policy: default-src none; sandbox
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: ETag: "2c19f6666e0e163c7954df66cb901353fcad088e"
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: X-Cache: Miss from cloudfront
2023-06-30 12:47:14,414 [http.client] DEBUG (log.py:118) header: Via: 1.1 376388af58845ad0897ba599cce4d92e.cloudfront.net (CloudFront)
2023-06-30 12:47:14,415 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:47:14,415 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: Jjx2SGGnraFH1M6xA_pcw_D-vH8ekygTlTRogAFSsc2ymNmcAduLNQ==
2023-06-30 12:47:14,415 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/config.json HTTP/1.1" 200 0
2023-06-30 12:47:14,423 [http.client] DEBUG (log.py:118) send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/model.safetensors HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.14.1; python/3.10.9; torch/1.13.1+cu117\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:47:14,747 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 404 Not Found\r\n'
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: Content-Length: 15
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:47:14 GMT
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:47:14,748 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb2b2-6a0c791648a636810c0340b2
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: Vary: Origin
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: 8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:47:14,749 [http.client] DEBUG (log.py:118) header: X-Error-Code: EntryNotFound
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: X-Error-Message: Entry not found
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: ETag: W/"f-mY2VvLxuxB7KhsoOdQTlMTccuAQ"
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: X-Cache: Error from cloudfront
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: Via: 1.1 376388af58845ad0897ba599cce4d92e.cloudfront.net (CloudFront)
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:47:14,750 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: EFN_qP9Z1N6gqlxzSjBDuBi4KpxAoGAGDbMa_qCBHn67Ut4wfNwaGA==
2023-06-30 12:47:14,751 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/model.safetensors HTTP/1.1" 404 0
2023-06-30 12:47:14,757 [http.client] DEBUG (log.py:118) send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/model.safetensors.index.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.14.1; python/3.10.9; torch/1.13.1+cu117\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:47:15,078 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 404 Not Found\r\n'
2023-06-30 12:47:15,078 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: Content-Length: 15
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:47:15 GMT
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb2b3-6ac2ed6609ea91d41dd708d1
2023-06-30 12:47:15,079 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: Vary: Origin
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: 8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: X-Error-Code: EntryNotFound
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: X-Error-Message: Entry not found
2023-06-30 12:47:15,080 [http.client] DEBUG (log.py:118) header: ETag: W/"f-mY2VvLxuxB7KhsoOdQTlMTccuAQ"
2023-06-30 12:47:15,081 [http.client] DEBUG (log.py:118) header: X-Cache: Error from cloudfront
2023-06-30 12:47:15,081 [http.client] DEBUG (log.py:118) header: Via: 1.1 376388af58845ad0897ba599cce4d92e.cloudfront.net (CloudFront)
2023-06-30 12:47:15,081 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:47:15,081 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: XIGtN45F1Eb-AVrqr8YKnZ0c-K-ZVVe8VtQgeXrDOEWT_Ns3Sa6zoQ==
2023-06-30 12:47:15,081 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/model.safetensors.index.json HTTP/1.1" 404 0
2023-06-30 12:47:15,084 [http.client] DEBUG (log.py:118) send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.14.1; python/3.10.9; torch/1.13.1+cu117\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:47:15,423 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 302 Found\r\n'
2023-06-30 12:47:15,424 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:47:15,424 [http.client] DEBUG (log.py:118) header: Content-Length: 1103
2023-06-30 12:47:15,424 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:47:15,425 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:47:15 GMT
2023-06-30 12:47:15,425 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:47:15,425 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb2b3-68fcdb9846f2b3bb020e5c7d
2023-06-30 12:47:15,425 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:47:15,425 [http.client] DEBUG (log.py:118) header: Vary: Origin, Accept
2023-06-30 12:47:15,426 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:47:15,426 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: 8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
2023-06-30 12:47:15,426 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:47:15,426 [http.client] DEBUG (log.py:118) header: X-Linked-Size: 1710671599
2023-06-30 12:47:15,426 [http.client] DEBUG (log.py:118) header: X-Linked-ETag: "f1a17cdbe0f36fec524f5cafb1c261ea3bbbc13e346e0f74fc9eb0460dedd0d3"
2023-06-30 12:47:15,427 [http.client] DEBUG (log.py:118) header: Location: https://cdn-lfs.huggingface.co/openai/clip-vit-large-patch14/f1a17cdbe0f36fec524f5cafb1c261ea3bbbc13e346e0f74fc9eb0460dedd0d3?response-content-disposition=attachment%3B+filename*%3DUTF-8%27%27pytorch_model.bin%3B+filename%3D%22pytorch_model.bin%22%3B&response-content-type=application%2Foctet-stream&Expires=1688380725&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9jZG4tbGZzLmh1Z2dpbmdmYWNlLmNvL29wZW5haS9jbGlwLXZpdC1sYXJnZS1wYXRjaDE0L2YxYTE3Y2RiZTBmMzZmZWM1MjRmNWNhZmIxYzI2MWVhM2JiYmMxM2UzNDZlMGY3NGZjOWViMDQ2MGRlZGQwZDM%7EcmVzcG9uc2UtY29udGVudC1kaXNwb3NpdGlvbj0qJnJlc3BvbnNlLWNvbnRlbnQtdHlwZT0qIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNjg4MzgwNzI1fX19XX0_&Signature=fQno3FVgyh7TBdorASWaHxHtW9wUCtxJIXVmtUqU%7E0Nlg1iNIPW9yfYWLj72m8hFgsxKSx9dO5xYCZd2gm9CeHbfYZVHh%7EpiSTaEu%7EJ2Vi55y47q86Vk5Nw4VF08q7lRZrixrKfht5o%7Eo14njjfYWBUMoExE482kW36fnoyCM%7E2-yu18kQg9injli8DWi8Svlo5jWCIofrwVDrzuKeBHDkFaWR1mshP6seFm2le%7Ezb-aNKBaijnanEglAsc6kzuLZDjAKD7tpSS6y5itM5PLw11lJTIbZMPuwWh3SMGX4SlvDLPJql0LuSaDY2B97Wo3Etihqdn1fEr8ATOUfNCkZA__&Key-Pair-Id=KVTP0A1DKRTAX
2023-06-30 12:47:15,427 [http.client] DEBUG (log.py:118) header: X-Cache: Miss from cloudfront
2023-06-30 12:47:15,427 [http.client] DEBUG (log.py:118) header: Via: 1.1 376388af58845ad0897ba599cce4d92e.cloudfront.net (CloudFront)
2023-06-30 12:47:15,427 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:47:15,427 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: w4ZGwf6qugQqqYvyuCkdRdP1cUVv9JdMZdBDiQNKy7dEZo0RuvKuCQ==
2023-06-30 12:47:15,427 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/pytorch_model.bin HTTP/1.1" 302 0
2023-06-30 12:47:22,091 [http.client] DEBUG (log.py:118) send: b'HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1\r\nHost: huggingface.co\r\nuser-agent: unknown/None; hf_hub/0.14.1; python/3.10.9; torch/1.13.1+cu117\r\nAccept-Encoding: identity\r\nAccept: */*\r\nConnection: keep-alive\r\n\r\n'
2023-06-30 12:47:22,206 [http.client] DEBUG (log.py:118) reply: 'HTTP/1.1 200 OK\r\n'
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: Content-Type: text/plain; charset=utf-8
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: Content-Length: 961143
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: Connection: keep-alive
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: Date: Fri, 30 Jun 2023 10:47:22 GMT
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: X-Powered-By: huggingface-moon
2023-06-30 12:47:22,207 [http.client] DEBUG (log.py:118) header: X-Request-Id: Root=1-649eb2ba-73b0294c693514851dd9b4e0
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: Access-Control-Allow-Origin: https://huggingface.co
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: Vary: Origin
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: Access-Control-Expose-Headers: X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,ETag,Link,Accept-Ranges,Content-Range
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: X-Repo-Commit: 8d052a0f05efbaefbc9e8786ba291cfdf93e5bff
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: Accept-Ranges: bytes
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: Content-Security-Policy: default-src none; sandbox
2023-06-30 12:47:22,208 [http.client] DEBUG (log.py:118) header: ETag: "4297ea6a8d2bae1fea8f48b45e257814dcb11f69"
2023-06-30 12:47:22,209 [http.client] DEBUG (log.py:118) header: X-Cache: Miss from cloudfront
2023-06-30 12:47:22,209 [http.client] DEBUG (log.py:118) header: Via: 1.1 376388af58845ad0897ba599cce4d92e.cloudfront.net (CloudFront)
2023-06-30 12:47:22,209 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Pop: HAM50-C1
2023-06-30 12:47:22,209 [http.client] DEBUG (log.py:118) header: X-Amz-Cf-Id: zKFNmuRSrQ1Itme9PKxsfoQlU32MAcqZx8JuUAS4tXuhs27mjLs07w==
2023-06-30 12:47:22,209 [urllib3.connectionpool] DEBUG (connectionpool.py:456) https://huggingface.co:443 "HEAD /openai/clip-vit-large-patch14/resolve/main/vocab.json HTTP/1.1" 200 0
2023-06-30 12:47:44,235 [enfugue] DEBUG (pipeline.py:505) Creating random latents of shape (1, 4, 64, 64) and type torch.float32
2023-06-30 12:47:44,236 [enfugue] DEBUG (pipeline.py:917) Denoising image in 50 steps (unchunked)
2023-06-30 12:47:44,420 [enfugue] DEBUG (process.py:368) stdout: In this conversion only the non-EMA weights are extracted. If you want to instead extract the EMA weights (usually better for inference), please make sure to add the `--extract_ema` flag.

2023-06-30 12:47:44,420 [enfugue] ERROR (process.py:370) stderr: torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: 
torch/_jit_internal.py:839: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x7f96048f1bd0>.
  warnings.warn(
torch/_jit_internal.py:839: UserWarning: Unable to retrieve source for @torch.jit._overload function: <function _DenseLayer.forward at 0x7f96048f3e20>.
  warnings.warn(
Some weights of the model checkpoint at openai/clip-vit-large-patch14 were not used when initializing CLIPTextModel: ['vision_model.encoder.layers.12.mlp.fc2.weight', 'vision_model.encoder.layers.17.layer_norm2.weight', 'vision_model.encoder.layers.14.self_attn.out_proj.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.weight', 'vision_model.encoder.layers.7.mlp.fc2.weight', 'vision_model.encoder.layers.2.layer_norm1.weight', 'vision_model.encoder.layers.0.self_attn.out_proj.weight', 'vision_model.encoder.layers.6.self_attn.q_proj.bias', 'vision_model.encoder.layers.2.layer_norm1.bias', 'vision_model.encoder.layers.9.self_attn.q_proj.weight', 'vision_model.encoder.layers.14.self_attn.v_proj.weight', 'vision_model.encoder.layers.23.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.layer_norm1.bias', 'vision_model.encoder.layers.20.self_attn.out_proj.weight', 'vision_model.encoder.layers.10.layer_norm2.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.bias', 'vision_model.encoder.layers.12.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.mlp.fc2.bias', 'vision_model.encoder.layers.21.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.self_attn.q_proj.bias', 'vision_model.encoder.layers.17.self_attn.out_proj.bias', 'vision_model.encoder.layers.13.self_attn.q_proj.bias', 'vision_model.encoder.layers.16.mlp.fc1.bias', 'vision_model.encoder.layers.9.self_attn.out_proj.weight', 'vision_model.encoder.layers.11.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.self_attn.out_proj.bias', 'vision_model.encoder.layers.20.mlp.fc1.weight', 'vision_model.encoder.layers.15.self_attn.q_proj.bias', 'vision_model.encoder.layers.15.layer_norm2.weight', 'vision_model.encoder.layers.21.mlp.fc1.bias', 'vision_model.encoder.layers.15.self_attn.out_proj.bias', 'vision_model.encoder.layers.18.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.self_attn.k_proj.weight', 'vision_model.encoder.layers.1.self_attn.k_proj.bias', 'vision_model.encoder.layers.3.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.layer_norm2.bias', 'vision_model.encoder.layers.1.layer_norm2.bias', 'vision_model.encoder.layers.16.mlp.fc1.weight', 'vision_model.encoder.layers.1.self_attn.out_proj.weight', 'vision_model.encoder.layers.2.mlp.fc1.bias', 'visual_projection.weight', 'vision_model.encoder.layers.4.self_attn.out_proj.weight', 'vision_model.encoder.layers.16.layer_norm1.bias', 'vision_model.embeddings.position_embedding.weight', 'vision_model.encoder.layers.12.layer_norm1.bias', 'vision_model.encoder.layers.13.self_attn.k_proj.weight', 'vision_model.encoder.layers.5.mlp.fc1.weight', 'vision_model.encoder.layers.13.mlp.fc2.weight', 'vision_model.encoder.layers.14.self_attn.q_proj.bias', 'vision_model.encoder.layers.15.self_attn.v_proj.bias', 'vision_model.encoder.layers.8.layer_norm1.bias', 'vision_model.encoder.layers.7.mlp.fc1.weight', 'vision_model.encoder.layers.15.layer_norm2.bias', 'vision_model.encoder.layers.6.self_attn.q_proj.weight', 'vision_model.encoder.layers.11.mlp.fc1.bias', 'vision_model.encoder.layers.0.self_attn.q_proj.bias', 'vision_model.pre_layrnorm.weight', 'vision_model.encoder.layers.11.layer_norm2.weight', 'vision_model.encoder.layers.0.self_attn.v_proj.weight', 'vision_model.encoder.layers.6.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.layer_norm2.weight', 'vision_model.encoder.layers.4.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.mlp.fc2.bias', 
'vision_model.encoder.layers.8.self_attn.v_proj.bias', 'vision_model.encoder.layers.21.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.self_attn.v_proj.weight', 'vision_model.encoder.layers.13.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.mlp.fc2.weight', 'vision_model.encoder.layers.14.layer_norm2.weight', 'vision_model.encoder.layers.0.layer_norm2.bias', 'vision_model.encoder.layers.23.layer_norm2.weight', 'vision_model.encoder.layers.8.self_attn.v_proj.weight', 'vision_model.encoder.layers.22.self_attn.q_proj.bias', 'vision_model.encoder.layers.18.self_attn.k_proj.bias', 'vision_model.encoder.layers.3.self_attn.q_proj.bias', 'vision_model.encoder.layers.13.layer_norm2.bias', 'vision_model.encoder.layers.11.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.mlp.fc1.bias', 'vision_model.encoder.layers.1.layer_norm1.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.weight', 'vision_model.encoder.layers.9.layer_norm1.weight', 'vision_model.encoder.layers.19.mlp.fc2.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.layer_norm1.bias', 'vision_model.encoder.layers.15.self_attn.v_proj.weight', 'vision_model.encoder.layers.12.layer_norm2.bias', 'vision_model.encoder.layers.5.layer_norm1.bias', 'vision_model.post_layernorm.bias', 'vision_model.encoder.layers.11.mlp.fc2.weight', 'vision_model.encoder.layers.15.self_attn.out_proj.weight', 'vision_model.encoder.layers.14.mlp.fc1.bias', 'vision_model.encoder.layers.3.mlp.fc2.bias', 'vision_model.encoder.layers.1.self_attn.v_proj.weight', 'vision_model.encoder.layers.4.layer_norm1.weight', 'vision_model.encoder.layers.16.self_attn.q_proj.weight', 'vision_model.encoder.layers.10.self_attn.q_proj.bias', 'vision_model.encoder.layers.10.self_attn.k_proj.weight', 'vision_model.encoder.layers.21.mlp.fc2.bias', 'vision_model.encoder.layers.18.layer_norm1.weight', 'vision_model.embeddings.patch_embedding.weight', 'vision_model.encoder.layers.18.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.self_attn.q_proj.weight', 'vision_model.encoder.layers.3.self_attn.k_proj.weight', 'vision_model.encoder.layers.17.mlp.fc1.weight', 'vision_model.encoder.layers.19.mlp.fc1.bias', 'vision_model.encoder.layers.3.self_attn.v_proj.bias', 'vision_model.encoder.layers.0.mlp.fc1.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.bias', 'vision_model.encoder.layers.10.mlp.fc1.bias', 'vision_model.encoder.layers.21.self_attn.out_proj.weight', 'vision_model.encoder.layers.2.self_attn.k_proj.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.weight', 'vision_model.encoder.layers.19.mlp.fc1.weight', 'vision_model.encoder.layers.20.self_attn.out_proj.bias', 'vision_model.encoder.layers.9.mlp.fc2.bias', 'vision_model.encoder.layers.20.mlp.fc1.bias', 'vision_model.encoder.layers.7.self_attn.q_proj.weight', 'vision_model.encoder.layers.10.self_attn.out_proj.weight', 'vision_model.encoder.layers.18.self_attn.q_proj.bias', 'vision_model.encoder.layers.7.mlp.fc1.bias', 'vision_model.encoder.layers.2.self_attn.q_proj.bias', 'vision_model.encoder.layers.3.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.layer_norm1.weight', 'vision_model.encoder.layers.1.layer_norm1.bias', 'vision_model.encoder.layers.9.layer_norm2.weight', 'vision_model.encoder.layers.18.layer_norm1.bias', 'vision_model.encoder.layers.3.self_attn.v_proj.weight', 
'vision_model.encoder.layers.1.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.bias', 'vision_model.encoder.layers.2.mlp.fc2.bias', 'vision_model.encoder.layers.4.mlp.fc1.bias', 'vision_model.encoder.layers.6.mlp.fc1.bias', 'vision_model.encoder.layers.17.mlp.fc1.bias', 'vision_model.encoder.layers.13.self_attn.q_proj.weight', 'vision_model.encoder.layers.23.self_attn.out_proj.bias', 'vision_model.encoder.layers.23.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.mlp.fc2.weight', 'vision_model.encoder.layers.19.layer_norm1.weight', 'vision_model.encoder.layers.11.self_attn.q_proj.weight', 'vision_model.encoder.layers.15.layer_norm1.weight', 'vision_model.encoder.layers.18.layer_norm2.weight', 'vision_model.encoder.layers.10.self_attn.v_proj.weight', 'vision_model.encoder.layers.20.layer_norm2.weight', 'vision_model.encoder.layers.11.self_attn.k_proj.weight', 'vision_model.encoder.layers.12.self_attn.q_proj.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.weight', 'vision_model.embeddings.position_ids', 'vision_model.encoder.layers.4.self_attn.q_proj.bias', 'vision_model.encoder.layers.5.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.self_attn.out_proj.bias', 'vision_model.encoder.layers.4.self_attn.k_proj.weight', 'vision_model.encoder.layers.6.self_attn.k_proj.bias', 'vision_model.encoder.layers.6.layer_norm1.weight', 'vision_model.encoder.layers.7.self_attn.k_proj.weight', 'vision_model.encoder.layers.11.layer_norm1.weight', 'vision_model.encoder.layers.3.self_attn.q_proj.weight', 'text_projection.weight', 'vision_model.encoder.layers.22.self_attn.k_proj.bias', 'vision_model.encoder.layers.23.self_attn.v_proj.weight', 'vision_model.encoder.layers.6.self_attn.out_proj.weight', 'vision_model.encoder.layers.8.layer_norm2.bias', 'vision_model.encoder.layers.9.mlp.fc1.bias', 'vision_model.encoder.layers.5.mlp.fc1.bias', 'vision_model.encoder.layers.19.self_attn.v_proj.weight', 'vision_model.encoder.layers.23.mlp.fc1.bias', 'vision_model.encoder.layers.16.self_attn.out_proj.weight', 'vision_model.encoder.layers.7.mlp.fc2.bias', 'vision_model.encoder.layers.7.self_attn.out_proj.weight', 'vision_model.encoder.layers.13.mlp.fc2.bias', 'vision_model.encoder.layers.16.self_attn.v_proj.weight', 'vision_model.encoder.layers.8.self_attn.out_proj.weight', 'vision_model.encoder.layers.4.self_attn.v_proj.weight', 'vision_model.encoder.layers.15.mlp.fc2.weight', 'vision_model.encoder.layers.15.self_attn.q_proj.weight', 'vision_model.encoder.layers.15.mlp.fc2.bias', 'vision_model.encoder.layers.10.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.self_attn.k_proj.bias', 'vision_model.encoder.layers.5.layer_norm2.bias', 'vision_model.encoder.layers.1.self_attn.k_proj.weight', 'vision_model.encoder.layers.12.self_attn.v_proj.weight', 'vision_model.encoder.layers.2.layer_norm2.bias', 'vision_model.encoder.layers.6.layer_norm2.bias', 'vision_model.encoder.layers.17.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.mlp.fc1.weight', 'vision_model.encoder.layers.11.layer_norm1.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.layer_norm2.bias', 'vision_model.encoder.layers.12.mlp.fc1.weight', 'vision_model.encoder.layers.19.self_attn.k_proj.bias', 'vision_model.encoder.layers.15.self_attn.k_proj.bias', 'vision_model.encoder.layers.17.self_attn.k_proj.bias', 'vision_model.encoder.layers.1.mlp.fc2.weight', 'vision_model.encoder.layers.2.self_attn.v_proj.bias', 
'vision_model.encoder.layers.10.self_attn.v_proj.bias', 'vision_model.encoder.layers.8.mlp.fc1.bias', 'vision_model.encoder.layers.20.self_attn.v_proj.bias', 'vision_model.encoder.layers.13.self_attn.v_proj.bias', 'vision_model.encoder.layers.11.mlp.fc1.weight', 'vision_model.encoder.layers.4.self_attn.k_proj.bias', 'vision_model.encoder.layers.14.self_attn.out_proj.weight', 'vision_model.encoder.layers.11.self_attn.out_proj.bias', 'vision_model.encoder.layers.3.layer_norm2.bias', 'vision_model.encoder.layers.1.mlp.fc1.bias', 'vision_model.encoder.layers.22.mlp.fc1.weight', 'vision_model.encoder.layers.12.mlp.fc2.bias', 'vision_model.encoder.layers.11.layer_norm2.bias', 'vision_model.encoder.layers.2.mlp.fc1.weight', 'logit_scale', 'vision_model.encoder.layers.20.self_attn.q_proj.bias', 'vision_model.encoder.layers.22.mlp.fc2.bias', 'vision_model.encoder.layers.13.layer_norm1.bias', 'vision_model.encoder.layers.7.layer_norm1.weight', 'vision_model.encoder.layers.22.mlp.fc2.weight', 'vision_model.encoder.layers.18.mlp.fc1.bias', 'vision_model.encoder.layers.17.self_attn.v_proj.weight', 'vision_model.encoder.layers.1.self_attn.q_proj.weight', 'vision_model.encoder.layers.19.self_attn.k_proj.weight', 'vision_model.encoder.layers.3.layer_norm1.bias', 'vision_model.encoder.layers.10.self_attn.k_proj.bias', 'vision_model.encoder.layers.16.self_attn.v_proj.bias', 'vision_model.encoder.layers.23.layer_norm1.weight', 'vision_model.encoder.layers.3.self_attn.out_proj.weight', 'vision_model.encoder.layers.3.layer_norm1.weight', 'vision_model.encoder.layers.12.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.self_attn.v_proj.bias', 'vision_model.encoder.layers.17.layer_norm2.bias', 'vision_model.encoder.layers.19.layer_norm1.bias', 'vision_model.encoder.layers.23.self_attn.k_proj.bias', 'vision_model.encoder.layers.18.layer_norm2.bias', 'vision_model.encoder.layers.19.mlp.fc2.bias', 'vision_model.encoder.layers.18.self_attn.out_proj.bias', 'vision_model.encoder.layers.17.layer_norm1.bias', 'vision_model.encoder.layers.7.self_attn.k_proj.bias', 'vision_model.encoder.layers.2.self_attn.k_proj.weight', 'vision_model.encoder.layers.5.layer_norm2.weight', 'vision_model.encoder.layers.2.layer_norm2.weight', 'vision_model.encoder.layers.1.self_attn.v_proj.bias', 'vision_model.encoder.layers.14.mlp.fc1.weight', 'vision_model.encoder.layers.19.self_attn.q_proj.weight', 'vision_model.encoder.layers.9.self_attn.k_proj.weight', 'vision_model.encoder.layers.10.self_attn.q_proj.weight', 'vision_model.encoder.layers.19.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.layer_norm2.weight', 'vision_model.encoder.layers.10.layer_norm1.weight', 'vision_model.encoder.layers.13.layer_norm1.weight', 'vision_model.encoder.layers.10.mlp.fc2.bias', 'vision_model.encoder.layers.9.layer_norm1.bias', 'vision_model.encoder.layers.14.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.self_attn.v_proj.weight', 'vision_model.encoder.layers.11.mlp.fc2.bias', 'vision_model.encoder.layers.4.mlp.fc2.bias', 'vision_model.encoder.layers.3.mlp.fc2.weight', 'vision_model.encoder.layers.22.self_attn.k_proj.weight', 'vision_model.encoder.layers.22.layer_norm2.weight', 'vision_model.embeddings.class_embedding', 'vision_model.encoder.layers.6.mlp.fc2.weight', 'vision_model.encoder.layers.13.mlp.fc1.weight', 'vision_model.encoder.layers.8.self_attn.q_proj.bias', 'vision_model.encoder.layers.12.self_attn.k_proj.weight', 'vision_model.encoder.layers.8.self_attn.out_proj.bias', 
'vision_model.encoder.layers.4.self_attn.v_proj.bias', 'vision_model.encoder.layers.3.mlp.fc1.weight', 'vision_model.encoder.layers.19.layer_norm2.bias', 'vision_model.encoder.layers.0.mlp.fc2.weight', 'vision_model.encoder.layers.0.self_attn.v_proj.bias', 'vision_model.encoder.layers.7.layer_norm2.weight', 'vision_model.encoder.layers.4.self_attn.q_proj.weight', 'vision_model.encoder.layers.22.self_attn.v_proj.bias', 'vision_model.encoder.layers.10.layer_norm2.bias', 'vision_model.encoder.layers.0.self_attn.k_proj.bias', 'vision_model.encoder.layers.10.layer_norm1.bias', 'vision_model.encoder.layers.22.self_attn.q_proj.weight', 'vision_model.encoder.layers.21.mlp.fc1.weight', 'vision_model.encoder.layers.16.self_attn.k_proj.weight', 'vision_model.encoder.layers.15.self_attn.k_proj.weight', 'vision_model.encoder.layers.9.self_attn.k_proj.bias', 'vision_model.encoder.layers.16.layer_norm2.bias', 'vision_model.encoder.layers.21.self_attn.q_proj.weight', 'vision_model.encoder.layers.1.self_attn.q_proj.bias', 'vision_model.encoder.layers.5.self_attn.v_proj.bias', 'vision_model.encoder.layers.11.self_attn.q_proj.bias', 'vision_model.encoder.layers.4.layer_norm2.weight', 'vision_model.encoder.layers.6.self_attn.v_proj.weight', 'vision_model.encoder.layers.12.mlp.fc1.bias', 'vision_model.encoder.layers.14.mlp.fc2.bias', 'vision_model.encoder.layers.12.layer_norm1.weight', 'vision_model.encoder.layers.15.mlp.fc1.bias', 'vision_model.encoder.layers.17.layer_norm1.weight', 'vision_model.post_layernorm.weight', 'vision_model.encoder.layers.17.self_attn.out_proj.weight', 'vision_model.encoder.layers.11.self_attn.v_proj.weight', 'vision_model.encoder.layers.15.mlp.fc1.weight', 'vision_model.encoder.layers.2.self_attn.v_proj.weight', 'vision_model.encoder.layers.1.layer_norm2.weight', 'vision_model.encoder.layers.14.layer_norm1.bias', 'vision_model.encoder.layers.16.self_attn.q_proj.bias', 'vision_model.encoder.layers.13.layer_norm2.weight', 'vision_model.encoder.layers.17.self_attn.q_proj.bias', 'vision_model.encoder.layers.14.self_attn.v_proj.bias', 'vision_model.encoder.layers.9.mlp.fc2.weight', 'vision_model.encoder.layers.18.mlp.fc2.bias', 'vision_model.encoder.layers.20.layer_norm1.weight', 'vision_model.encoder.layers.22.mlp.fc1.bias', 'vision_model.encoder.layers.18.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.mlp.fc2.bias', 'vision_model.encoder.layers.16.self_attn.k_proj.bias', 'vision_model.encoder.layers.13.self_attn.v_proj.weight', 'vision_model.encoder.layers.6.mlp.fc1.weight', 'vision_model.encoder.layers.1.mlp.fc1.weight', 'vision_model.encoder.layers.22.self_attn.out_proj.weight', 'vision_model.encoder.layers.15.layer_norm1.bias', 'vision_model.encoder.layers.17.mlp.fc2.weight', 'vision_model.encoder.layers.13.self_attn.k_proj.bias', 'vision_model.encoder.layers.17.mlp.fc2.bias', 'vision_model.encoder.layers.19.self_attn.out_proj.bias', 'vision_model.encoder.layers.21.mlp.fc2.weight', 'vision_model.encoder.layers.6.self_attn.out_proj.bias', 'vision_model.encoder.layers.17.self_attn.q_proj.weight', 'vision_model.encoder.layers.18.self_attn.k_proj.weight', 'vision_model.encoder.layers.13.self_attn.out_proj.weight', 'vision_model.encoder.layers.5.self_attn.q_proj.bias', 'vision_model.encoder.layers.6.mlp.fc2.bias', 'vision_model.encoder.layers.20.self_attn.q_proj.weight', 'vision_model.encoder.layers.5.layer_norm1.weight', 'vision_model.encoder.layers.6.layer_norm1.bias', 'vision_model.encoder.layers.8.self_attn.k_proj.weight', 'vision_model.encoder.layers.0.mlp.fc2.bias', 
'vision_model.encoder.layers.23.mlp.fc2.weight', 'vision_model.encoder.layers.9.layer_norm2.bias', 'vision_model.encoder.layers.12.layer_norm2.weight', 'vision_model.encoder.layers.10.mlp.fc1.weight', 'vision_model.encoder.layers.20.mlp.fc2.weight', 'vision_model.encoder.layers.0.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.layer_norm2.weight', 'vision_model.encoder.layers.2.self_attn.q_proj.weight', 'vision_model.encoder.layers.13.mlp.fc1.bias', 'vision_model.encoder.layers.6.layer_norm2.weight', 'vision_model.encoder.layers.22.layer_norm1.weight', 'vision_model.encoder.layers.4.layer_norm2.bias', 'vision_model.encoder.layers.22.self_attn.out_proj.bias', 'vision_model.encoder.layers.5.self_attn.v_proj.weight', 'vision_model.encoder.layers.2.mlp.fc2.weight', 'vision_model.encoder.layers.9.self_attn.out_proj.bias', 'vision_model.encoder.layers.7.self_attn.v_proj.bias', 'vision_model.encoder.layers.16.layer_norm1.weight', 'vision_model.encoder.layers.16.mlp.fc2.weight', 'vision_model.encoder.layers.23.layer_norm1.bias', 'vision_model.encoder.layers.20.mlp.fc2.bias', 'vision_model.encoder.layers.4.mlp.fc2.weight', 'vision_model.encoder.layers.16.layer_norm2.weight', 'vision_model.encoder.layers.18.self_attn.out_proj.weight', 'vision_model.encoder.layers.23.mlp.fc1.weight', 'vision_model.encoder.layers.5.self_attn.out_proj.bias', 'vision_model.encoder.layers.14.self_attn.q_proj.weight', 'vision_model.encoder.layers.0.layer_norm1.weight', 'vision_model.pre_layrnorm.bias', 'vision_model.encoder.layers.18.mlp.fc1.weight', 'vision_model.encoder.layers.22.self_attn.v_proj.weight', 'vision_model.encoder.layers.6.self_attn.v_proj.bias', 'vision_model.encoder.layers.10.mlp.fc2.weight', 'vision_model.encoder.layers.8.mlp.fc1.weight', 'vision_model.encoder.layers.20.layer_norm2.bias', 'vision_model.encoder.layers.14.layer_norm1.weight', 'vision_model.encoder.layers.20.self_attn.v_proj.weight', 'vision_model.encoder.layers.0.self_attn.k_proj.weight', 'vision_model.encoder.layers.22.layer_norm1.bias', 'vision_model.encoder.layers.0.mlp.fc1.bias', 'vision_model.encoder.layers.9.self_attn.v_proj.bias', 'vision_model.encoder.layers.21.self_attn.v_proj.bias', 'vision_model.encoder.layers.14.layer_norm2.bias', 'vision_model.encoder.layers.23.self_attn.out_proj.weight', 'vision_model.encoder.layers.8.layer_norm2.weight', 'vision_model.encoder.layers.5.mlp.fc2.weight', 'vision_model.encoder.layers.7.self_attn.q_proj.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.weight', 'vision_model.encoder.layers.23.layer_norm2.bias', 'vision_model.encoder.layers.21.self_attn.k_proj.bias', 'vision_model.encoder.layers.7.layer_norm1.bias', 'vision_model.encoder.layers.12.self_attn.k_proj.bias', 'vision_model.encoder.layers.22.layer_norm2.bias', 'vision_model.encoder.layers.4.mlp.fc1.weight', 'vision_model.encoder.layers.16.self_attn.out_proj.bias', 'vision_model.encoder.layers.4.layer_norm1.bias', 'vision_model.encoder.layers.20.layer_norm1.bias', 'vision_model.encoder.layers.21.layer_norm1.weight', 'vision_model.encoder.layers.2.self_attn.out_proj.weight', 'vision_model.encoder.layers.8.mlp.fc2.bias', 'vision_model.encoder.layers.20.self_attn.k_proj.bias', 'vision_model.encoder.layers.8.self_attn.k_proj.bias', 'vision_model.encoder.layers.18.mlp.fc2.weight', 'vision_model.encoder.layers.1.mlp.fc2.bias', 'vision_model.encoder.layers.19.layer_norm2.weight']
- This IS expected if you are initializing CLIPTextModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing CLIPTextModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).

2023-06-30 12:47:59,439 [enfugue] INFO (process.py:292) Reached maximum idle time after 15.0 seconds, exiting engine process
^C
painebenjamin commented 1 year ago

Hi Lennart!

It looks like Enfugue wasn't able to communicate with your GPU, resulting in it running entirely on the CPU. This is so slow as to almost be non-functional, so that explains the timeout.

2023-06-30 12:46:44,944 [enfugue] DEBUG (manager.py:711) Inferencing on CPU, using BFloat
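
For reference, a quick generic way to check whether the bundled PyTorch can see a GPU at all (plain PyTorch, nothing Enfugue-specific):

import torch

print(torch.__version__)                  # e.g. 1.13.1+cu117 for a CUDA build
print(torch.cuda.is_available())          # False means inference falls back to the CPU
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # the device PyTorch would actually use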

Now of course I need to figure out why. What kind of GPU do you have?

lennartbrandin commented 1 year ago

AMD RX 5700

For comparison, a1111-sd-webui requires specific settings to run on this card: the CLI args --precision full --no-half, and Python < 3.11.
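
For context, forcing full precision in a plain diffusers pipeline looks roughly like the sketch below; this is illustrative only, not how Enfugue configures its own pipeline, but it shows what --precision full --no-half amounts to:

import torch
from diffusers import StableDiffusionPipeline

# Keep the whole pipeline in float32 instead of float16 ("no half")
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float32,
)
pipe = pipe.to("cuda")  # ROCm builds of PyTorch also expose the "cuda" device name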

painebenjamin commented 1 year ago

Excellent. Thank you for the information - it's definitely AMD support that isn't working here.

Unfortunately for everyone involved except Nvidia, CUDA is the de facto standard for deep learning on GPUs. Non-Nvidia GPUs are limited to other APIs, most notably ROCm. The precompiled bundles I provide have CUDA binaries in them, but those do nothing for you if you don't have a GPU that can use them.
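
As a rough way to tell which backend a given PyTorch install was built against (again plain PyTorch, not Enfugue code):

import torch

print(torch.version.cuda)                   # version string on CUDA builds, otherwise None
print(getattr(torch.version, "hip", None))  # version string on ROCm builds, otherwise None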

With that being said, you're not the first AMD user here, and I definitely don't want to leave you guys high and dry. I'm making an issue out of this conversation for ROCm support. When it's working, AMD users will likely have a separate download from Nvidia users, to avoid having to download gigabytes of files that won't actually do anything.

Now to find an AMD GPU to test on...

painebenjamin commented 1 year ago

This may be fixed. I say "may" because I only have access to Radeon Pro V520s, which are technically unsupported, and I haven't been able to find a combination of drivers and ROCm versions that works for that device. However, checks are now in place to detect when the device is using ROCm and to disable half-precision appropriately. You, or anyone else using an AMD device, will need to configure ROCm and your PATH yourselves for Enfugue to pick up on it (though you are probably already used to jumping through hoops to get SD working).
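
For illustration only (this is not Enfugue's actual code, just a common way such a check is written with PyTorch):

import torch

def device_uses_rocm() -> bool:
    # ROCm builds of PyTorch report a HIP version; CUDA builds report None here
    return getattr(torch.version, "hip", None) is not None

def preferred_dtype() -> torch.dtype:
    if torch.cuda.is_available() and not device_uses_rocm():
        return torch.float16  # half precision is generally fine on recent NVIDIA cards
    return torch.float32      # stay in full precision on ROCm (or CPU)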

csdougliss commented 7 months ago

@painebenjamin Any updates on AMD support? I have an AMD Radeon RX 6800S and I can't even generate the cat demo image because it takes so long.

2024-03-03 15:45:08,777 [enfugue] DEBUG (pipeline.py:2238) Denoising image in 20 steps on cpu (unchunked)

Running v0.3.3 on Ubuntu 24.04 (3/3/2024). However, I am new to this tool, so I might be doing it wrong. I assume I need to install ROCm first?