comfyanonymous / ComfyUI

The most powerful and modular diffusion model GUI, api and backend with a graph/nodes interface.
https://www.comfy.org/
GNU General Public License v3.0

Error occurred at KSampler while running ComfyUI with --force-fp16 on Mac M1 #2217

Open sushruthad opened 9 months ago

sushruthad commented 9 months ago

Hi, for me ComfyUI generates images fine when run with python main.py . But when I run it with python main.py --force-fp16 , it errors at the KSampler node with "upsample_nearest2d_channels_last" not implemented for 'Half' .

I had a similar issue with a1111, which I solved by adding --no-half to the command-line args. But with ComfyUI I can't find the equivalent arguments line or a user.sh file. I don't know coding and am new to ComfyUI. I have installed Anaconda and the PyTorch nightly using pip3 and followed every damn step. Everything is up to date as well.
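(For context: ComfyUI has no webui-user.sh equivalent; its flags are passed straight on the command line, playing the role a1111's COMMANDLINE_ARGS in webui-user.sh plays. A minimal sketch, assuming you start from the ComfyUI folder as in the transcripts below:

cd /Users/sushruthad/Stable-Diffusion/ComfyUI
python main.py --force-fp16

Swap the flag there as needed, e.g. --force-fp32 as suggested further down; python main.py --help lists the available flags.)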

Help me! Sushrutha D

Here's what happens in the terminal:

When I run python main.py

Last login: Thu Dec 7 21:17:27 on ttys000
(base) sushruthad@Sushruthas-MacBook-Air ~ % cd /Users/sushruthad/Stable-Diffusion/ComfyUI
(base) sushruthad@Sushruthas-MacBook-Air ComfyUI % python main.py
** ComfyUI start up time: 2023-12-07 21:20:43.482025

Prestartup times for custom nodes:
0.0 seconds: /Users/sushruthad/Stable-Diffusion/ComfyUI/custom_nodes/ComfyUI-Manager

Total VRAM 16384 MB, total RAM 16384 MB
/Users/sushruthad/anaconda3/lib/python3.11/site-packages/transformers/utils/generic.py:260: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Adding extra search path checkpoints /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path configs /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path vae /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/VAE
Adding extra search path loras /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/Lora
Adding extra search path loras /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/LyCORIS
Adding extra search path upscale_models /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/ESRGAN
Adding extra search path upscale_models /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/RealESRGAN
Adding extra search path upscale_models /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/SwinIR
Adding extra search path embeddings /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/embeddings
Adding extra search path hypernetworks /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/hypernetworks
Adding extra search path controlnet /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/ControlNet

Loading: ComfyUI-Manager (V1.6.4)

ComfyUI Revision: 1789 [fbdb14d4] | Released on '2023-12-06'

Import times for custom nodes:
0.0 seconds: /Users/sushruthad/Stable-Diffusion/ComfyUI/custom_nodes/ComfyUI_NestedNodeBuilder
0.0 seconds: /Users/sushruthad/Stable-Diffusion/ComfyUI/custom_nodes/efficiency-nodes-comfyui
0.1 seconds: /Users/sushruthad/Stable-Diffusion/ComfyUI/custom_nodes/ComfyUI-Manager

Starting server

To see the GUI go to: http://127.0.0.1:8188
got prompt
model_type EPS
adm 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Requested to load SDXLClipModel
Loading 1 new model
Requested to load SDXL
Loading 1 new model
  0%| | 0/20 [00:00<?, ?it/s]/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/functional.py:4001: UserWarning: MPS: 'nearest' mode upsampling is supported natively starting from macOS 13.0. Falling back on CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/UpSample.mm:255.)
  return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
100%|███████████████████████████████████████████| 20/20 [01:22<00:00, 4.13s/it]
Requested to load AutoencoderKL
Loading 1 new model
/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/functional.py:4001: UserWarning: MPS: passing scale factor to upsample ops is supported natively starting from macOS 13.0. Falling back on CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/UpSample.mm:246.)
  return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
Prompt executed in 164.91 seconds


When I run python main.py --force-fp16

Last login: Thu Dec 7 21:54:48 on ttys000
(base) sushruthad@Sushruthas-MacBook-Air ~ % cd /Users/sushruthad/Stable-Diffusion/ComfyUI
(base) sushruthad@Sushruthas-MacBook-Air ComfyUI % python main.py --force-fp16
** ComfyUI start up time: 2023-12-07 21:55:35.051458

Prestartup times for custom nodes:
0.0 seconds: /Users/sushruthad/Stable-Diffusion/ComfyUI/custom_nodes/ComfyUI-Manager

Total VRAM 16384 MB, total RAM 16384 MB
Forcing FP16.
/Users/sushruthad/anaconda3/lib/python3.11/site-packages/transformers/utils/generic.py:260: UserWarning: torch.utils._pytree._register_pytree_node is deprecated. Please use torch.utils._pytree.register_pytree_node instead.
  torch.utils._pytree._register_pytree_node(
Set vram state to: SHARED
Device: mps
VAE dtype: torch.float32
Using sub quadratic optimization for cross attention, if you have memory or speed issues try using: --use-split-cross-attention
Adding extra search path checkpoints /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path configs /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/Stable-diffusion
Adding extra search path vae /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/VAE
Adding extra search path loras /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/Lora
Adding extra search path loras /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/LyCORIS
Adding extra search path upscale_models /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/ESRGAN
Adding extra search path upscale_models /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/RealESRGAN
Adding extra search path upscale_models /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/SwinIR
Adding extra search path embeddings /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/embeddings
Adding extra search path hypernetworks /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/hypernetworks
Adding extra search path controlnet /Users/sushruthad/Stable-Diffusion/stable-diffusion-webui/models/ControlNet

Loading: ComfyUI-Manager (V1.6.4)

ComfyUI Revision: 1789 [fbdb14d4] | Released on '2023-12-06'

Import times for custom nodes:
0.0 seconds: /Users/sushruthad/Stable-Diffusion/ComfyUI/custom_nodes/ComfyUI_NestedNodeBuilder
0.0 seconds: /Users/sushruthad/Stable-Diffusion/ComfyUI/custom_nodes/efficiency-nodes-comfyui
0.1 seconds: /Users/sushruthad/Stable-Diffusion/ComfyUI/custom_nodes/ComfyUI-Manager

Starting server

To see the GUI go to: http://127.0.0.1:8188
FETCH DATA from: /Users/sushruthad/Stable-Diffusion/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json
got prompt
model_type EPS
adm 2816
Using split attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using split attention in VAE
missing {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale'}
left over keys: dict_keys(['cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids'])
Requested to load SDXLClipModel
Loading 1 new model
Requested to load SDXL
Loading 1 new model
  0%| | 0/20 [00:00<?, ?it/s]/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/functional.py:4001: UserWarning: MPS: 'nearest' mode upsampling is supported natively starting from macOS 13.0. Falling back on CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/mps/operations/UpSample.mm:255.)
  return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
  0%| | 0/20 [00:01<?, ?it/s]
ERROR:root:!!! Exception during processing !!!
ERROR:root:Traceback (most recent call last):
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/execution.py", line 153, in recursive_execute
    output_data, output_ui = get_output_data(obj, input_data_all)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/execution.py", line 83, in get_output_data
    return_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/execution.py", line 76, in map_node_over_list
    results.append(getattr(obj, func)(**slice_dict(input_data_all, i)))
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/nodes.py", line 1299, in sample
    return common_ksampler(model, seed, steps, cfg, sampler_name, scheduler, positive, negative, latent_image, denoise=denoise)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/nodes.py", line 1269, in common_ksampler
    samples = comfy.sample.sample(model, noise, steps, cfg, sampler_name, scheduler, positive, negative, latent_image,
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/sample.py", line 100, in sample
    samples = sampler.sample(noise, positive_copy, negative_copy, cfg=cfg, latent_image=latent_image, start_step=start_step, last_step=last_step, force_full_denoise=force_full_denoise, denoise_mask=noise_mask, sigmas=sigmas, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/samplers.py", line 711, in sample
    return sample(self.model, noise, positive, negative, cfg, self.device, sampler, sigmas, self.model_options, latent_image=latent_image, denoise_mask=denoise_mask, callback=callback, disable_pbar=disable_pbar, seed=seed)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/samplers.py", line 617, in sample
    samples = sampler.sample(model_wrap, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/samplers.py", line 556, in sample
    samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
  File "/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/k_diffusion/sampling.py", line 137, in sample_euler
    denoised = model(x, sigma_hat * s_in, **extra_args)
  File "/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/samplers.py", line 277, in forward
    out = self.inner_model(x, sigma, cond=cond, uncond=uncond, cond_scale=cond_scale, model_options=model_options, seed=seed)
  File "/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/samplers.py", line 267, in forward
    return self.apply_model(*args, **kwargs)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/samplers.py", line 264, in apply_model
    out = sampling_function(self.inner_model, x, timestep, uncond, cond, cond_scale, model_options=model_options, seed=seed)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/samplers.py", line 252, in sampling_function
    cond, uncond = calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/samplers.py", line 230, in calc_cond_uncond_batch
    output = model.apply_model(input_x, timestep_, c).chunk(batch_chunks)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/model_base.py", line 83, in apply_model
    model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
  File "/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 888, in forward
    h = forward_timestep_embed(module, h, emb, context, transformer_options, output_shape, time_context=time_context, num_video_frames=num_video_frames, image_only_indicator=image_only_indicator)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 50, in forward_timestep_embed
    x = layer(x, output_shape=output_shape)
  File "/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
  File "/Users/sushruthad/Stable-Diffusion/ComfyUI/comfy/ldm/modules/diffusionmodules/openaimodel.py", line 95, in forward
    x = F.interpolate(x, size=shape, mode="nearest")
  File "/Users/sushruthad/anaconda3/lib/python3.11/site-packages/torch/nn/functional.py", line 4001, in interpolate
    return torch._C._nn.upsample_nearest2d(input, output_size, scale_factors)
RuntimeError: "upsample_nearest2d_channels_last" not implemented for 'Half'

Prompt executed in 21.06 seconds

NeedsMoar commented 9 months ago

--force-fp32

NeedsMoar commented 9 months ago

...from those speeds I can't say whether it's running anything on the GPU or not; the messages suggest it isn't, if 20 iterations of SDXL take 1:22, depending on the resolution. I'd try to force it to skip FP16 entirely so it doesn't need the CPU fallback. --force-fp32 would be the equivalent of --no-half on automatic's side.
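(To make that concrete, the launch is the same as in the transcripts above with only the flag swapped:

(base) sushruthad@Sushruthas-MacBook-Air ComfyUI % python main.py --force-fp32 )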

sushruthad commented 9 months ago

@NeedsMoar hey, thanks. --force-fp32 works :) It is still falling back on CPU though. How do I disable that?

I am just getting started with ComfyUI. Still on the basic setup with the default workflow... the 512x512 scenery-in-the-bottle prompt :D
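(On the CPU fallback: the UserWarning in the logs says nearest-mode upsampling only runs natively on MPS from macOS 13.0 onward, so the first thing to check is the macOS version and that PyTorch actually sees the MPS device. A quick diagnostic sketch, not part of ComfyUI:

```python
# Check why PyTorch might be falling back to CPU for some MPS ops:
# the warning in the log points at macOS versions below 13.0.
import platform
import torch

print("macOS version:", platform.mac_ver()[0])            # fallback is expected below 13.0
print("MPS built:    ", torch.backends.mps.is_built())     # True if this torch build includes MPS
print("MPS available:", torch.backends.mps.is_available()) # True if the device can actually be used
```

If the macOS version is below 13.0, the fallback for this op can't simply be disabled; it is how PyTorch keeps the workflow running at all.)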

NeedsMoar commented 9 months ago

Ah I just realized that it's complaining about the... dear christ did Apple lock a matmul / rescaling feature to an OS version? Jobs is gonna have to find another spot to fit a 2nd pineapple for his daily eternal torment tomorrow (yeah I know he didn't do this but he created the company ethos).

I'd say do a pip3 list and post it just in case; I'll see if I can spot anything, but since I don't use a Mac I'll be limited in how much use I can be. I'm not sure what it's trying to upsample there. Is your SDXL checkpoint an FP16 version that it might be trying to scale to FP32? You could try the FP32 checkpoint. Otherwise wait for somebody with a Mac to show up, or maybe look through the closed bugs. I can't imagine that nobody would have hit this before.
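(One way to answer the FP16-vs-FP32 checkpoint question is to inspect the tensor dtypes in the .safetensors file. A minimal sketch using the safetensors package that is already in the pip list below; the path is only an example, point it at your actual SDXL file:

```python
# Count the tensor dtypes stored in a checkpoint to see if it is FP16 or FP32.
from collections import Counter
from safetensors import safe_open

path = "models/checkpoints/sd_xl_base_1.0.safetensors"  # example path, adjust to your setup

dtypes = Counter()
with safe_open(path, framework="pt", device="cpu") as f:
    for name in f.keys():
        dtypes[str(f.get_tensor(name).dtype)] += 1

print(dtypes)  # mostly torch.float16 -> FP16 checkpoint, mostly torch.float32 -> FP32
```
)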

sushruthad commented 9 months ago

@NeedsMoar I just downloaded the SDXL base and refiner versions from Hugging Face and placed them in the checkpoints directory. Should I have done anything else? Also, I couldn't find anyone addressing the no-half issue with ComfyUI on Mac; every question about it is still open. How can I check which version the checkpoint is, though, or change it?

(base) sushruthad@Sushruthas-MacBook-Air ~ % pip3 list
Package Version


accelerate 0.25.0 aiobotocore 2.5.0 aiofiles 22.1.0 aiohttp 3.8.5 aioitertools 0.7.1 aiosignal 1.2.0 aiosqlite 0.18.0 alabaster 0.7.12 anaconda-anon-usage 0.4.2 anaconda-catalogs 0.2.0 anaconda-client 1.12.1 anaconda-cloud-auth 0.1.3 anaconda-navigator 2.5.0 anaconda-project 0.11.1 anyio 3.5.0 appdirs 1.4.4 applaunchservices 0.3.0 appnope 0.1.2 appscript 1.1.2 argon2-cffi 21.3.0 argon2-cffi-bindings 21.2.0 arrow 1.2.3 astroid 2.14.2 astropy 5.1 asttokens 2.0.5 async-timeout 4.0.2 atomicwrites 1.4.0 attrs 22.1.0 Automat 20.2.0 autopep8 1.6.0 Babel 2.11.0 backcall 0.2.0 backports.functools-lru-cache 1.6.4 backports.tempfile 1.0 backports.weakref 1.0.post1 bcrypt 3.2.0 beautifulsoup4 4.12.2 binaryornot 0.4.4 black 0.0 bleach 4.1.0 bokeh 3.2.1 boltons 23.0.0 botocore 1.29.76 Bottleneck 1.3.5 brotlipy 0.7.0 certifi 2023.11.17 cffi 1.15.1 chardet 4.0.0 charset-normalizer 2.0.4 click 8.0.4 clip-interrogator 0.6.0 cloudpickle 2.2.1 clyent 1.2.2 colorama 0.4.6 colorcet 3.0.1 comm 0.1.2 conda 23.7.4 conda-build 3.26.1 conda-content-trust 0.2.0 conda_index 0.3.0 conda-libmamba-solver 23.7.0 conda-pack 0.6.0 conda-package-handling 2.2.0 conda_package_streaming 0.9.0 conda-repo-cli 1.0.75 conda-token 0.4.0 conda-verify 3.4.2 constantly 15.1.0 contourpy 1.0.5 cookiecutter 1.7.3 cryptography 41.0.3 cssselect 1.1.0 cycler 0.11.0 cytoolz 0.12.0 dask 2023.6.0 datasets 2.12.0 datashader 0.15.2 datashape 0.5.4 debugpy 1.6.7 decorator 5.1.1 defusedxml 0.7.1 diff-match-patch 20200713 dill 0.3.6 distributed 2023.6.0 docstring-to-markdown 0.11 docutils 0.18.1 einops 0.7.0 entrypoints 0.4 et-xmlfile 1.1.0 executing 0.8.3 fastjsonschema 2.16.2 filelock 3.9.0 flake8 6.0.0 Flask 2.2.2 fonttools 4.25.0 frozenlist 1.3.3 fsspec 2023.4.0 ftfy 6.1.3 future 0.18.3 gensim 4.3.0 gitdb 4.0.11 GitPython 3.1.40 glob2 0.7 gmpy2 2.1.2 greenlet 2.0.1 h5py 3.9.0 HeapDict 1.0.1 holoviews 1.17.1 huggingface-hub 0.15.1 hvplot 0.8.4 hyperlink 21.0.0 idna 3.4 imagecodecs 2023.1.23 imageio 2.31.1 imagesize 1.4.1 imbalanced-learn 0.10.1 importlib-metadata 6.0.0 incremental 21.3.0 inflection 0.5.1 iniconfig 1.1.1 intake 0.6.8 intervaltree 3.1.0 ipykernel 6.25.0 ipython 8.15.0 ipython-genutils 0.2.0 ipywidgets 8.0.4 isort 5.9.3 itemadapter 0.3.0 itemloaders 1.0.4 itsdangerous 2.0.1 jaraco.classes 3.2.1 jedi 0.18.1 jellyfish 1.0.1 Jinja2 3.1.2 jinja2-time 0.2.0 jmespath 0.10.0 joblib 1.2.0 json5 0.9.6 jsonpatch 1.32 jsonpointer 2.1 jsonschema 4.17.3 jupyter 1.0.0 jupyter_client 7.4.9 jupyter-console 6.6.3 jupyter_core 5.3.0 jupyter-events 0.6.3 jupyter-server 1.23.4 jupyter_server_fileid 0.9.0 jupyter_server_ydoc 0.8.0 jupyter-ydoc 0.2.4 jupyterlab 3.6.3 jupyterlab-pygments 0.1.2 jupyterlab_server 2.22.0 jupyterlab-widgets 3.0.5 kaleido 0.2.1 keyring 23.13.1 kiwisolver 1.4.4 lazy_loader 0.2 lazy-object-proxy 1.6.0 libarchive-c 2.9 libmambapy 1.5.1 linkify-it-py 2.0.0 llvmlite 0.40.0 lmdb 1.4.1 locket 1.0.0 lxml 4.9.3 lz4 4.3.2 Markdown 3.4.1 markdown-it-py 2.2.0 MarkupSafe 2.1.1 matplotlib 3.7.2 matplotlib-inline 0.1.6 matrix-client 0.4.0 mccabe 0.7.0 mdit-py-plugins 0.3.0 mdurl 0.1.0 mistune 0.8.4 more-itertools 8.12.0 mpmath 1.3.0 msgpack 1.0.3 multidict 6.0.2 multipledispatch 0.6.0 multiprocess 0.70.14 munkres 1.1.4 mypy-extensions 1.0.0 navigator-updater 0.4.0 nbclassic 0.5.5 nbclient 0.5.13 nbconvert 6.5.4 nbformat 5.9.2 nest-asyncio 1.5.6 networkx 3.1 nltk 3.8.1 notebook 6.5.4 notebook_shim 0.2.2 numba 0.57.1 numexpr 2.8.4 numpy 1.24.3 numpydoc 1.5.0 open-clip-torch 2.23.0 openpyxl 3.0.10 packaging 23.1 pandas 2.1.1 pandocfilters 1.5.0 
panel 1.2.3 param 1.13.0 parsel 1.6.0 parso 0.8.3 partd 1.4.0 pathlib 1.0.1 pathspec 0.10.3 patsy 0.5.3 pep8 1.7.1 pexpect 4.8.0 pickleshare 0.7.5 Pillow 9.4.0 pip 23.2.1 pkce 1.0.3 pkginfo 1.9.6 platformdirs 3.10.0 plotly 5.9.0 pluggy 1.0.0 ply 3.11 poyo 0.5.0 prometheus-client 0.14.1 prompt-toolkit 3.0.36 Protego 0.1.16 protobuf 4.25.1 psutil 5.9.0 ptyprocess 0.7.0 pure-eval 0.2.2 py-cpuinfo 8.0.0 pyarrow 11.0.0 pyasn1 0.4.8 pyasn1-modules 0.2.8 pycodestyle 2.10.0 pycosat 0.6.4 pycparser 2.21 pyct 0.5.0 pycurl 7.45.2 pydantic 1.10.8 PyDispatcher 2.0.5 pydocstyle 6.3.0 pyerfa 2.0.0 pyflakes 3.0.1 Pygments 2.15.1 PyJWT 2.4.0 pylint 2.16.2 pylint-venv 2.3.0 pyls-spyder 0.4.0 pyobjc-core 9.0 pyobjc-framework-Cocoa 9.0 pyobjc-framework-CoreServices 9.0 pyobjc-framework-FSEvents 9.0 pyodbc 4.0.34 pyOpenSSL 23.2.0 pyparsing 3.0.9 PyQt5-sip 12.11.0 pyrsistent 0.18.0 PySocks 1.7.1 pytest 7.4.0 python-dateutil 2.8.2 python-dotenv 0.21.0 python-json-logger 2.0.7 python-lsp-black 1.2.1 python-lsp-jsonrpc 1.0.0 python-lsp-server 1.7.2 python-slugify 5.0.2 python-snappy 0.6.1 pytoolconfig 1.2.5 pytz 2023.3.post1 pyviz-comms 2.3.0 PyWavelets 1.4.1 PyYAML 6.0 pyzmq 23.2.0 QDarkStyle 3.0.2 qstylizer 0.2.2 QtAwesome 1.2.2 qtconsole 5.4.2 QtPy 2.2.0 queuelib 1.5.0 regex 2022.7.9 requests 2.31.0 requests-file 1.5.1 requests-toolbelt 1.0.0 responses 0.13.3 rfc3339-validator 0.1.4 rfc3986-validator 0.1.1 rope 1.7.0 Rtree 1.0.1 ruamel.yaml 0.17.21 ruamel-yaml-conda 0.17.21 s3fs 2023.4.0 safetensors 0.3.2 scikit-image 0.20.0 scikit-learn 1.3.0 scipy 1.11.1 Scrapy 2.8.0 seaborn 0.12.2 Send2Trash 1.8.0 sentencepiece 0.1.99 service-identity 18.1.0 setuptools 68.0.0 simpleeval 0.9.13 sip 6.6.2 six 1.16.0 smart-open 5.2.1 smmap 5.0.1 sniffio 1.2.0 snowballstemmer 2.2.0 sortedcontainers 2.4.0 soupsieve 2.4 Sphinx 5.0.2 sphinxcontrib-applehelp 1.0.2 sphinxcontrib-devhelp 1.0.2 sphinxcontrib-htmlhelp 2.0.0 sphinxcontrib-jsmath 1.0.1 sphinxcontrib-qthelp 1.0.3 sphinxcontrib-serializinghtml 1.1.5 spyder 5.4.3 spyder-kernels 2.4.4 SQLAlchemy 1.4.39 stack-data 0.2.0 statsmodels 0.14.0 sympy 1.11.1 tables 3.8.0 tabulate 0.8.10 tblib 1.7.0 tenacity 8.2.2 terminado 0.17.1 text-unidecode 1.3 textdistance 4.2.1 threadpoolctl 2.2.0 three-merge 0.1.1 tifffile 2023.4.12 timm 0.9.12 tinycss2 1.2.1 tldextract 3.2.0 tokenizers 0.13.2 toml 0.10.2 tomlkit 0.11.1 toolz 0.12.0 torch 2.2.0.dev20231206 torchaudio 2.2.0.dev20231206 torchsde 0.2.6 torchvision 0.17.0.dev20231206 tornado 6.3.2 tqdm 4.65.0 traitlets 5.7.1 trampoline 0.1.2 transformers 4.32.1 Twisted 22.10.0 typing_extensions 4.8.0 tzdata 2023.3 uc-micro-py 1.0.1 ujson 5.4.0 Unidecode 1.2.0 urllib3 1.26.16 w3lib 1.21.0 watchdog 2.1.6 wcwidth 0.2.12 webencodings 0.5.1 websocket-client 0.58.0 Werkzeug 2.2.3 whatthepatch 1.0.2 wheel 0.38.4 widgetsnbextension 4.0.5 wrapt 1.14.1 wurlitzer 3.0.2 xarray 2023.6.0 xlwings 0.29.1 xxhash 2.0.2 xyzservices 2022.9.0 y-py 0.5.9 yapf 0.31.0 yarl 1.8.1 ypy-websocket 0.8.2 zict 2.2.0 zipp 3.11.0 zope.interface 5.4.0 zstandard 0.19.0

BuildBackBuehler commented 9 months ago

Also getting an error related to this, though I have been messing around with Miniforge/LLMs. My ComfyUI is in a pipenv instance, so... hopefully that's not it.

2023-12-09 18:39:28.639724: F tensorflow/c/experimental/stream_executor/stream_executor.cc:743] Non-OK-status: stream_executor::MultiPlatformManager::RegisterPlatform( std::move(cplatform)) status: INTERNAL: platform is already registered with name: "METAL"

Edit: dunno if this helps any, but mine was due to my .venv experiencing constant "leaks" (i.e. calls/links to Python installs outside of my pipenv venv). In particular I found issues with Homebrew.

IMO it really is best to not use the Python that Homebrew provides you with.
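(A quick way to spot that kind of environment "leak" is to print which interpreter and which torch install are actually being used. A small sketch, not specific to ComfyUI:

```python
# Confirm which Python and which torch the process is really importing from.
import sys
import shutil
import torch

print("interpreter:    ", sys.executable)         # should live inside your venv/conda env
print("sys.prefix:     ", sys.prefix)
print("python3 on PATH:", shutil.which("python3")) # a Homebrew path here hints at a leak
print("torch location: ", torch.__file__)          # catches torch pulled from another install
```
)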

sushruthad commented 9 months ago

@NeedsMoar I just found this: is https://github.com/pytorch/pytorch/issues/77764 talking about the same issue? Is waiting the only solution? @BuildBackBuehler I didn't understand, bro. I didn't even create a venv; everything got installed by itself somewhere in the anaconda3 library.
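(For comparison with that PyTorch issue, the operation the traceback above dies on can be reduced to a few lines. A hedged sketch: whether it raises depends on the PyTorch build; on the nightly used in this thread, the MPS upsample falls back to CPU on macOS below 13 and the CPU kernel has no Half implementation for channels_last tensors.

```python
# Minimal reproduction attempt of the failing op: nearest-neighbour upsampling
# of an FP16, channels_last tensor on CPU (where MPS falls back to).
import torch
import torch.nn.functional as F

x = torch.randn(1, 4, 16, 16, dtype=torch.float16).to(memory_format=torch.channels_last)

try:
    F.interpolate(x, scale_factor=2.0, mode="nearest")
    print("fp16 nearest upsample worked on this build")
except RuntimeError as e:
    print(e)  # e.g. "upsample_nearest2d_channels_last" not implemented for 'Half'
```
)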