Open 5shekel opened 1 year ago
@5shekel hi, it seems that the new version of diffusers has made some changes.
You can use the new code; we refactored it to handle this.
Which version are you referring to? I installed from source (version "0.22.0.dev0") and dir(diffusers.utils) only lists the following. Could you please advise?
['BACKENDS_MAPPING', 'BaseOutput', 'CONFIG_NAME', 'DEPRECATED_REVISION_ARGS', 'DIFFUSERS_CACHE', 'DIFFUSERS_DYNAMIC_MODULE_NAME', 'DummyObject', 'ENV_VARS_TRUE_AND_AUTO_VALUES', 'ENV_VARS_TRUE_VALUES', 'FLAX_WEIGHTS_NAME', 'HF_HUB_OFFLINE', 'HF_MODULES_CACHE', 'HUGGINGFACE_CO_RESOLVE_ENDPOINT', 'ONNX_EXTERNAL_WEIGHTS_NAME', 'ONNX_WEIGHTS_NAME', 'OptionalDependencyNotAvailable', 'PIL_INTERPOLATION', 'PushToHubMixin', 'SAFETENSORS_WEIGHTS_NAME', 'USE_JAX', 'USE_TF', 'USE_TORCH', 'WEIGHTS_NAME', '_LazyModule', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', '_add_variant', '_get_model_file', 'check_min_version', 'constants', 'convert_state_dict_to_diffusers', 'convert_state_dict_to_peft', 'deprecate', 'deprecation_utils', 'doc_utils', 'dummy_flax_and_transformers_objects', 'dummy_flax_objects', 'dummy_note_seq_objects', 'dummy_onnx_objects', 'dummy_torch_and_librosa_objects', 'dummy_torch_and_torchsde_objects', 'dummy_torch_and_transformers_and_k_diffusion_objects', 'dummy_torch_and_transformers_and_onnx_objects', 'dummy_transformers_and_torch_and_note_seq_objects', 'dynamic_modules_utils', 'export_to_gif', 'export_to_obj', 'export_to_ply', 'export_to_video', 'export_utils', 'extract_commit_hash', 'get_class_from_dynamic_module', 'get_logger', 'get_objects_from_module', 'http_user_agent', 'hub_utils', 'import_utils', 'is_accelerate_available', 'is_accelerate_version', 'is_bs4_available', 'is_flax_available', 'is_ftfy_available', 'is_inflect_available', 'is_invisible_watermark_available', 'is_k_diffusion_available', 'is_k_diffusion_version', 'is_librosa_available', 'is_note_seq_available', 'is_omegaconf_available', 'is_onnx_available', 'is_peft_available', 'is_scipy_available', 'is_tensorboard_available', 'is_torch_available', 'is_torch_version', 'is_torchsde_available', 'is_transformers_available', 'is_transformers_version', 'is_unidecode_available', 'is_wandb_available', 'is_xformers_available', 'load_image', 
'loading_utils', 'logger', 'logging', 'make_image_grid', 'numpy_to_pil', 'os', 'outputs', 'peft_utils', 'pil_utils', 'pt_to_pil', 'recurse_remove_peft_layers', 'replace_example_docstring', 'requires_backends', 'state_dict_utils', 'version']
hello! Do you have any news? It still doesn't want to import is_compiled_module from anywhere... Python 3.11.3
ImportError: cannot import name 'is_compiled_module' from 'diffusers.utils' (/Users/.../sd/MetalDiffusion/venv/lib/python3.11/site-packages/diffusers/utils/__init__.py)
Maybe I could download that .py file, but from where?
@5shekel @TheLearner23 hi, you can clone the new repo at https://github.com/tencent-ailab/IP-Adapter
here is how I installed the dependencies.
using a Python virtual env
tested on Windows
python -m venv .venv
.venv/Scripts/activate # Windows
# source .venv/bin/activate # Linux
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install diffusers transformers
pip install ipykernel # for Jupyter notebooks
pip install accelerate
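Since the import errors in this thread come down to which diffusers version is installed, it can save time to check the version before debugging. The helpers below (`meets_min_version` and `check_package` are hypothetical names, not part of diffusers) are a minimal sketch using only the standard library:

```python
from importlib.metadata import version, PackageNotFoundError

def meets_min_version(installed: str, required: str) -> bool:
    """Compare dotted version strings numerically (ignores dev/rc suffixes)."""
    def parts(v):
        nums = []
        for p in v.split("."):
            digits = "".join(ch for ch in p if ch.isdigit())
            nums.append(int(digits) if digits else 0)
        return nums
    return parts(installed) >= parts(required)

def check_package(name: str, required: str) -> bool:
    """Return True if `name` is installed at `required` or newer."""
    try:
        return meets_min_version(version(name), required)
    except PackageNotFoundError:
        return False
```

For example, `check_package("diffusers", "0.21.0")` tells you whether you are on a release that reorganized the utils module.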
looks good!
Am I able to run it in macOS terminal? I guess yes.
$ python3 -m venv venv
$ source venv/bin/activate
Without dollar sign
You don't have CUDA if you don't have an Nvidia card. Maybe this can work on CPU, I don't know.
There's an AMD GPU on board... but how do I get around that and run on CPU?
update with your Mac model so people can assist. I can't, sorry. There is an attempt [1] to run the new models on a Mac, but I don't know about your situation. Probably better to open a new issue; this is a different topic. [1] https://github.com/tencent-ailab/IP-Adapter/issues/53
@Sivll try adding this [untested] check to the model-loading cell
device = "cuda" if torch.cuda.is_available() else "cpu"
instead of the hardcoded
device = "cuda"
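For the macOS users above, the same fallback can be extended to Apple's MPS backend. This is a sketch with a hypothetical helper (`pick_device` is not a torch or diffusers function); the availability flags would come from torch at runtime:

```python
def pick_device(cuda_available: bool, mps_available: bool = False) -> str:
    """Choose the best available torch device string, preferring GPU backends."""
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"  # Apple-silicon Metal backend
    return "cpu"

# In a real script (requires torch):
# import torch
# device = pick_device(torch.cuda.is_available(),
#                      torch.backends.mps.is_available())
```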
To fix this, you can open the file located at /stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/clipvision/__init__.py using a text editor or IDE (such as VSCode or PyCharm).
1. Navigate to line 81 and locate the line: clip_vision_h_uc = torch.load(clip_vision_h_uc)['uc'].
2. Modify this line to: clip_vision_h_uc = torch.load(clip_vision_h_uc, map_location=torch.device('cpu'))['uc'].
3. Save your changes and exit the editor.
4. Run your program again. This should prevent any CUDA-related errors.
Another option is to add the command-line flag "--no-half" to your launch settings.
after
pip install diffusers[torch] #or pip install diffusers
I'm getting
>>> from diffusers.utils import is_compiled_module
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'is_compiled_module' from 'diffusers.utils' (C:\tools\miniconda3\envs\test\lib\site-packages\diffusers\utils\__init__.py)
A simple
print(dir(diffusers.utils))
can't find it:
>>> import diffusers.utils
>>> print(dir(diffusers.utils))
['BACKENDS_MAPPING', 'BaseOutput', 'CONFIG_NAME', 'DEPRECATED_REVISION_ARGS', 'DIFFUSERS_CACHE', 'DIFFUSERS_DYNAMIC_MODULE_NAME', 'DummyObject', 'ENV_VARS_TRUE_AND_AUTO_VALUES', 'ENV_VARS_TRUE_VALUES', 'FLAX_WEIGHTS_NAME', 'HF_HUB_OFFLINE', 'HF_MODULES_CACHE', 'HUGGINGFACE_CO_RESOLVE_ENDPOINT', 'ONNX_EXTERNAL_WEIGHTS_NAME', 'ONNX_WEIGHTS_NAME', 'OptionalDependencyNotAvailable', 'PIL_INTERPOLATION', 'PushToHubMixin', 'SAFETENSORS_WEIGHTS_NAME', 'USE_JAX', 'USE_TF', 'USE_TORCH', 'WEIGHTS_NAME', '_LazyModule', '__builtins__', '__cached__', '__doc__', '__file__', '__loader__', '__name__', '__package__', '__path__', '__spec__', '__version__', '_add_variant', '_get_model_file', 'check_min_version', 'constants', 'deprecate', 'deprecation_utils', 'doc_utils', 'dummy_flax_and_transformers_objects', 'dummy_flax_objects', 'dummy_note_seq_objects', 'dummy_onnx_objects', 'dummy_torch_and_librosa_objects', 'dummy_torch_and_scipy_objects', 'dummy_torch_and_torchsde_objects', 'dummy_torch_and_transformers_and_k_diffusion_objects', 'dummy_torch_and_transformers_and_onnx_objects', 'dummy_torch_and_transformers_objects', 'dummy_transformers_and_torch_and_note_seq_objects', 'dynamic_modules_utils', 'export_to_gif', 'export_to_obj', 'export_to_ply', 'export_to_video', 'export_utils', 'extract_commit_hash', 'get_class_from_dynamic_module', 'get_logger', 'get_objects_from_module', 'http_user_agent', 'hub_utils', 'import_utils', 'is_accelerate_available', 'is_accelerate_version', 'is_bs4_available', 'is_flax_available', 'is_ftfy_available', 'is_inflect_available', 'is_invisible_watermark_available', 'is_k_diffusion_available', 'is_k_diffusion_version', 'is_librosa_available', 'is_note_seq_available', 'is_omegaconf_available', 'is_onnx_available', 'is_scipy_available', 'is_tensorboard_available', 'is_torch_available', 'is_torch_version', 'is_torchsde_available', 'is_transformers_available', 'is_transformers_version', 
'is_unidecode_available', 'is_wandb_available', 'is_xformers_available', 'load_image', 'loading_utils', 'logger', 'logging', 'make_image_grid', 'numpy_to_pil', 'os', 'outputs', 'pil_utils', 'pt_to_pil', 'replace_example_docstring', 'requires_backends', 'version']
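Rather than eyeballing a `dir()` dump like the one above, you can scan a package's submodules for the name you are missing. This is a generic sketch (`find_symbol` is a hypothetical helper, not part of diffusers), using only the standard library:

```python
import importlib
import pkgutil

def find_symbol(package_name, symbol):
    """Return the package and any direct submodules that define `symbol`."""
    pkg = importlib.import_module(package_name)
    hits = []
    if hasattr(pkg, symbol):
        hits.append(package_name)
    # walk only direct submodules; enough for flat utils packages
    for info in pkgutil.iter_modules(getattr(pkg, "__path__", [])):
        name = f"{package_name}.{info.name}"
        try:
            mod = importlib.import_module(name)
        except Exception:
            continue  # skip submodules with missing optional deps
        if hasattr(mod, symbol):
            hits.append(name)
    return hits
```

Running `find_symbol("diffusers.utils", "is_compiled_module")` on a recent diffusers install would show which submodule actually exports the helper.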
I see others using
from diffusers.utils.torch_utils import is_compiled_module
I fixed it by using:
from diffusers.utils.torch_utils import is_compiled_module
Thanks!
https://github.com/tencent-ailab/IP-Adapter/blob/22da45667898fd237ab54d3681db53a9ae98bf1e/ip_adapter/utils.py#L9
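To keep code working across both old and new diffusers layouts, a try/except import shim is a common pattern. The last-resort local definition below is an assumption, not the upstream implementation: it only checks for the `_orig_mod` attribute that `torch.compile` sets on wrapped modules.

```python
try:
    # newer diffusers moved the helper into torch_utils
    from diffusers.utils.torch_utils import is_compiled_module
except ImportError:
    try:
        # older diffusers exported it from utils directly
        from diffusers.utils import is_compiled_module
    except ImportError:
        # last-resort local stand-in (assumption: a compiled module
        # carries the original module under `_orig_mod`)
        def is_compiled_module(module) -> bool:
            return hasattr(module, "_orig_mod")
```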