Open 5sb-5 opened 3 weeks ago
Hi, `torch.fft.fft2` might not support float16. I'm not sure why it's not a problem for me; maybe it's due to different torch versions. Can you try making `gt_init` float32 first and casting it back to float16 after the FFT? For example, add `gt_init = gt_init.to(torch.float32)` at line 159 and `gt_init = gt_init.to(torch.float16)` at line 192.
Let me know if you have further questions!
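A minimal sketch of that upcast/downcast pattern (the function names here are illustrative, not the repo's; only the `gt_init`/`gt_patch` variables and the `fftshift(fft2(...))` call come from `optim_utils.py`):

```python
import torch

def fft_pattern_fp16_safe(gt_init: torch.Tensor) -> torch.Tensor:
    """FFT of a (possibly float16) latent, upcast so torch.fft.fft2 succeeds.

    torch.fft.fft2 raises "RuntimeError: Unsupported dtype Half" for float16
    inputs on CPU, so compute in float32 and let the caller cast any
    spatial-domain result back to float16 afterwards.
    """
    gt_init = gt_init.to(torch.float32)  # the cast suggested above
    return torch.fft.fftshift(torch.fft.fft2(gt_init), dim=(-1, -2))

def to_spatial_fp16(gt_patch: torch.Tensor) -> torch.Tensor:
    """Invert the shifted FFT and restore float16 for the half-precision pipeline."""
    spatial = torch.fft.ifft2(torch.fft.ifftshift(gt_patch, dim=(-1, -2))).real
    return spatial.to(torch.float16)  # cast back after the FFT work is done
```

Any frequency-domain edits (e.g. writing the ring pattern) would happen between the two calls, on the complex float32 tensor.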
Traceback (most recent call last):
File "/tmp/pycharm_project_368/tree-ring-watermark-main/run_tree_ring_watermark.py", line 218, in
Hello, I encountered the above error while running run_tree_ring_watermark.py. How can I resolve this issue?
Hi, I think it's a problem with the transformers version. Can you double-check that your environment has `transformers==4.23.1` and `diffusers==0.11.1`? Sorry about this. This code only supports the old versions, but you can check out this implementation if you want to use the latest transformers or diffusers: https://huggingface.co/spaces/devingulliver/dendrokronos/blob/main/app.py.
root@autodl-container-4c99419959-9ca65ed2:~# python '/tmp/pycharm_project_368/tree-ring-watermark-main/run_tree_ring_watermark.py'
/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Traceback (most recent call last):
File "/root/miniconda3/lib/python3.8/site-packages/urllib3/connection.py", line 196, in _new_conn
sock = connection.create_connection(
File "/root/miniconda3/lib/python3.8/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/root/miniconda3/lib/python3.8/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
socket.timeout: timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 789, in urlopen
    response = self._make_request(
  File "/root/miniconda3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 490, in _make_request
    raise new_e
  File "/root/miniconda3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 466, in _make_request
    self._validate_conn(conn)
  File "/root/miniconda3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 1095, in _validate_conn
    conn.connect()
  File "/root/miniconda3/lib/python3.8/site-packages/urllib3/connection.py", line 615, in connect
    self.sock = sock = self._new_conn()
  File "/root/miniconda3/lib/python3.8/site-packages/urllib3/connection.py", line 205, in _new_conn
    raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7f06a3d7d1c0>, 'Connection to huggingface.co timed out. (connect timeout=10)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/requests/adapters.py", line 667, in send
    resp = conn.urlopen(
  File "/root/miniconda3/lib/python3.8/site-packages/urllib3/connectionpool.py", line 843, in urlopen
    retries = retries.increment(
  File "/root/miniconda3/lib/python3.8/site-packages/urllib3/util/retry.py", line 519, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /stabilityai/stable-diffusion-2-1-base/resolve/main/scheduler/scheduler_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f06a3d7d1c0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1751, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1673, in get_hf_file_metadata
    r = _request_wrapper(
  File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 376, in _request_wrapper
    response = _request_wrapper(
  File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 399, in _request_wrapper
    response = get_session().request(method=method, url=url, **params)
  File "/root/miniconda3/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 66, in send
    return super().send(request, *args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/requests/adapters.py", line 688, in send
    raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /stabilityai/stable-diffusion-2-1-base/resolve/main/scheduler/scheduler_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f06a3d7d1c0>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: a9879712-70b7-4bdf-8d5f-8b54c437825a)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/site-packages/diffusers/configuration_utils.py", line 326, in load_config
    config_file = hf_hub_download(
  File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1240, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1347, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "/root/miniconda3/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1857, in _raise_on_head_call_error
    raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/tmp/pycharm_project_368/tree-ring-watermark-main/run_tree_ring_watermark.py", line 218, in
Hi, I have synchronized my environment with the versions in your code, but I am still getting an error.
(tree) root@autodl-container-956011b53c-0cffe4e6:/tmp/pycharm_project_808/tree-ring-watermark-main# python run_tree_ring_watermark.py --run_name rotation --w_channel 3 --w_pattern ring --r_degree 75 --start 0 --end 1000 --with_tracking
wandb: Currently logged in as: 1832235909 (1832235909-zhengzhou). Use `wandb login --relogin` to force relogin
wandb: Tracking run with wandb version 0.17.7
wandb: Run data is saved locally in /tmp/pycharm_project_808/tree-ring-watermark-main/wandb/run-20240827_185009-88rmgxd2
wandb: Run `wandb offline` to turn off syncing.
wandb: Syncing run rotation
wandb: ⭐️ View project at https://wandb.ai/1832235909-zhengzhou/diffusion_watermark
wandb: 🚀 View run at https://wandb.ai/1832235909-zhengzhou/diffusion_watermark/runs/88rmgxd2
Traceback (most recent call last):
File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/urllib3/connection.py", line 196, in _new_conn
sock = connection.create_connection(
File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
socket.timeout: timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/urllib3/connectionpool.py", line 789, in urlopen
    response = self._make_request(
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/urllib3/connectionpool.py", line 490, in _make_request
    raise new_e
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/urllib3/connectionpool.py", line 466, in _make_request
    self._validate_conn(conn)
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/urllib3/connectionpool.py", line 1095, in _validate_conn
    conn.connect()
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/urllib3/connection.py", line 615, in connect
    self.sock = sock = self._new_conn()
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/urllib3/connection.py", line 205, in _new_conn
    raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPSConnection object at 0x7f1124142a30>, 'Connection to huggingface.co timed out. (connect timeout=10)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/requests/adapters.py", line 667, in send
    resp = conn.urlopen(
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/urllib3/connectionpool.py", line 843, in urlopen
    retries = retries.increment(
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/urllib3/util/retry.py", line 519, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /stabilityai/stable-diffusion-2-1-base/resolve/main/scheduler/scheduler_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f1124142a30>, 'Connection to huggingface.co timed out. (connect timeout=10)'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1751, in _get_metadata_or_catch_error
    metadata = get_hf_file_metadata(
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1673, in get_hf_file_metadata
    r = _request_wrapper(
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 376, in _request_wrapper
    response = _request_wrapper(
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 399, in _request_wrapper
    response = get_session().request(method=method, url=url, **params)
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/huggingface_hub/utils/_http.py", line 66, in send
    return super().send(request, *args, **kwargs)
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/requests/adapters.py", line 688, in send
    raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /stabilityai/stable-diffusion-2-1-base/resolve/main/scheduler/scheduler_config.json (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7f1124142a30>, 'Connection to huggingface.co timed out. (connect timeout=10)'))"), '(Request ID: 62eba637-30ac-4a71-80d1-0dec203aeded)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/diffusers/configuration_utils.py", line 379, in load_config
    config_file = hf_hub_download(
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/huggingface_hub/utils/_deprecation.py", line 101, in inner_f
    return f(*args, **kwargs)
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
    return fn(*args, **kwargs)
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1240, in hf_hub_download
    return _hf_hub_download_to_cache_dir(
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1347, in _hf_hub_download_to_cache_dir
    _raise_on_head_call_error(head_call_error, force_download, local_files_only)
  File "/root/miniconda3/envs/tree/lib/python3.8/site-packages/huggingface_hub/file_download.py", line 1857, in _raise_on_head_call_error
    raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "run_tree_ring_watermark.py", line 218, in <module>
wandb.require("core")! See https://wandb.me/wandb-core for more information.
Hi, I have synchronized my environment with the versions in your code, but I am still getting an error.
Hi, sorry for the late reply. Not sure if there's something wrong on the Hugging Face side, but could you try `stabilityai/stable-diffusion-2-base` instead?
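If the timeouts persist regardless of the model id, the machine may simply be unable to reach huggingface.co. One possible workaround (assuming a reasonably recent `huggingface_hub`, which honors the `HF_ENDPOINT` variable; `hf-mirror.com` below is only an example mirror, substitute one you trust):

```shell
# Route Hugging Face Hub downloads through a mirror endpoint.
export HF_ENDPOINT=https://hf-mirror.com
# Then rerun the script in the same shell, e.g.:
#   python run_tree_ring_watermark.py --run_name rotation --w_channel 3 --w_pattern ring
echo "HF_ENDPOINT=$HF_ENDPOINT"
```

Alternatively, pre-downloading the model on a machine with access and copying the `~/.cache/huggingface` directory over avoids the network entirely.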
E:\miniconda3\envs\tree-ring-watermark-main\python.exe D:\tree-ring-watermark-main\tree-ring-watermark-main\run_tree_ring_watermark.py
E:\miniconda3\envs\tree-ring-watermark-main\lib\site-packages\huggingface_hub\file_download.py:1150: FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume when possible. If you want to force a new download, use `force_download=True`.
  warnings.warn(
Cannot initialize model with low cpu memory usage because `accelerate` was not found in the environment. Defaulting to `low_cpu_mem_usage=False`. It is strongly recommended to install `accelerate` for faster and less memory-intense model loading. You can do so with: pip install accelerate.
E:\miniconda3\envs\tree-ring-watermark-main\lib\site-packages\diffusers\modeling_utils.py:96: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  return torch.load(checkpoint_file, map_location="cpu")
E:\miniconda3\envs\tree-ring-watermark-main\lib\site-packages\transformers\modeling_utils.py:399: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
  return torch.load(checkpoint_file, map_location="cpu")
Pipelines loaded with `torch_dtype=torch.float16` cannot run with `cpu` device. It is not recommended to move them to `cpu` as running them will fail. Please make sure to use an accelerator to run the pipeline in inference, due to the lack of support for `float16` operations on this device in PyTorch. Please, remove the `torch_dtype=torch.float16` argument, or use another device for inference.
Using the latest cached version of the dataset since Gustavosta/Stable-Diffusion-Prompts couldn't be found on the Hugging Face Hub
Found the latest cached dataset configuration 'default' at C:\Users\宋雨轩\.cache\huggingface\datasets\Gustavosta___stable-diffusion-prompts\default\0.0.0\d816d4a05cb89bde39dd99284c459801e1e7e69a (last modified on Wed Aug 21 16:50:04 2024).
Traceback (most recent call last):
  File "D:\tree-ring-watermark-main\tree-ring-watermark-main\run_tree_ring_watermark.py", line 217, in <module>
    main(args)
  File "D:\tree-ring-watermark-main\tree-ring-watermark-main\run_tree_ring_watermark.py", line 48, in main
    gt_patch = get_watermarking_pattern(pipe, args, device)
  File "D:\tree-ring-watermark-main\tree-ring-watermark-main\optim_utils.py", line 175, in get_watermarking_pattern
    gt_patch = torch.fft.fftshift(torch.fft.fft2(gt_init), dim=(-1, -2))
RuntimeError: Unsupported dtype Half
Hello, I encountered the above error while running run_tree_ring_watermark.py. How can I resolve this issue?