Closed: adeerkhan closed this issue 8 months ago.
There seems to be an import error: `ImportError: Using bitsandbytes 8-bit quantization requires Accelerate: pip install accelerate and the latest version of bitsandbytes: pip install -i https://pypi.org/simple/ bitsandbytes`. Just follow those install steps and check.
Yes, you're right. The `FileNotFoundError: No such file or directory: '.threestudio_cache/text_embeddings/xx.pt'` seems to be caused by the missing bitsandbytes. We cloned a fresh repo, installed it with `pip install "bitsandbytes>=0.39.0"`, and re-tested; everything works now. We've also updated the `requirements.txt` file accordingly. Thanks for pointing this out!
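Before re-running, it may help to confirm that both packages the ImportError names are importable from the same environment that runs `launch.py`. A minimal sanity check (the helper name is hypothetical, not part of GraphDreamer):

```python
# Hypothetical helper: report which of the 8-bit quantization dependencies
# named in the ImportError are missing from the current environment.
import importlib.util


def missing_quantization_deps(packages=("accelerate", "bitsandbytes")):
    """Return the subset of `packages` the import system cannot find."""
    return [name for name in packages if importlib.util.find_spec(name) is None]


missing = missing_quantization_deps()
if missing:
    # e.g. pip install accelerate "bitsandbytes>=0.39.0"
    print("missing:", ", ".join(missing))
else:
    print("accelerate and bitsandbytes are both importable")
```

If this check reports both packages present but `launch.py` still raises the ImportError, the check and `launch.py` are most likely running under different interpreters (e.g. the `venv/GraphDreamer` environment seen in the tracebacks vs. the system Python).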
I am still getting this error:
(GraphDreamer) root@1ca31eeab2f7:/workspace/GraphDreamer# bash scripts/blue_jay.sh examples/gd-if/blue_jay
Seed set to 630
Using Vanilla MLP:
Using Vanilla MLP:
[INFO] Loading Deep Floyd ...
A mixture of fp16 and non-fp16 filenames will be loaded. Loaded fp16 filenames: [text_encoder/model.fp16-00002-of-00002.safetensors, safety_checker/model.fp16.safetensors, text_encoder/model.fp16-00001-of-00002.safetensors, unet/diffusion_pytorch_model.fp16.safetensors] Loaded non-fp16 filenames: [watermarker/diffusion_pytorch_model.safetensors] If this behavior is not expected, please check your folder structure.
Loading pipeline components...:   0%|          | 0/3 [00:00<?, ?it/s]
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████| 3/3 [00:09<00:00, 3.22s/it]
[INFO] PyTorch2.0 uses memory efficient attention by default.
[INFO] Loaded Deep Floyd!
[INFO] Using prompt [a DSLR photo of a blue jay standing on a large basket of rainbow macarons] and negative prompt [ugly, bad anatomy, blurry, pixelated obscure, unnatural colors, poor lighting, dull, and unclear, cropped, lowres, low quality, artifacts, duplicate, morbid, mutilated, poorly drawn face, deformed, dehydrated, bad proportions]
[INFO] Using view-dependent prompts [side]:[a DSLR photo of a blue jay standing on a large basket of rainbow macarons, side view] [front]:[a DSLR photo of a blue jay standing on a large basket of rainbow macarons, front view] [back]:[a DSLR photo of a blue jay standing on a large basket of rainbow macarons, back view] [overhead]:[a DSLR photo of a blue jay standing on a large basket of rainbow macarons, overhead view]
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.
Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/workspace/GraphDreamer/threestudio/models/prompt_processors/deepfloyd_prompt_processor.py", line 63, in spawn_func
    text_encoder = T5EncoderModel.from_pretrained(
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3024, in from_pretrained
    hf_quantizer.validate_environment(
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_8bit.py", line 62, in validate_environment
    raise ImportError(
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the `legacy` (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set `legacy=False`. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in https://github.com/huggingface/transformers/pull/24565
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.
Process SpawnProcess-2:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/workspace/GraphDreamer/threestudio/models/prompt_processors/deepfloyd_prompt_processor.py", line 63, in spawn_func
    text_encoder = T5EncoderModel.from_pretrained(
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3024, in from_pretrained
    hf_quantizer.validate_environment(
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_8bit.py", line 62, in validate_environment
    raise ImportError(
ImportError: Using `bitsandbytes` 8-bit quantization requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://pypi.org/simple/ bitsandbytes`
Traceback (most recent call last):
  File "/workspace/GraphDreamer/launch.py", line 181, in <module>
    main()
  File "/workspace/GraphDreamer/launch.py", line 109, in main
    system: BaseSystem = threestudio.find(cfg.system_type)(
  File "/workspace/GraphDreamer/threestudio/systems/base.py", line 40, in __init__
    self.configure()
  File "/workspace/GraphDreamer/threestudio/systems/gdreamer.py", line 63, in configure
    prompt_processor = threestudio.find(self.cfg.prompt_processor_type)(self.cfg.prompt_processor)
  File "/workspace/GraphDreamer/threestudio/utils/base.py", line 83, in __init__
    self.configure(*args, **kwargs)
  File "/workspace/GraphDreamer/threestudio/models/prompt_processors/base.py", line 343, in configure
    self.load_text_embeddings()
  File "/workspace/GraphDreamer/threestudio/models/prompt_processors/base.py", line 403, in load_text_embeddings
    self.text_embeddings = self.load_from_cache(self.prompt)[None, ...]
  File "/workspace/GraphDreamer/threestudio/models/prompt_processors/base.py", line 436, in load_from_cache
    return torch.load(cache_path, map_location=self.device)
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/torch/serialization.py", line 791, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/torch/serialization.py", line 271, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/torch/serialization.py", line 252, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: '.threestudio_cache/text_embeddings/34125bb66f96f950ae28099d07488e81f93deb0270e23c6c8d70a8a18a57bcde59884682798acdce2ba4651299883fd38087bb4dfbee2e43d56b34e1b4a1ba6e.pt'

examples/gd-sd-refine/blue_jay
Seed set to 630
Using Vanilla MLP:
Using Vanilla MLP:
[INFO] Loading Stable Diffusion ...
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████| 4/4 [00:16<00:00, 4.14s/it]
[INFO] Loaded Stable Diffusion!
[INFO] Using prompt [a DSLR photo of a blue jay standing on a large basket of rainbow macarons, 4K high-resolution high-quality] and negative prompt [ugly, bad anatomy, blurry, pixelated obscure, unnatural colors, poor lighting, dull, and unclear, cropped, lowres, low quality, artifacts, duplicate, morbid, mutilated, poorly drawn face, deformed, dehydrated, bad proportions]
[INFO] Using view-dependent prompts [side]:[a DSLR photo of a blue jay standing on a large basket of rainbow macarons, 4K high-resolution high-quality, side view] [front]:[a DSLR photo of a blue jay standing on a large basket of rainbow macarons, 4K high-resolution high-quality, front view] [back]:[a DSLR photo of a blue jay standing on a large basket of rainbow macarons, 4K high-resolution high-quality, back view] [overhead]:[a DSLR photo of a blue jay standing on a large basket of rainbow macarons, 4K high-resolution high-quality, overhead view]
[WARNING] Config 'system.edge_list' for edge rendering is not provided (not necessary for two- and three-object scenes). Use default (cyclic) graph.
['a DSLR photo of a blue jay', '4K high-resolution high-quality']
['a DSLR photo of a large basket of rainbow macarons', '4K high-resolution high-quality']
[INFO] Object 0 prompts [pos]:[a DSLR photo of a blue jay, 4K high-resolution high-quality] [neg]:[a DSLR photo of a large basket of rainbow macarons]
[INFO] Using prompt [a DSLR photo of a blue jay, 4K high-resolution high-quality] and negative prompt [a DSLR photo of a large basket of rainbow macarons]
[INFO] Using view-dependent prompts [side]:[a DSLR photo of a blue jay, 4K high-resolution high-quality, side view] [front]:[a DSLR photo of a blue jay, 4K high-resolution high-quality, front view] [back]:[a DSLR photo of a blue jay, 4K high-resolution high-quality, back view] [overhead]:[a DSLR photo of a blue jay, 4K high-resolution high-quality, overhead view]
[INFO] Object 1 prompts [pos]:[a DSLR photo of a large basket of rainbow macarons, 4K high-resolution high-quality] [neg]:[a DSLR photo of a blue jay]
[INFO] Using prompt [a DSLR photo of a large basket of rainbow macarons, 4K high-resolution high-quality] and negative prompt [a DSLR photo of a blue jay]
[INFO] Using view-dependent prompts [side]:[a DSLR photo of a large basket of rainbow macarons, 4K high-resolution high-quality, side view] [front]:[a DSLR photo of a large basket of rainbow macarons, 4K high-resolution high-quality, front view] [back]:[a DSLR photo of a large basket of rainbow macarons, 4K high-resolution high-quality, back view] [overhead]:[a DSLR photo of a large basket of rainbow macarons, 4K high-resolution high-quality, overhead view]
[INFO] Using 16bit Automatic Mixed Precision (AMP)
[INFO] GPU available: True (cuda), used: True
[INFO] TPU available: False, using: 0 TPU cores
[INFO] IPU available: False, using: 0 IPUs
[INFO] HPU available: False, using: 0 HPUs
[INFO] You are using a CUDA device ('NVIDIA RTX A5000') that has Tensor Cores. To properly utilize them, you should set `torch.set_float32_matmul_precision('medium' | 'high')` which will trade-off precision for performance. For more details, read https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html#torch.set_float32_matmul_precision
[INFO] Restoring states from the checkpoint path at examples/gd-if/blue_jay/ckpts/last.ckpt
Traceback (most recent call last):
  File "/workspace/GraphDreamer/launch.py", line 181, in <module>
    main()
  File "/workspace/GraphDreamer/launch.py", line 165, in main
    trainer.fit(system, datamodule=dm, ckpt_path=cfg.resume)
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 544, in fit
    call._call_and_handle_interrupt(
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 580, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 956, in _run
    self._checkpoint_connector._restore_modules_and_callbacks(ckpt_path)
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py", line 397, in _restore_modules_and_callbacks
    self.resume_start(checkpoint_path)
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/pytorch_lightning/trainer/connectors/checkpoint_connector.py", line 79, in resume_start
    loaded_checkpoint = self.trainer.strategy.load_checkpoint(checkpoint_path)
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/pytorch_lightning/strategies/strategy.py", line 368, in load_checkpoint
    return self.checkpoint_io.load_checkpoint(checkpoint_path)
  File "/workspace/GraphDreamer/venv/GraphDreamer/lib/python3.10/site-packages/lightning_fabric/plugins/io/torch_io.py", line 81, in load_checkpoint
    raise FileNotFoundError(f"Checkpoint file not found: {path}")
FileNotFoundError: Checkpoint file not found: examples/gd-if/blue_jay/ckpts/last.ckpt