threestudio-project / threestudio

A unified framework for 3D content generation.
Apache License 2.0

RuntimeError: Only Tensors of floating point and complex dtype can require gradients #260

Closed quintendewilde closed 1 year ago

quintendewilde commented 1 year ago

I had this working a week ago. I started it up today and got this error.

What's changed in the Colab version? I'm running the latest from this github page!

```
Downloading model.8bit.safetensors: 100% 7.92G/7.92G [00:49<00:00, 161MB/s]
You are loading your model in 8bit or 4bit but no linear modules were found in your model. Please double check your model architecture, or submit an issue on github if you think this is a bug.
Traceback (most recent call last):
  File "/content/threestudio/launch.py", line 237, in <module>
    main(args, extras)
  File "/content/threestudio/launch.py", line 180, in main
    trainer.fit(system, datamodule=dm, ckpt_path=cfg.resume)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 529, in fit
    call._call_and_handle_interrupt(
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/call.py", line 42, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 568, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/trainer.py", line 954, in _run
    call._call_lightning_module_hook(self, "on_fit_start")
  File "/usr/local/lib/python3.10/dist-packages/pytorch_lightning/trainer/call.py", line 144, in _call_lightning_module_hook
    output = fn(*args, **kwargs)
  File "/content/threestudio/threestudio/systems/dreamfusion.py", line 32, in on_fit_start
    self.prompt_processor = threestudio.find(self.cfg.prompt_processor_type)(
  File "/content/threestudio/threestudio/utils/base.py", line 63, in __init__
    self.configure(*args, **kwargs)
  File "/content/threestudio/threestudio/models/prompt_processors/base.py", line 335, in configure
    self.prepare_text_embeddings()
  File "/usr/local/lib/python3.10/dist-packages/lightning_utilities/core/rank_zero.py", line 32, in wrapped_fn
    return fn(*args, **kwargs)
  File "/content/threestudio/threestudio/models/prompt_processors/base.py", line 382, in prepare_text_embeddings
    self.spawn_func(
  File "/content/threestudio/threestudio/models/prompt_processors/deepfloyd_prompt_processor.py", line 61, in spawn_func
    text_encoder = T5EncoderModel.from_pretrained(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 2903, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 3260, in _load_pretrained_model
    new_error_msgs, offload_index, state_dict_index = _load_state_dict_into_meta_model(
  File "/usr/local/lib/python3.10/dist-packages/transformers/modeling_utils.py", line 725, in _load_state_dict_into_meta_model
    set_module_quantized_tensor_to_device(
  File "/usr/local/lib/python3.10/dist-packages/transformers/utils/bitsandbytes.py", line 116, in set_module_quantized_tensor_to_device
    new_value = nn.Parameter(new_value, requires_grad=old_value.requires_grad)
  File "/usr/local/lib/python3.10/dist-packages/torch/nn/parameter.py", line 36, in __new__
    return torch.Tensor._make_subclass(cls, data, requires_grad)
RuntimeError: Only Tensors of floating point and complex dtype can require gradients
```
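The underlying PyTorch constraint at the bottom of the traceback can be reproduced in isolation: `nn.Parameter` only allows `requires_grad=True` for floating-point or complex dtypes, so wrapping an int8-quantized weight this way raises the same error. A minimal sketch (not threestudio code, just the bare constraint):

```python
import torch

# An int8 tensor, standing in for an 8-bit quantized weight.
int8_weight = torch.zeros(2, 2, dtype=torch.int8)

# Gradients require a floating-point or complex dtype, so this raises RuntimeError.
try:
    torch.nn.Parameter(int8_weight, requires_grad=True)
except RuntimeError as e:
    print(e)

# Wrapping with requires_grad=False is allowed for any dtype.
frozen = torch.nn.Parameter(int8_weight, requires_grad=False)
print(frozen.requires_grad)
```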

bennyguo commented 1 year ago

Probably due to the transformers or bitsandbytes version. Will fix later.

thuliu-yt16 commented 1 year ago

Yes, you can try transformers==4.28.1 and bitsandbytes==0.38.1. That works for me.
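On Colab that would be `pip install transformers==4.28.1 bitsandbytes==0.38.1` before running `launch.py` (followed by a runtime restart so the already-imported modules are reloaded). A small sketch for checking that the pins actually took effect, using only the standard library (`check_pins` is a hypothetical helper, not part of threestudio):

```python
from importlib.metadata import version, PackageNotFoundError

def check_pins(pins):
    """Map each package name to (installed_version, matches_pin).

    installed_version is None when the package is not installed.
    """
    report = {}
    for pkg, wanted in pins.items():
        try:
            got = version(pkg)
        except PackageNotFoundError:
            got = None
        report[pkg] = (got, got == wanted)
    return report

# The versions suggested in this thread.
print(check_pins({"transformers": "4.28.1", "bitsandbytes": "0.38.1"}))
```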

XLionXL commented 1 year ago

> Yes, you can try transformers==4.28.1 and bitsandbytes==0.38.1. That works for me.

Thanks, I ran the code successfully after adopting your advice!