Open muelletm opened 1 year ago
Me too. If you solve it, please contact me.
Same issue here:
>>> transformers.__version__
'4.37.2'
>>>
Traceback (most recent call last):
  File "/home/careifai/Documents/GitHub/qlora/qlora.py", line 850, in <module>
    train()
  File "/home/careifai/Documents/GitHub/qlora/qlora.py", line 812, in train
    train_result = trainer.train()
                   ^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/transformers/trainer.py", line 1539, in train
    return inner_training_loop(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/transformers/trainer.py", line 1869, in _inner_training_loop
    tr_loss_step = self.training_step(model, inputs)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/transformers/trainer.py", line 2772, in training_step
    loss = self.compute_loss(model, inputs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/transformers/trainer.py", line 2795, in compute_loss
    outputs = model(**inputs)
              ^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/peft/peft_model.py", line 922, in forward
    return self.base_model(
           ^^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
    return forward_call(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/accelerate/hooks.py", line 165, in new_forward
    output = old_forward(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/transformers/models/gpt2/modeling_gpt2.py", line 1093, in forward
    torch.cuda.set_device(self.transformer.first_device)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/careifai/anaconda3/envs/careifai/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1688, in __getattr__
    raise AttributeError(f"'{type(self).__name__}' object has no attribute '{name}'")
AttributeError: 'GPT2Model' object has no attribute 'first_device'
Does anyone have a solution?
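One thing worth checking: in the traceback, `self.transformer.first_device` is only read inside GPT2's `if self.model_parallel:` branch, and `first_device` is only ever set by the deprecated `parallelize()` method. So the error means `model_parallel` ended up truthy even though `parallelize()` was never called. A workaround reported for errors like this is to force the flag off after loading the model, e.g. `model.is_parallelizable = False` and `model.model_parallel = False` before calling `trainer.train()` (both attributes exist on `PreTrainedModel`; whether this is safe for your setup is something you'd want to verify). The toy classes below are a stand-in, not the real transformers code, just to show the mechanism:

```python
class FakeTransformer:
    """Stands in for GPT2Model: `first_device` exists only after parallelize()."""
    def parallelize(self):
        self.first_device = "cuda:0"

class FakeLMHead:
    """Stands in for GPT2LMHeadModel's forward()."""
    def __init__(self):
        self.transformer = FakeTransformer()
        self.model_parallel = True  # stale flag, parallelize() was never called

    def forward(self):
        if self.model_parallel:
            # mirrors torch.cuda.set_device(self.transformer.first_device)
            return self.transformer.first_device
        return "single-device path"

m = FakeLMHead()
try:
    m.forward()
except AttributeError as e:
    print("reproduced:", e)  # same failure mode as the traceback above

m.model_parallel = False  # the workaround: skip the model-parallel branch
print(m.forward())
```

With the flag forced off, `forward()` never touches `first_device`, which is exactly why the attribute error disappears.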
It looks like EleutherAI/gpt-j-6b is not supported:
Env:
Running from docker:
Cmd:
Stacktrace: