MonolithFoundation closed this issue 4 weeks ago
Thanks for reporting. Next time, please share your system info (as requested in the contribution guide and in the issue template); it would have been especially relevant here.
You're most likely using Transformers v4.46, which is not compatible with TRL < v0.12 (v0.12 is about to be released). Make sure to downgrade Transformers:
pip install "transformers<=4.45"
OR
upgrade to TRL >= 0.12 (this won't work before the release):
pip install "trl>=0.12"
For reference, this issue has been solved in #2246.
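To confirm which versions are actually installed in the failing environment, the following quick check may help (a minimal sketch using only the Python standard library; nothing TRL-specific is assumed):
# Check the installed versions before deciding whether to downgrade
# transformers or upgrade trl.
from importlib.metadata import version
print("transformers:", version("transformers"))
print("trl:", version("trl"))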
Hi, I am using transformers 4.47 and trl 0.11.4.
Could you tell me when 0.12 will be released, and why this error happens before trl 0.12?
Worked for me as well; I was using Unsloth and getting this error.
I still don't understand the root cause, though. The API changes so rapidly.
In our trl trainers, we had the following method:
def get_batch_samples(self, model, batch):
However, with the recent addition in Hugging Face Transformers PR #34198, Trainer now includes a new get_batch_samples method:
def get_batch_samples(self, epoch_iterator, num_batches):
This new method has the same name but a different purpose and parameter structure. Since our trl trainer inherits from the Transformers Trainer class, our original get_batch_samples method in trl unintentionally overrides the new method in Trainer. This causes a conflict: when self.get_batch_samples(epoch_iterator, num_batches) is called, it actually uses our trl method signature (get_batch_samples(model, batch)) instead. This results in the following:
- epoch_iterator (expected by the new method as a generator) is passed as the model parameter.
- num_batches (expected as an integer) is passed as the batch parameter.
Consequently, when the method tries to execute model.generate(...), it raises an AttributeError, because model is now a generator (it receives epoch_iterator) rather than the expected model with a .generate method. This leads to the error:
policy_output = model.generate(
^^^^^^^^^^^^^^
AttributeError: 'generator' object has no attribute 'generate'
To resolve this, we needed to rename the method in #2246.
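For anyone trying to understand the mechanism, here is a minimal, self-contained sketch (the class and method bodies below are illustrative stand-ins, not the actual TRL/Transformers code) showing how a subclass method with the same name but a different signature shadows the parent's method and ends up receiving the wrong arguments:
# Illustrative stand-in for transformers.Trainer after PR #34198.
class Trainer:
    def get_batch_samples(self, epoch_iterator, num_batches):
        # New upstream signature: pull `num_batches` batches from the iterator.
        return [next(epoch_iterator) for _ in range(num_batches)]

    def train(self, data):
        epoch_iterator = (batch for batch in data)  # a generator
        # Upstream code calls the *new* signature...
        return self.get_batch_samples(epoch_iterator, 2)

# Illustrative stand-in for the old TRL trainer.
class DPOTrainer(Trainer):
    def get_batch_samples(self, model, batch):
        # Old TRL signature: `model` is expected to be a model object.
        return model.generate(batch)

DPOTrainer().train([1, 2, 3, 4])
# ...but Python resolves self.get_batch_samples to the subclass method, so the
# generator is bound to `model`, and model.generate(...) raises:
# AttributeError: 'generator' object has no attribute 'generate'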
@qgallouedec So it is! However, after I upgraded trl to the master branch, the error still persists. Why?
Please share your system info with trl env
I am getting this error as well, and I am also confused about the versions, backward compatibility, and the fix. What combination of the transformers and trl libraries resolves this issue? (Which versions of these two libraries should we install so we don't see the error today?)
Installed from master and it worked. Thanks.
pip install --upgrade trl
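Note that pip install --upgrade trl only pulls the latest published release. If the fix is not in a release yet, installing directly from the main branch is the usual workaround (standard pip-from-Git syntax, assuming the huggingface/trl GitHub repository):
pip install git+https://github.com/huggingface/trl.git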
trl dpo AttributeError: 'generator' object has no attribute 'generate'
The model should be normal, so why does it keep printing this error: