Language I am using the model on (English, Chinese ...):
Adapter setup I am using (if any):
The problem arises when using:
[ ] the official example scripts: (give details below)
[ ] my own modified scripts: (give details below)
The task I am working on is:
[ ] an official GLUE/SQuAD task: (give the name)
[ ] my own task or dataset: (give details below)
To reproduce
Steps to reproduce the behavior:
from transformers import AutoAdapterModel, AutoTokenizer

model = AutoAdapterModel.from_pretrained('xlm-roberta-base')

# Accessing the embeddings through the base model works:
model.base_model.loaded_embeddings
# > {'default': Embedding(250002, 768, padding_idx=1)}

# The same property on the top-level model raises an AttributeError:
model.loaded_embeddings
File ~/.conda/envs/adapter/lib/python3.10/site-packages/torch/nn/modules/module.py:1185, in Module.__getattr__(self, name)
1183 if name in modules:
1184 return modules[name]
-> 1185 raise AttributeError("'{}' object has no attribute '{}'".format(
1186 type(self).__name__, name))
AttributeError: 'XLMRobertaAdapterModel' object has no attribute 'loaded_embeddings'
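The second traceback is the same failure for the active embeddings property; it presumably comes from an access like the following (the triggering call is not part of the snippet above, so this is an assumption):

model.active_embeddings  # presumably raises the same AttributeError on the top-level class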
File ~/.conda/envs/adapter/lib/python3.10/site-packages/torch/nn/modules/module.py:1185, in Module.__getattr__(self, name)
1183 if name in modules:
1184 return modules[name]
-> 1185 raise AttributeError("'{}' object has no attribute '{}'".format(
1186 type(self).__name__, name))
AttributeError: 'XLMRobertaAdapterModel' object has no attribute 'active_embeddings'
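The ValueError below appears to be triggered by trying to activate the name 'default', which exists only as an embedding and not as an adapter; presumably via something like the following, though the exact call is not shown above:

model.set_active_adapters('default')  # assumption: 'default' is a loaded embedding name, not an adapter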
File /expscratch/eyang/workspace/adapter/adapter-transformers/src/transformers/adapters/heads/base.py:641, in ModelWithFlexibleHeadsAdaptersMixin.set_active_adapters(self, adapter_setup, skip_layers)
627 def set_active_adapters(
628 self, adapter_setup: Union[list, AdapterCompositionBlock], skip_layers: Optional[List[int]] = None
629 ):
630 """
631 Sets the adapter modules to be used by default in every forward pass. This setting can be overriden by passing
632 the `adapter_names` parameter in the `foward()` pass. If no adapter with the given name is found, no module of
(...)
639 The list of adapters to be activated by default. Can be a fusion or stacking configuration.
640 """
--> 641 self.base_model.set_active_adapters(adapter_setup, skip_layers)
642 # use last adapter name as name of prediction head
643 if self.active_adapters:
File /expscratch/eyang/workspace/adapter/adapter-transformers/src/transformers/adapters/model_mixin.py:358, in ModelAdaptersMixin.set_active_adapters(self, adapter_setup, skip_layers)
356 for adapter_name in adapter_setup.flatten():
357 if adapter_name not in self.config.adapters.adapters:
--> 358 raise ValueError(
359 f"No adapter with name '{adapter_name}' found. Please make sure that all specified adapters are correctly loaded."
360 )
362 # Make sure LoRA is reset
363 self.reset_lora()
ValueError: No adapter with name 'default' found. Please make sure that all specified adapters are correctly loaded.
Expected behavior
As specified in the documentation, .loaded_embeddings, .active_embeddings, and .set_active_embeddings should be available on the top-level model class; at the very least, they should be exposed in the same way as .add_embeddings. They are currently only reachable through .base_model, which is not ideal.
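A minimal sketch of the current workaround, assuming the attributes remain reachable on the base model as described above:

from transformers import AutoAdapterModel

model = AutoAdapterModel.from_pretrained('xlm-roberta-base')

# Workaround: go through the base model instead of the top-level head model.
embeddings = model.base_model.loaded_embeddings     # works
active = model.base_model.active_embeddings         # works
model.base_model.set_active_embeddings('default')   # works; 'default' is the loaded embedding name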
Environment info
adapter-transformers version: 3.0.1+ (commit 11bd9d27962ccd8909a531544b0feba324a17388)

Information
Model I am using (Bert, XLNet ...): XLMR

Thanks for reporting, this is indeed an unintended breaking change recently introduced in the master branch. Should be reversed with the merge of #386.
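For illustration only (this is not the actual change in #386), restoring the expected behavior amounts to the top-level head model forwarding the embedding attributes to its base model, roughly like this hypothetical mixin:

# Hypothetical sketch; the class name and forwarding below are illustrative,
# not the actual adapter-transformers implementation.
class EmbeddingForwardingMixin:
    @property
    def loaded_embeddings(self):
        # Delegate to the base model, which owns the loaded embedding modules.
        return self.base_model.loaded_embeddings

    @property
    def active_embeddings(self):
        return self.base_model.active_embeddings

    def set_active_embeddings(self, name):
        # Switch the active embedding on the base model.
        self.base_model.set_active_embeddings(name)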