adapter-hub / adapters

A Unified Library for Parameter-Efficient and Modular Transfer Learning
https://docs.adapterhub.ml
Apache License 2.0

`.loaded_embeddings`, `.active_embeddings`, and `.set_active_embeddings` should be exposed to the top-level model class from `.base_model` #382

Closed · eugene-yang closed this issue 2 years ago

eugene-yang commented 2 years ago

Environment info

Information

Model I am using (Bert, XLNet ...): XLMR

To reproduce

Steps to reproduce the behavior:

from transformers import AutoAdapterModel, AutoTokenizer
model = AutoAdapterModel.from_pretrained('xlm-roberta-base')

model.base_model.loaded_embeddings 
# > {'default': Embedding(250002, 768, padding_idx=1)}
model.loaded_embeddings 
File ~/.conda/envs/adapter/lib/python3.10/site-packages/torch/nn/modules/module.py:1185, in Module.__getattr__(self, name)
   1183     if name in modules:
   1184         return modules[name]
-> 1185 raise AttributeError("'{}' object has no attribute '{}'".format(
   1186     type(self).__name__, name))

AttributeError: 'XLMRobertaAdapterModel' object has no attribute 'loaded_embeddings'
model.base_model.active_embeddings 
# > 'default'
model.active_embeddings 
File ~/.conda/envs/adapter/lib/python3.10/site-packages/torch/nn/modules/module.py:1185, in Module.__getattr__(self, name)
   1183     if name in modules:
   1184         return modules[name]
-> 1185 raise AttributeError("'{}' object has no attribute '{}'".format(
   1186     type(self).__name__, name))

AttributeError: 'XLMRobertaAdapterModel' object has no attribute 'active_embeddings'
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-base')
tokenizer_new = AutoTokenizer.from_pretrained('xlm-roberta-base')
tokenizer_new.add_tokens(['[unused1]'])

model.add_embeddings('new', tokenizer_new, reference_tokenizer=tokenizer, reference_embedding='default')
model.base_model.active_embeddings
# > 'new'

model.set_active_adapters('default')
File /expscratch/eyang/workspace/adapter/adapter-transformers/src/transformers/adapters/heads/base.py:641, in ModelWithFlexibleHeadsAdaptersMixin.set_active_adapters(self, adapter_setup, skip_layers)
    627 def set_active_adapters(
    628     self, adapter_setup: Union[list, AdapterCompositionBlock], skip_layers: Optional[List[int]] = None
    629 ):
    630     """
    631     Sets the adapter modules to be used by default in every forward pass. This setting can be overriden by passing
    632     the `adapter_names` parameter in the `foward()` pass. If no adapter with the given name is found, no module of
   (...)
    639             The list of adapters to be activated by default. Can be a fusion or stacking configuration.
    640     """
--> 641     self.base_model.set_active_adapters(adapter_setup, skip_layers)
    642     # use last adapter name as name of prediction head
    643     if self.active_adapters:

File /expscratch/eyang/workspace/adapter/adapter-transformers/src/transformers/adapters/model_mixin.py:358, in ModelAdaptersMixin.set_active_adapters(self, adapter_setup, skip_layers)
    356     for adapter_name in adapter_setup.flatten():
    357         if adapter_name not in self.config.adapters.adapters:
--> 358             raise ValueError(
    359                 f"No adapter with name '{adapter_name}' found. Please make sure that all specified adapters are correctly loaded."
    360             )
    362 # Make sure LoRA is reset
    363 self.reset_lora()

ValueError: No adapter with name 'default' found. Please make sure that all specified adapters are correctly loaded.
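
Until these attributes are exposed again, a practical workaround is to go through `.base_model` directly, which (as the snippets above show) still carries the embedding API. A minimal, self-contained sketch; it assumes `set_active_embeddings` is available on the base model, as the issue title describes:

```python
from transformers import AutoAdapterModel

# Workaround sketch: access the embedding API on the underlying base model.
model = AutoAdapterModel.from_pretrained('xlm-roberta-base')

print(model.base_model.loaded_embeddings)   # {'default': Embedding(250002, 768, padding_idx=1)}
print(model.base_model.active_embeddings)   # 'default'

# Switch the active embedding by name (assumed to exist on the base model).
model.base_model.set_active_embeddings('default')
```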

Expected behavior

As specified in the documentation, `.loaded_embeddings`, `.active_embeddings`, and `.set_active_embeddings` should be available on the top-level model class, or at the very least exposed in the same way as `.add_embeddings`. They are currently reachable only through `.base_model`, which is not ideal.
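
For illustration only, such an exposure could be as simple as delegating from the model class to `.base_model`; the mixin below is a hypothetical sketch, not the library's actual implementation:

```python
# Hypothetical sketch: forward the embedding API from the top-level model
# class to its base model, mirroring what .add_embeddings already offers.
class EmbeddingForwardingMixin:
    @property
    def loaded_embeddings(self):
        # dict of embeddings loaded on the underlying base model
        return self.base_model.loaded_embeddings

    @property
    def active_embeddings(self):
        # name of the embedding currently in use
        return self.base_model.active_embeddings

    def set_active_embeddings(self, name):
        # activate the embedding with the given name on the base model
        self.base_model.set_active_embeddings(name)
```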

calpt commented 2 years ago

Thanks for reporting, this is indeed an unintended breaking change recently introduced in the master branch. It should be reverted with the merge of #386.
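
Once that revert lands, the calls from the report should presumably work on the top-level class again, for example:

```python
from transformers import AutoAdapterModel

model = AutoAdapterModel.from_pretrained('xlm-roberta-base')

# Expected after the revert (per the documented API): no .base_model needed.
model.loaded_embeddings
model.active_embeddings
model.set_active_embeddings('default')
```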

eugene-yang commented 2 years ago

Thanks, @calpt!