Closed: Maddy12 closed this issue 2 years ago
My hack, based on the current version of transformers, is the following:
```python
tokenizer = AutoTokenizer.from_pretrained(model_name, cache_dir=args.model_path,
                                          add_special_tokens=args.add_special_tokens)
config = BertConfig.from_pretrained(model_name, output_hidden_states=True)
model = BertModel.from_pretrained(model_name, config=config)
```
and to no longer pass `add_special_tokens` in the `TextConverterDataset` function when calling:

```python
sentence_tokens_str = self.tokenizer.tokenize(sentence)  # , add_special_tokens=self.add_special_tokens)
```
I see the transformers version is now in requirements_frozen.txt. The hack may be useful if you want to update to a more recent version. Thanks!
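The version sensitivity here comes from a breaking change in how models return outputs: in transformers 3.x, `model(...)` returns a plain tuple (with `output_hidden_states=True`, the hidden states are the last element), while 4.x returns a `ModelOutput` with named fields such as `.hidden_states`. A minimal sketch of a helper that handles both shapes; the helper name and the stand-in objects below are illustrative, not part of this project:

```python
from types import SimpleNamespace


def extract_hidden_states(model_outputs):
    """Return the hidden states regardless of transformers version.

    transformers 3.x: forward() returns a plain tuple; with
    output_hidden_states=True the hidden states are the last element.
    transformers 4.x (return_dict=True): outputs expose .hidden_states.
    """
    if isinstance(model_outputs, tuple):
        return model_outputs[-1]
    return model_outputs.hidden_states


# Illustrative check with stand-in objects (no model download needed):
tuple_style = ("last_hidden", "pooled", ("h0", "h1", "h2"))     # 3.x-style output
dict_style = SimpleNamespace(hidden_states=("h0", "h1", "h2"))  # 4.x-style output

assert extract_hidden_states(tuple_style) == ("h0", "h1", "h2")
assert extract_hidden_states(dict_style) == ("h0", "h1", "h2")
```

A guard like this lets `precompute_text.py` keep working across versions instead of assuming one output shape.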
**Describe the bug**
When attempting to run `precompute_text.py`, there is an issue.

**To Reproduce**
**Screenshots**
The current output of `model_outputs` is a tuple:

**System Info:** transformers==3.4.0
**Additional context**
I am sure it is an issue with the versioning of `transformers`, so package information on what version is required would be helpful, as it is not indicated in the `requirements.txt`.
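Given the System Info above, one way to make the requirement explicit is to pin the known-working version in `requirements.txt`. This is a sketch based on the version reported in this issue; the maintainers may intend a different pin:

```text
# requirements.txt: pin the transformers version the code was written against
transformers==3.4.0
```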