🐛 Bug

When calling the load_from_checkpoint function to load a model from a checkpoint, the hparams.yaml file located in the parent folder is not taken into account. For example, the pretrained_model setting in hparams.yaml has no effect.

To Reproduce
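Contents of hparams.yaml (a minimal sketch; only the pretrained_model line is taken from the setup described below, and the class_identifier value is just a placeholder):

```yaml
# Hypothetical minimal hparams.yaml; class_identifier is a placeholder,
# pretrained_model points at the local model directory described below.
class_identifier: regression_metric
pretrained_model: models/xlm-roberta-large
```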
The RoBERTa model is located in models/xlm-roberta-large, as indicated by pretrained_model, but an error is thrown because the loader still expects the model to be in xlm-roberta-large at the root. This gives the following error message:
```
    raise EnvironmentError(
OSError: Can't load tokenizer for 'xlm-roberta-large'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'xlm-roberta-large' is the correct path to a directory containing all relevant files for a XLMRobertaTokenizerFast tokenizer.
```
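This matches the usual transformers behaviour when a model identifier cannot be resolved: a call along the lines of the sketch below (not COMET's actual code) fails the same way, because the identifier stored inside the checkpoint is used instead of the local path from hparams.yaml.

```python
from transformers import XLMRobertaTokenizerFast

# "xlm-roberta-large" is the identifier stored inside the checkpoint;
# the local models/xlm-roberta-large directory is never consulted.
tokenizer = XLMRobertaTokenizerFast.from_pretrained("xlm-roberta-large")
```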
Expected behaviour
I would expect the pretrained_model parameter to be used to determine the location of the model. This could be achieved by passing hparams_file as an argument to the model_class.load_from_checkpoint call in models/__init__.py:
```python
from pathlib import Path

import yaml

# str2model and CometModel are already defined in comet/models/__init__.py.


def load_from_checkpoint(checkpoint_path: str) -> CometModel:
    """Loads models from a checkpoint path.

    Args:
        checkpoint_path (str): Path to a model checkpoint.

    Return:
        COMET model.
    """
    checkpoint_path = Path(checkpoint_path)

    if not checkpoint_path.is_file():
        raise Exception(f"Invalid checkpoint path: {checkpoint_path}")

    # hparams.yaml lives two levels above the checkpoint file, e.g.
    # <model_dir>/hparams.yaml next to <model_dir>/checkpoints/model.ckpt.
    parent_folder = checkpoint_path.parents[1]  # .parent.parent
    hparams_file = parent_folder / "hparams.yaml"

    if hparams_file.is_file():
        with open(hparams_file) as yaml_file:
            hparams = yaml.load(yaml_file.read(), Loader=yaml.FullLoader)
        model_class = str2model[hparams["class_identifier"]]
        # Passing hparams_file makes Lightning override the hyperparameters
        # stored inside the checkpoint with the ones from hparams.yaml, so
        # settings such as pretrained_model take effect.
        model = model_class.load_from_checkpoint(
            checkpoint_path,
            load_pretrained_weights=False,
            hparams_file=str(hparams_file),  # str() for Lightning versions that expect a string path
        )
        return model
    else:
        raise Exception(f"hparams.yaml file is missing from {parent_folder}!")
```
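With this change, loading a checkpoint would pick up the hyperparameters from the hparams.yaml next to the checkpoints folder, for example (paths are illustrative):

```python
# Assumes the layout models/my-metric/hparams.yaml and
# models/my-metric/checkpoints/model.ckpt.
model = load_from_checkpoint("models/my-metric/checkpoints/model.ckpt")
```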
Environment

- OS: Linux
- Packaging: pip
- Version: 2.0.1