Ah, sorry for that, the code is a bit dumb. You see the error messages because the converter expects checkpoints to contain identifiers for compiled model weights, but the checkpoint is saved without these. If you run the load_local_model.py script with impl.compile_torch=False, it should work.
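If disabling compilation isn't an option, here is a minimal workaround sketch (my own assumption, not a utility from the repo): a torch.compile'd model is wrapped in an OptimizedModule, which namespaces every parameter under an "_orig_mod." prefix, so a checkpoint saved without that prefix can be remapped by hand before loading:

```python
import torch

# Sketch: remap a plain (uncompiled) checkpoint so it loads into a
# torch.compile'd model. "checkpoint.pth" and `compiled_model` are
# placeholders for your own checkpoint path and compiled model object.
state_dict = torch.load("checkpoint.pth", map_location="cpu")
remapped = {f"_orig_mod.{k}": v for k, v in state_dict.items()}
compiled_model.load_state_dict(remapped)
```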
No problem - and I think you had already included those instructions somewhere, I just overlooked them. So that is on me.
I can close this issue for now, but one quick question: I am presuming that, with the modified architecture, it will not currently work with the Auto classes from the HF library? I mainly ask because the model card for your model (https://huggingface.co/JonasGeiping/crammed-bert) implies you can.
Guessing part of the model card was autogenerated?
For now, to reload a model using the crammed-bert architecture, we need to use the codebase provided here?
Thanks again for the help and great repo.
You can (if everything works correctly), if you import the cramming package first, as shown in the documentation. It will register the model as an additional AutoModelForMaskedLM.
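For reference, the flow described above looks roughly like this; importing the package is the step that does the registration:

```python
import cramming  # noqa: F401 -- needed only for its registration side effect
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("JonasGeiping/crammed-bert")
model = AutoModelForMaskedLM.from_pretrained("JonasGeiping/crammed-bert")
```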
You are absolutely correct - and apologies for not noticing that and probably wasting your time :). Thanks again
No problem
Sorry to come back again, but I have now run into one (I presume final) issue. When you want to use the AutoModelForSequenceClassification class as defined by the cramming library, but load a model that was pre-trained with the MLM objective, it does not seem to allow adjusting num_labels via the normal argument passing.
e.g.

import cramming  # registers the crammed-bert architecture with the Auto classes
from transformers import AutoModelForSequenceClassification

classifier_model = AutoModelForSequenceClassification.from_pretrained("JonasGeiping/crammed-bert", num_labels=2)
it ends up passing None to torch.nn.Linear, as it is actually looking for num_labels in the config file rather than in the passed arguments:
TypeError: empty() received an invalid combination of arguments - got (tuple, dtype=NoneType, device=NoneType), but expected one of:
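(I believe the failure is easy to reproduce in isolation; nn.Linear builds its weight tensor with torch.empty, and a None out_features ends up inside the shape tuple:)

```python
import torch

# Reproduces the TypeError above: out_features=None is placed in the
# shape tuple that nn.Linear hands to torch.empty.
torch.nn.Linear(768, None)
```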
Any thoughts? My main desire is to use it in a more straightforward fashion.
##########UPDATE#########
For now, my crude fix is to replace the num_labels derivation inside crammed_bert.py from

self.num_labels = self.cfg.num_labels

which uses the config created by the OmegaConf class, to:

self.num_labels = self.config.num_labels

which uses the config provided by the AutoModel class.

It works, but doesn't seem ideal.
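As far as I can tell, this works because num_labels is a standard attribute on HF's PretrainedConfig, so a num_labels kwarg to from_pretrained lands on self.config, while the hydra/OmegaConf cfg never sees it. A quick check of the mechanism:

```python
import cramming  # registers the architecture (see above)
from transformers import AutoConfig

# The num_labels kwarg is stored on the HF config object, which is why
# reading self.config.num_labels (rather than the hydra cfg) picks it up.
config = AutoConfig.from_pretrained("JonasGeiping/crammed-bert", num_labels=2)
print(config.num_labels)  # -> 2
```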
Hm, that seems like a reasonable fix for now. Really though, the whole translation between the hydra config that the model was originally trained with and the config that huggingface expects is not so ideal in the long run.
Sure, it's no problem really; my use case is quite specific and I need to move away from the hydra config, is all. It's great work and generally it meshes fine with huggingface.
Great work and lovely repo. However, I am failing to push to HF using the provided load_local_model.py script.
I have a private dataset, and ran the pre-training script on it successfully.
It trained fine and saved fine.
But when running load_local_model.py (I just want to try pushing to the hub, for instance), I get a whole lot of missing keys when trying to load the state dict:
RuntimeError: Error(s) in loading state_dict for OptimizedModule: Missing key(s) in state_dict: "_orig_mod.encoder.embedding.word_embedding.weight", "_orig_mod.encoder.embedding.pos_embedding.scale_factor", "_orig_mod.encoder.embedding.norm.weight", "_orig_mod.encoder.embedding.norm.bias", "_orig_mod.encoder.layers.0.norm1.weight",....
and so on.
Is there anything obvious I am missing when trying to re-load the model?
Another question: is there a straightforward way to convert the current model files into a format compatible with the HF transformers library, but locally rather than via the hub?
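(Naively, I would hope that once the model loads, the standard save_pretrained flow writes a transformers-compatible copy to disk without touching the hub; pure speculation on my part:)

```python
# Speculative sketch: standard HF serialization to a local directory,
# assuming `model` and `tokenizer` were loaded successfully as above.
model.save_pretrained("local_crammed_bert")
tokenizer.save_pretrained("local_crammed_bert")
```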
Any help would be much appreciated. Package info below. Python 3.10.