christianjosef27 opened 11 months ago
I now have an idea why it does not work: I trained with a different version of transformers on Linux (4.35.2), whereas on Windows I have transformers==4.29.0, which might be the problem when loading the state_dict. (I believe transformers stopped saving the BERT position_ids buffer around version 4.31, so a checkpoint saved with 4.35.2 lacks the key that 4.29.0 still expects.)
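If the version mismatch is indeed the cause, the simplest fix is probably to install the same transformers version on Windows (pip install transformers==4.35.2). Alternatively, here is a minimal patching sketch, assuming the .state_dict file is a plain torch.save dump of the model's state_dict (the file name is illustrative):

```python
# Hypothetical workaround: re-add the position_ids buffer that newer
# transformers versions (I believe >= 4.31) no longer persist, so that
# transformers 4.29.0 can load the checkpoint without missing keys.
import torch

path = 'fast_lcf_atepc.state_dict'  # illustrative file name
state_dict = torch.load(path, map_location='cpu')

key = 'bert4global.embeddings.position_ids'
if key not in state_dict:
    # BERT's position_ids buffer is just [0, 1, ..., max_position_embeddings - 1]
    # with shape (1, max_position_embeddings); 512 is the BERT-base default.
    state_dict[key] = torch.arange(512).unsqueeze(0)

torch.save(state_dict, path)
```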
Version
pyabsa==2.3.1
torch==1.13.0
transformers==4.29.0
Describe the bug
I used to load my custom state_dict on my Windows system, and the loading procedure worked. However, I am now training on a Linux server for resource reasons. I trained a sample model and copied the whole folder containing the model files (.args, .config, .state_dict, .tokenizer) to my Windows system. Now I try to load that model in the same way as always, but I get errors (refer to the screenshot for details):
RuntimeError: Error(s) in loading state_dict for FAST_LCF_ATEPC: Missing key(s) in state_dict: "bert4global.embeddings.position_ids".
```python
if not hasattr(ATEPCModelList, self.model.__class__.__name__):
    raise KeyError(
        "The checkpoint you are loading is not from any ATEPC model."
    )
```
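For anyone hitting the same error, a quick diagnostic sketch to confirm the checkpoint really lacks the key (again assuming the .state_dict file is a plain torch.save dump; file name illustrative):

```python
# List the embedding-related keys the saved checkpoint actually contains.
import torch

sd = torch.load('fast_lcf_atepc.state_dict', map_location='cpu')
print('bert4global.embeddings.position_ids' in sd)  # False if saved with transformers >= 4.31
print([k for k in sd if 'embeddings' in k])         # the embedding keys that were saved
```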
Code To Reproduce

```python
aspect_extractor = ATEPC.AspectExtractor(
    'fast_lcf_atepc_custom_dataset_cdw_apcacc_75.0_apcf1_74.31_atef1_40.45',
    auto_device=True,  # False means load model on CPU
    cal_perplexity=True,
)
```
Expected behavior
I expect the program to load my custom checkpoint/saved state_dict.