I am looking into loading the model and the tokenizer after training on a custom dataset. After training, I am able to produce pytorch_model.bin, config.json, tokenizer_config.json, special_tokens_map.json, training_args.bin, and vocab.txt for every checkpoint saved.
Is there an example script showing how to load the saved checkpoints along with the tokenizer, similar to the example you provide here for your pre-trained models?
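Something along the lines of the sketch below is what I have in mind. This assumes the checkpoint directory can be passed to the library's `AutoTokenizer` / `AutoModel` classes the same way a pre-trained model name is; the checkpoint path is just a placeholder for whatever folder the files above were written to, and I'm not sure this is the intended approach:

```python
from transformers import AutoModel, AutoTokenizer

# Hypothetical path: the folder containing pytorch_model.bin, config.json,
# vocab.txt, etc. for one of the saved checkpoints.
checkpoint_dir = "output/checkpoint-1000"

# Load the tokenizer and model weights from the checkpoint directory,
# the same way one would load a pre-trained model by name.
tokenizer = AutoTokenizer.from_pretrained(checkpoint_dir)
model = AutoModel.from_pretrained(checkpoint_dir)

# Quick sanity check that the loaded model runs on a sample input.
inputs = tokenizer("Some text from my custom dataset", return_tensors="pt")
outputs = model(**inputs)
```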
Thanks for the awesome repo!