loubnabnl / santacoder-finetuning

Fine-tune SantaCoder for Code/Text Generation.
Apache License 2.0

How to inference the model #25

Open athmanar opened 5 months ago

athmanar commented 5 months ago

Hi, I have a question. When we fine-tune locally, we produce checkpoints. If we wish to perform inference on these models, how can we do that?

 model_name = "checkpoint-9000"
 tokenizer = AutoTokenizer.from_pretrained(model_name)

OSError: Can't load tokenizer for 'checkpoint-9000'. If you were trying to load..

It appears the tokenizer does not get saved with the checkpoints.
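One likely workaround, sketched under the assumption that the checkpoint folder contains only model weights and trainer state: load the tokenizer from the base model the run started from and save it into the checkpoint directory, so that later `from_pretrained` calls on the folder succeed.

```python
from transformers import AutoTokenizer

# Assumption: the fine-tuning checkpoint folder lacks tokenizer files, so
# load the tokenizer from the base model the fine-tuning started from...
tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder")

# ...and save it into the checkpoint folder, so that
# AutoTokenizer.from_pretrained("checkpoint-9000") works afterwards.
tokenizer.save_pretrained("checkpoint-9000")
```

`save_pretrained` creates the directory if it does not exist, so this also works before the first checkpoint is written.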

athmanar commented 5 months ago

I could do something like this (here checkpoint-9000 is the folder where the checkpoint was stored during training):

>>> from transformers import AutoTokenizer, AutoModelForCausalLM
>>> tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder")  # not from checkpoint-9000
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
>>> model_name = "checkpoint-9000"
>>> model = AutoModelForCausalLM.from_pretrained(model_name).cuda()

Some weights of GPT2LMHeadModel were not initialized from the model checkpoint at checkpoint-9000 and are newly initialized: the `attn.c_attn.weight` and `attn.c_attn.bias` tensors for all 24 transformer layers (`transformer.h.0` through `transformer.h.23`). You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

I do not understand why this warning is given. We are training all the layers, correct?
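A likely explanation (an assumption on my part, not confirmed in this thread): SantaCoder is a custom architecture with multi-query attention, distributed as remote code on the Hub, so it must be loaded with `trust_remote_code=True`. Without that flag, `AutoModelForCausalLM` falls back to the stock `GPT2LMHeadModel`, whose standard `c_attn` projections do not exist in the fine-tuned checkpoint and therefore get freshly initialized. The layers were trained; they just do not match the wrong architecture. A sketch:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: SantaCoder ships a custom model class via the Hub, so
# trust_remote_code=True is required. Without it, AutoModelForCausalLM
# falls back to the stock GPT2LMHeadModel, whose standard c_attn
# projections are absent from the checkpoint and get randomly
# initialized, producing the warning above.
model_name = "bigcode/santacoder"  # replace with your local folder, e.g. "checkpoint-9000"
model = AutoModelForCausalLM.from_pretrained(model_name, trust_remote_code=True)

# The tokenizer is not saved with the checkpoint, so load it from the base model.
tokenizer = AutoTokenizer.from_pretrained("bigcode/santacoder")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0]))
```

If the warning disappears with `trust_remote_code=True` and the completions look sensible, the checkpoint weights were fine all along.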