Error:
Some weights of LlamaForCausalLM were not initialized from the model checkpoint at *path* and are newly initialized:
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
LlamaTokenizerFast(name_or_path='/data/mn/shibing624/MedicalGPT-1.6.3-231215/outputs/20240208_yi6B_tuluv2', vocab_size=64000, model_max_length=4096, is_fast=True, padding_side='left', truncation_side='right', special_tokens={'bos_token': '<|startoftext|>', 'eos_token': '<|endoftext|>', 'unk_token': '<unk>', 'pad_token': '<unk>'}, clean_up_tokenization_spaces=False), added_tokens_decoder={
0: AddedToken("<unk>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
1: AddedToken("<s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
2: AddedToken("</s>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
64000: AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
64001: AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
}
After full-parameter SFT based on the Yi-6B model, the inference output is empty. transformers version: 4.37.2.
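A possible cause, judging from the tokenizer dump above: the base vocabulary is 64000, but the added special tokens `<|startoftext|>` and `<|endoftext|>` sit at ids 64000 and 64001, so `len(tokenizer)` is 64002. If the model's embedding matrix was never resized to match before SFT, those token rows are randomly initialized (hence the warning), and a randomly initialized eos embedding can make generation terminate immediately, giving empty output. A minimal sketch of the size check (the function name is mine, for illustration only):

```python
def embedding_mismatch(embedding_rows: int, tokenizer_len: int) -> int:
    """Number of token ids that fall outside the checkpoint's embedding matrix.

    embedding_rows: model.get_input_embeddings().weight.shape[0]
    tokenizer_len:  len(tokenizer), i.e. base vocab plus added tokens
    """
    return max(0, tokenizer_len - embedding_rows)

# Values from the log above: vocab_size=64000, added tokens at ids 64000/64001.
print(embedding_mismatch(64000, 64002))  # 2 ids would hit newly initialized rows
```

If the mismatch is nonzero, calling `model.resize_token_embeddings(len(tokenizer))` before training (and re-running SFT) should make those rows trainable instead of random at inference time; this is a guess based on the log, not a confirmed diagnosis.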