You can exclude these with `tokenizer_outputs_to_remove=["token_type_ids"]`.
(This is also the silliest error in huggingface; I almost want to make a PR there to change this.)
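For context, a minimal sketch of why this flag helps, assuming `transformers` is installed (the variable names here are illustrative): BERT-style tokenizers such as `BAAI/bge-base-en-v1.5` emit a `token_type_ids` field that Llama's `generate()` does not accept.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("BAAI/bge-base-en-v1.5")
enc = tok("hello world", return_tensors="pt")
print(list(enc.keys()))  # ['input_ids', 'token_type_ids', 'attention_mask']

# HuggingFaceLLM(tokenizer_outputs_to_remove=["token_type_ids"]) drops that
# key before the inputs are passed on to model.generate(), avoiding the
# "unexpected keyword argument 'token_type_ids'" TypeError.
```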
Thank you so much!
I have another issue. In the tutorial, the initialization of the LLM is:
```python
llm = HuggingFaceLLM(
    model_name='meta-llama/Meta-Llama-3-8B-Instruct',
    model_kwargs={
        'token': hf_token,
        'torch_type': torch.bfloat16,
    },
    tokenizer_outputs_to_remove=["token_type_ids"],
    generate_kwargs={
        'do_sample': True,
        'temperature': 0.6,
        'top_p': 0.9,
    },
    tokenizer_name='BAAI/bge-base-en-v1.5',
    tokenizer_kwargs={'token': hf_token},
    stopping_ids=stopping_ids,
)
```
The line `'torch_type': torch.bfloat16` causes `TypeError: LlamaForCausalLM.__init__() got an unexpected keyword argument 'torch_type'`. I can comment out this line. The line `stopping_ids=stopping_ids` causes:

```
ValidationError: 1 validation error for HuggingFaceLLM
stopping_ids -> 0
  none is not an allowed value (type=type_error.none.not_allowed)
```

If I comment out this line, the error goes away. I was wondering, does this line matter? Thank you again for your kind help! @logan-markewich
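For reference, a sketch of what likely fixes both errors, assuming the Llama3 Cookbook's setup (`hf_token` being your Hugging Face token). In transformers the model kwarg is spelled `torch_dtype`, not `torch_type`; and the `None` at index 0 of `stopping_ids` most likely comes from building the list with a tokenizer that has no `eos_token_id` (the BERT-based `BAAI/bge-base-en-v1.5` is one such), so the stopping ids should be built from the Llama 3 tokenizer as in the cookbook:

```python
import torch
from transformers import AutoTokenizer
from llama_index.llms.huggingface import HuggingFaceLLM

# Build stopping ids from the Llama 3 tokenizer, which defines both
# eos_token_id and the <|eot_id|> special token; neither entry is None.
tokenizer = AutoTokenizer.from_pretrained(
    'meta-llama/Meta-Llama-3-8B-Instruct', token=hf_token
)
stopping_ids = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

llm = HuggingFaceLLM(
    model_name='meta-llama/Meta-Llama-3-8B-Instruct',
    model_kwargs={
        'token': hf_token,
        'torch_dtype': torch.bfloat16,  # correct spelling of the kwarg
    },
    tokenizer_outputs_to_remove=["token_type_ids"],
    generate_kwargs={
        'do_sample': True,
        'temperature': 0.6,
        'top_p': 0.9,
    },
    # Match the tokenizer to the generation model, as in the cookbook
    tokenizer_name='meta-llama/Meta-Llama-3-8B-Instruct',
    tokenizer_kwargs={'token': hf_token},
    stopping_ids=stopping_ids,
)
```

Without valid `stopping_ids`, generation may run past the `<|eot_id|>` turn delimiter, so the line does matter; the error is only about the `None` entry, not the parameter itself.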
Bug Description
Version
0.10.38
Steps to Reproduce
Following the Llama3 Cookbook. Then the error occurs.
Relevant Logs/Tracebacks
No response