UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see https://huggingface.co/docs/transformers/main_classes/text_generation )
warnings.warn(
input: 请提供三个管理时间的建议。 ("Please provide three suggestions for managing time.")
output
Official example:
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code=True is required because InternLM ships its own modeling code
tokenizer = AutoTokenizer.from_pretrained("/home/dev/model/internlm-chat-7b-8k", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("/home/dev/model/internlm-chat-7b-8k", trust_remote_code=True).cuda()
model = model.eval()

# chat() is a helper from InternLM's remote code; history carries prior turns
response, history = model.chat(tokenizer, "请提供三个管理时间的建议。", history=[])
print(response)
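The UserWarning above is raised because generation parameters were patched into the pretrained model's config; transformers now expects them in a separate generation configuration, as the warning's link describes. A minimal sketch of that approach (the sampling values here are assumptions for illustration, not taken from the original post):

```python
from transformers import GenerationConfig

# Build a generation configuration instead of mutating model.config.
# All parameter values below are assumed examples, not InternLM defaults.
gen_config = GenerationConfig(
    do_sample=True,
    temperature=0.8,
    top_p=0.8,
    max_new_tokens=512,
)

# Saving it next to the model weights writes generation_config.json, which
# from_pretrained() / generate() will then pick up automatically:
# gen_config.save_pretrained("/home/dev/model/internlm-chat-7b-8k")
print(gen_config.temperature)
```

Alternatively, the config can be passed per call via `model.generate(..., generation_config=gen_config)`, which avoids touching the checkpoint directory and should silence the deprecation warning.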