Facico / Chinese-Vicuna

Chinese-Vicuna: A Chinese Instruction-following LLaMA-based Model — a low-resource Chinese llama+lora solution, with a structure modeled on alpaca
https://github.com/Facico/Chinese-Vicuna
Apache License 2.0

Cannot reproduce the results shown in performance #62

Closed greedyint closed 1 year ago

greedyint commented 1 year ago

Hello, I tried running it, but I can't seem to get results for the Chinese questions — it's as if the LoRA weights weren't applied at all, and I don't know why.

Here are the parameters I changed for the run and the generation part of my code from generate.py: llama-7b, lorapath: ./lora-Vicuna/checkpoint-final

```python
newgeneration_config = GenerationConfig(
    temperature=0.1,
    top_p=0.75,
    top_k=40,
    num_beams=4,
    max_new_tokens=128,
    min_new_tokens=1,
    repetition_penalty=2.0,
)
```
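A repetition_penalty of 2.0 is quite aggressive. As a point of reference, the CTRL-style penalty that transformers applies divides positive logits (and multiplies negative ones) for every token that has already been generated, making re-emission much less likely. A minimal pure-Python sketch of that rule, with toy numbers rather than real model logits:

```python
def apply_repetition_penalty(logits, generated_ids, penalty):
    """CTRL-style repetition penalty: for each token id already generated,
    a positive logit is divided by `penalty` and a negative logit is
    multiplied by it, so repeated tokens become less probable."""
    out = list(logits)
    for tid in set(generated_ids):
        if out[tid] > 0:
            out[tid] /= penalty
        else:
            out[tid] *= penalty
    return out

# Toy example: token ids 0 and 1 were already generated.
logits = [2.0, -1.0, 0.5]
print(apply_repetition_penalty(logits, [0, 1], 2.0))  # [1.0, -2.0, 0.5]
```

With penalty=2.0 a repeated token's logit is halved (or doubled in magnitude when negative), which on a small model can noticeably distort short Chinese completions.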

```python
def evaluate2(instruction, input=None):
    prompt = generate_prompt(instruction, input)
    inputs = tokenizer(prompt, return_tensors="pt")
    input_ids = inputs["input_ids"].cuda()
    generation_output = model.generate(
        input_ids=input_ids,
        generation_config=newgeneration_config,
        return_dict_in_generate=True,
        output_scores=True,
        max_new_tokens=256,  # overrides the 128 set in newgeneration_config
    )
    for s in generation_output.sequences:
        output = tokenizer.decode(s)
        print("Response:", output.split("### Response:")[1].strip())
    # Note: this function prints the response itself and returns None.
```

```python
for instruction in [
    "Tell me about alpacas.",
    "能给我讲一段笑话吗",  # "Can you tell me a joke?"
    "我想和女朋友在北京约会,能推荐几个约会的地方吗",  # "I want to take my girlfriend on a date in Beijing; can you recommend some date spots?"
    "为给定的地点提供一些旅游建议。\n地点:上海",  # "Give some travel advice for the given place. Place: Shanghai"
    "为给定的地点提供一些旅游建议。\n地点:北京",  # "Give some travel advice for the given place. Place: Beijing"
]:
    print("Instruction:", instruction)
    print("Response:", evaluate2(instruction))  # evaluate2 prints internally and returns None
    print()
```
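One thing worth noting about the snippet above: evaluate2 prints the response inside the loop and returns None, so the outer `print("Response:", evaluate2(instruction))` will always print `Response: None` after the real output, which can look like generation failed. The extraction step itself just slices off everything after the "### Response:" marker. A self-contained sketch with a hypothetical decoded string:

```python
# Hypothetical decoded sequence in the alpaca prompt format; the strings
# here are illustrative, not real model output.
decoded = (
    "Below is an instruction that describes a task.\n\n"
    "### Instruction:\nTell me about alpacas.\n\n"
    "### Response:\nAlpacas are domesticated camelids from South America."
)

# Everything after the marker is treated as the model's answer.
answer = decoded.split("### Response:")[1].strip()
print(answer)  # Alpacas are domesticated camelids from South America.
```

If the decoded text contains no "### Response:" marker (e.g. the wrong prompt template was used), the `[1]` index raises an IndexError, so an empty-looking result usually means the prompt format and the LoRA's training format don't match.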

greedyint commented 1 year ago

The 13B model does produce results.... Does the 7B one simply not work at all?

Facico commented 1 year ago

Have you tried running our generate script without modifying it? The vast majority of our tests are based on 7B, so the 7B LoRA model definitely works.

greedyint commented 1 year ago

It works now~~~