Closed: lwinhong closed this issue 2 months ago
Hi, I ran the code as follows, and FIM works correctly:
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# device = "cuda"  # the device to load the model onto
device = "cpu"  # the device to load the model onto

TOKENIZER = AutoTokenizer.from_pretrained("Qwen/CodeQwen1.5-7B")
MODEL = AutoModelForCausalLM.from_pretrained("Qwen/CodeQwen1.5-7B", device_map="auto").eval()

# Input text
input_text = """<fim_prefix>def quicksort(arr):
    if len(arr) <= 1:
        return arr
    <fim_suffix>
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)<fim_middle>"""

model_inputs = TOKENIZER([input_text], return_tensors="pt").to(device)

# Use `max_new_tokens` to control the maximum output length.
generated_ids = MODEL.generate(model_inputs.input_ids, max_new_tokens=512, do_sample=False)[0]
# The generated_ids include prompt_ids, so we only need to decode the tokens after prompt_ids.
output_text = TOKENIZER.decode(generated_ids[len(model_inputs.input_ids[0]):], skip_special_tokens=True)

print(f"Prompt: {input_text}\n\nGenerated text: {output_text}")
output:
Prompt: <fim_prefix>def quicksort(arr):
    if len(arr) <= 1:
        return arr
    <fim_suffix>
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)<fim_middle>

Generated text:
pivot = arr[len(arr) // 2]
The behavior you described does not occur here. Could you run the script above and check whether your results differ?
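As a quick sanity check, here is a minimal sketch (the variable names are illustrative, not part of the script above) that splices the generated middle back between the prefix and the suffix and runs the completed function:

# Minimal sketch: reassemble prefix + generated middle + suffix and verify
# that the completed quicksort actually sorts (names are illustrative).
prefix = """def quicksort(arr):
    if len(arr) <= 1:
        return arr
    """
suffix = """
    left = [x for x in arr if x < pivot]
    middle = [x for x in arr if x == pivot]
    right = [x for x in arr if x > pivot]
    return quicksort(left) + middle + quicksort(right)"""
generated_middle = "pivot = arr[len(arr) // 2]"

completed = prefix + generated_middle + suffix
namespace = {}
exec(completed, namespace)  # defines quicksort from the completed source
print(namespace["quicksort"]([3, 6, 1, 8, 2, 9]))  # expected: [1, 2, 3, 6, 8, 9]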
I was using: CodeQwen1.5-7B-Chat
Switching to CodeQwen1.5-7B-Chat and testing this example gives the same result as the example above. Could you please check?
@lwinhong Using the chat model for completion is not recommended. Currently, chat models are usually meant for Q&A, while base models are usually meant for continuation/completion.
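For reference, a minimal sketch of how the chat model is typically driven, assuming the standard transformers chat-template API (the prompt content is only an illustration):

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Qwen/CodeQwen1.5-7B-Chat")
model = AutoModelForCausalLM.from_pretrained("Qwen/CodeQwen1.5-7B-Chat", device_map="auto").eval()

# Chat models expect conversation-formatted input rather than raw FIM tokens.
messages = [{"role": "user", "content": "Write a quicksort function in Python."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

inputs = tokenizer([text], return_tensors="pt").to(model.device)
output_ids = model.generate(inputs.input_ids, max_new_tokens=512)[0]
print(tokenizer.decode(output_ids[len(inputs.input_ids[0]):], skip_special_tokens=True))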
I downloaded the base model, and the problem is gone for now.
Is there a model that can do both? Downloading the base model did indeed fix it for me.
The chat model has some completion capability, but the quality will suffer.
Understood. Thanks.
1. The code in the demo runs fine as-is.
2. When I replaced the  in the demo with the pivot = arr[len(arr) // 2] above it, the result was very bad.
Image 1: the demo
Image 2: after the modification