Open QIANXUNZDL123 opened 7 months ago
Required prerequisites

System information

Problem description

I have checked pad_token_id in several places and it is 0 everywhere, but the error is still raised. Could someone help me figure out what is going wrong?

Reproducible example code

The Python snippets:

from transformers import AutoModelForCausalLM, AutoTokenizer
import os

os.environ["CUDA_VISIBLE_DEVICES"] = '0,1,2,3'

tokenizer = AutoTokenizer.from_pretrained("./", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("./", device_map="auto", trust_remote_code=True)
inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt')
inputs = inputs.to('cuda:2')
pred = model.generate(**inputs, max_new_tokens=16, repetition_penalty=1.1)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))

Command lines:

Extra dependencies:

Steps to reproduce:

1. 2. 3.
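For reference, below is a minimal sketch of how to check which pad_token_id generate() actually ends up using and how to override it explicitly. It assumes the same local "./" checkpoint as the snippet above and a recent transformers version; passing pad_token_id to generate() is a common workaround, not necessarily this repository's recommended fix.

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("./", device_map="auto", trust_remote_code=True)

# The stock generate() normally resolves pad_token_id from the model's generation
# config rather than from the tokenizer, so a correct value in tokenizer_config.json
# does not rule out a missing or different value here.
print("tokenizer.pad_token_id:", tokenizer.pad_token_id)
print("model.config.pad_token_id:", model.config.pad_token_id)
print("model.generation_config.pad_token_id:", model.generation_config.pad_token_id)

inputs = tokenizer('登鹳雀楼->王之涣\n夜雨寄北->', return_tensors='pt').to(model.device)

# Passing pad_token_id explicitly overrides whatever is missing or inconsistent in
# the config files; falling back to eos_token_id is the usual choice.
pad_id = tokenizer.pad_token_id if tokenizer.pad_token_id is not None else tokenizer.eos_token_id
pred = model.generate(**inputs, max_new_tokens=16, repetition_penalty=1.1, pad_token_id=pad_id)
print(tokenizer.decode(pred[0].cpu(), skip_special_tokens=True))

If the three printed values disagree, the mismatched config file (config.json or generation_config.json in the checkpoint directory) is the likely place to fix.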
Traceback
No response
Expected behavior
No response
Additional context
No response
Checklist