OpenBMB / Eurus

Apache License 2.0

A800 single GPU: CUDA out of memory when running inference with Eurus-RM-7b #4

Closed MissQueen closed 7 months ago

MissQueen commented 7 months ago

My code looks like this:

```python
import json

import numpy as np
from tqdm import tqdm
from transformers import AutoModel, AutoTokenizer

def test(tokenizer, model, instruction, ans):
    # Eurus-RM-7b expects the "[INST] ... [/INST]" chat format.
    sentence = '[INST] ' + instruction + ' [/INST] ' + ans
    inputs = tokenizer(sentence, return_tensors="pt")
    input_ids = inputs.input_ids.cuda()
    attention_mask = inputs.attention_mask.cuda()
    reward = model(input_ids=input_ids, attention_mask=attention_mask).item()
    return reward

# model_path, file, and the output handle w are defined elsewhere.
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModel.from_pretrained(model_path, trust_remote_code=True).cuda()
for line in tqdm(open(file, 'r', encoding='utf-8').readlines()):
    js = json.loads(line.strip())
    instruction = js['instruction']
    outputs = js['outputs']
    rewards = []
    for output in outputs:
        rewards.append(test(tokenizer, model, instruction, output))
    w.write(json.dumps({
        'instruction': instruction,
        'input': '',
        'output': outputs[np.argmax(rewards)],
    }, ensure_ascii=False) + '\n')
    w.flush()
```

I am trying to select the answer with the highest score from multiple candidate answers, but it always raises a CUDA out of memory error. Is there something wrong with the code?

lifan-yuan commented 7 months ago

Hi,

Your code seems correct. Maybe you can try wrapping the forward pass in `torch.no_grad()`?