TonyXuQAQ closed this issue 1 year ago
Yes, you need to set the model's device map to 'auto', like this:
model = ValleyLlamaForCausalLM.from_pretrained(model_name, torch_dtype=torch.float16, device_map='auto')
Thanks for the prompt reply!
Hello, after setting the model's device map to 'auto', the following error occurred. How can I solve it? Thanks in advance. `Expected all tensors to be on the same device, but found at least two devices, cuda:1 and cuda:0!`
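A common cause of this error is that `device_map='auto'` shards the model across GPUs, while the input tensors are left on a different device than the layer that first consumes them. A minimal CPU-runnable sketch of the usual fix, using a stand-in `nn.Linear` instead of ValleyLlamaForCausalLM (which is assumed, not tested here): query the device of the model's parameters and move the inputs there before the forward pass.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a sharded ValleyLlamaForCausalLM.
model = nn.Linear(8, 8)

# With device_map='auto', different layers may live on different GPUs.
# Inputs must be moved to the device of the first layer they hit;
# next(model.parameters()).device is a common way to find it.
input_device = next(model.parameters()).device

x = torch.randn(2, 8)
x = x.to(input_device)  # avoids the cuda:0 vs cuda:1 mismatch

out = model(x)
print(out.shape)  # torch.Size([2, 8])
```

For a real Transformers model, the same idea applies to the tokenized inputs (e.g. `input_ids.to(model.device)` before `model.generate(...)`); whether this resolves the error in Valley specifically depends on how its vision tower is placed by the device map.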
As you mentioned before, I can train Valley-13b on 16 V100 GPUs with DeepSpeed ZeRO-3. I wonder whether I can run inference with Valley-13b on V100s. Does Valley support multi-GPU inference?