meta-llama / llama

Inference code for Llama models

Can't run without a GPU. #499

Open Uri-lab-beep opened 1 year ago

Uri-lab-beep commented 1 year ago

When I try to run 7B-chat without a GPU, it says: RuntimeError: ProcessGroupNCCL is only supported with GPUs, no GPUs found!

MDFARHYN commented 1 year ago

I got the same error even though I have a GPU installed.

raghu-007 commented 1 year ago

I have both an AMD and an Nvidia GPU installed in my system, but I get the same "no GPUs found" error message. I think it's a configuration issue: the system isn't properly set up to use the GPUs for computation.

raghu-007 commented 1 year ago

> I got the same error even though I have a GPU installed.

Yup, same here!

Uri-lab-beep commented 1 year ago

I don't have a GPU; it would be nice if someone made it work without one.

raghu-007 commented 1 year ago

@Uri-lab-beep You can try this: https://github.com/facebookresearch/llama/issues/534#issuecomment-1651094079
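For context, the workaround linked above boils down to initializing `torch.distributed` with the Gloo backend (which runs on CPU) instead of NCCL (which requires CUDA GPUs). A minimal sketch of that idea; `pick_backend` is a hypothetical helper, not part of the llama repo:

```python
def pick_backend(cuda_available: bool) -> str:
    """Choose a torch.distributed backend: NCCL needs GPUs, Gloo works on CPU."""
    return "nccl" if cuda_available else "gloo"

# In the llama example code this would replace the hard-coded "nccl", e.g.:
#   import torch
#   backend = pick_backend(torch.cuda.is_available())
#   torch.distributed.init_process_group(backend)
print(pick_backend(False))  # → gloo
```

For a fully CPU-only run you would also need to keep tensors on the CPU (i.e. drop the `.cuda()` calls), as the linked comment describes.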

raghu-007 commented 1 year ago

Please check this out: https://aws.amazon.com/blogs/machine-learning/llama-2-foundation-models-from-meta-are-now-available-in-amazon-sagemaker-jumpstart/

WuhanMonkey commented 12 months ago

Running LLMs on a CPU will be really slow and also requires a lot of RAM. If you don't have a local GPU available, please use a cloud solution such as the Azure model catalog, AWS SageMaker, or GCP.

raghu-007 commented 11 months ago

> Running LLMs on a CPU will be really slow and also requires a lot of RAM. If you don't have a local GPU available, please use a cloud solution such as the Azure model catalog, AWS SageMaker, or GCP.

AWS SageMaker runs well for me!