pytorch-labs / gpt-fast

Simple and efficient PyTorch-native transformer text generation in <1000 LOC of Python.
BSD 3-Clause "New" or "Revised" License

Apple Silicon support? #1

Open caseybasichis opened 10 months ago

caseybasichis commented 10 months ago

Any plans to support Apple chips?

msaroufim commented 10 months ago

The code works on M1 with a few simple changes from CUDA calls to MPS calls, but there is no Inductor support for MPS yet, so the benefits from torch.compile are not as pronounced.
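A minimal sketch of the kind of change described above: selecting the MPS backend when CUDA is unavailable. This is an illustration, not the actual gpt-fast patch; the real edits touch every place the repo hard-codes `"cuda"`, and `pick_device` is a hypothetical helper name.

```python
import torch

def pick_device() -> torch.device:
    # Prefer CUDA, fall back to Apple's MPS backend, then CPU.
    if torch.cuda.is_available():
        return torch.device("cuda")
    if torch.backends.mps.is_available():
        return torch.device("mps")
    return torch.device("cpu")

device = pick_device()
x = torch.randn(4, 4, device=device)
y = x @ x  # runs on whichever backend was selected
print(device.type)
```

Note that even with tensors on `mps`, `torch.compile` falls back for kernel generation, since Inductor has no MPS codegen, which is why the speedups are smaller than on CUDA.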

mikekml commented 4 months ago

Any updates on MPS / Apple Silicon support?

yukiarimo commented 3 months ago

Anything new?