EricFillion / happy-transformer

Happy Transformer makes it easy to fine-tune and perform inference with NLP Transformer models.
http://happytransformer.com
Apache License 2.0
517 stars · 66 forks

M1 GPU acceleration support #294

Closed · ThatStella7922 closed 1 year ago

ThatStella7922 commented 2 years ago

This is available in PyTorch now; you can check for it via torch.backends.mps.is_available().

https://pytorch.org/docs/stable/notes/mps.html

To use it, you just pass torch.device("mps") where you would pass torch.device("cuda") on an Nvidia GPU.
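
A minimal, self-contained sketch of that device-selection pattern (illustrative only, not happy-transformer's internals):

    import torch

    # Prefer Apple's Metal (MPS) backend, then CUDA, then CPU.
    if torch.backends.mps.is_available():
        device = torch.device("mps")
    elif torch.cuda.is_available():
        device = torch.device("cuda")
    else:
        device = torch.device("cpu")

    model = torch.nn.Linear(4, 2).to(device)  # model weights move to the device
    x = torch.randn(1, 4, device=device)      # inputs must live on the same device
    print(model(x))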

EricFillion commented 2 years ago

Thanks for the suggestion!

tinyfool commented 1 year ago

I tried to work around it by adding code like this:

        # route the model to Apple's Metal (MPS) backend when available
        if torch.has_mps:
            self._device = 'mps'
            self.model.to(self._device)

But when I installed and ran it, Python told me the model was on MPS while input_ids was still on the CPU. So I also changed self._pipeline = TextGenerationPipeline(model=self.model, tokenizer=self.tokenizer, device=device_number) to self._pipeline = TextGenerationPipeline(model=self.model, tokenizer=self.tokenizer, device='mps').

Then Python told me:

NotImplementedError: The operator 'aten::cumsum.out' is not currently implemented for the MPS device. If you want this op to be added in priority during the prototype phase of this feature, please comment on https://github.com/pytorch/pytorch/issues/77764.
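
A possible workaround while that op is unimplemented (untested here, and assuming PyTorch's opt-in CPU fallback for missing MPS ops) is to set PYTORCH_ENABLE_MPS_FALLBACK before torch is imported:

    import os

    # Must be set before importing torch; ops missing on MPS
    # (like aten::cumsum.out above) then fall back to the CPU.
    os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

    import torch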

EricFillion commented 1 year ago

Now supported with version 3.0.0. MPS is automatically detected and used.
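
A minimal usage sketch (assuming the documented HappyGeneration API); no manual device handling is needed, since the library now picks MPS or CUDA automatically:

    from happytransformer import HappyGeneration

    # Device selection (MPS on Apple Silicon, CUDA on Nvidia, else CPU)
    # happens inside the library as of 3.0.0; no .to() calls required.
    happy_gen = HappyGeneration("GPT2", "gpt2")
    result = happy_gen.generate_text("Artificial intelligence is ")
    print(result.text)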