lucidrains / big-sleep

A simple command line tool for text to image generation, using OpenAI's CLIP and a BigGAN. Technique was originally created by https://twitter.com/advadnoun
MIT License
2.57k stars 304 forks

M1 "cuda" Support ? #131

Open rrigoni opened 2 years ago

rrigoni commented 2 years ago

Hi all,

I'm new and this might be a newbie question... Is there a way to emulate CUDA on Mac M1 GPUs? Or to run big-sleep on the CPU?

I have TensorFlow installed and running on my M1 Mac.

jnyheim commented 2 years ago

Same issue here.

I haven't been able to find a solution myself; hoping this will be resolved soon.

WASasquatch commented 2 years ago

Yeah, it's weird that this is CUDA-only, when over at deep-daze it can run on NVIDIA or AMD.

rrigoni commented 2 years ago

PyTorch is now compatible with M1 (see https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/). Are there any plans for big-sleep to support it as well, or will it remain CUDA-only?
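For anyone wanting to experiment, here is a minimal sketch of selecting the new backend (assuming PyTorch >= 1.12, which introduced "mps"; the helper `pick_device_name` is hypothetical and not part of big-sleep):

```python
# Minimal sketch: prefer CUDA, then Apple's MPS backend, then CPU.
# The decision logic is kept torch-free so it can run anywhere;
# the commented lines show how it would be wired up with PyTorch.
def pick_device_name(cuda_available: bool, mps_available: bool) -> str:
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"

# With PyTorch >= 1.12 installed, usage would look like:
#   import torch
#   name = pick_device_name(
#       torch.cuda.is_available(),
#       torch.backends.mps.is_available(),
#   )
#   device = torch.device(name)
```

big-sleep itself currently calls `.cuda()` directly, so supporting this would still require changes inside the library, not just a device flag.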

htoyryla commented 2 years ago

M1 support is definitely not ready yet. I tried it and could run some performance tests, but the "mps" backend did not work correctly, especially in the backward pass.

WASasquatch commented 2 years ago

Does M1 even have CUDA? I don't see it listed in any specification. I thought they had their own thing, called ALU cores, not CUDA cores.


htoyryla commented 2 years ago

> Does M1 even have CUDA? I don't see it listed in any specification. I thought they had their own thing, called ALU cores, not CUDA cores.

PyTorch calls it "mps"; it uses Metal Performance Shaders. See the link in rrigoni's comment.

WASasquatch commented 2 years ago

Yeah, I am aware of MPS. I just mean it isn't CUDA support; Apple GPUs use ALU cores. PyTorch seems to be providing wrapper support via MPS, which, as PyTorch states, is specific to each Apple GPU. That may be the issue people are encountering: the MPS support wasn't done for that particular GPU.


htoyryla commented 2 years ago

OK. I'd be surprised, though, if the people at PyTorch didn't know what they were doing when implementing accelerated tensor operations using MPS.

Seen from an application developer's point of view, what is available at the moment is not even alpha. I could not get a simple loss.backward() to work correctly: it runs, but does not converge like it does on CPU. Quite soon one also gets into "not implemented" territory. And finally, the speed improvement over CPU was not much.

PS. Now I get what you said... that there are several incompatible M1 GPU implementations?
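The kind of convergence sanity check described above can be sketched backend-agnostically. Here the gradient of f(x) = (x - 3)^2 is written out analytically so the loop runs anywhere; with PyTorch you would run the same loop once on "cpu" and once on "mps" (using autograd) and compare whether both reach the minimum:

```python
# Sketch of a backend sanity check: a few gradient-descent steps on
# f(x) = (x - 3)^2 should drive x toward the minimum at x = 3.
# A backend whose backward pass is broken would run without error
# but fail this kind of check.
def converges(lr: float = 0.1, steps: int = 50) -> bool:
    x = 0.0
    for _ in range(steps):
        grad = 2.0 * (x - 3.0)  # analytic d/dx of (x - 3)^2
        x -= lr * grad
    return abs(x - 3.0) < 1e-2  # did we approach the minimum?
```

Running the identical optimization on two devices and asserting both converge (and to roughly the same value) is a cheap way to catch the "runs but does not converge" failure mode without inspecting kernels.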

WASasquatch commented 2 years ago

At the same time, we have multi-million-dollar software developers who can't properly support modern CPUs in general because of their limited ability to test. I mean, look at the game industry and CPU support. Even if you follow the spec, unless you can test on that hardware, there may be literally game-breaking or software-breaking bugs.

Edit: A great example of this is Mortal Kombat X. It supported maybe a quarter of modern CPUs at launch. In fact, to this day I still can't launch the game, because it's also not forward-compatible with most newer CPUs like Ryzens. All three PCs I've had since its launch were incompatible. Unlike my friends on generic high-profile Intels, I had an AMD APU, an AMD CPU, and dual Xeons (server processors). Now I have a Ryzen 5 and still can't launch it. I'm sure if I had an AMD FX chipset, it'd work.


htoyryla commented 2 years ago

I'll get my coat. I was hoping to get an M1 Studio to replace one of my Linux boxes, but maybe it's not worthwhile to expect much.

Edit: Anyhow... I only wanted to point out that the M1 support is by no means ready. Even without knowing the details, a quick look at their issue tracker already appeared to show that.