ostris / ai-toolkit

Various AI scripts. Mostly Stable Diffusion stuff.
MIT License

Does not work with M1 Mac #127

Open ghost opened 2 months ago

ghost commented 2 months ago

Yup, it doesn't work. Add support ASAP.

KodeurKubik commented 2 months ago

Have you tried setting the device to "mps"? I'm trying this right now, but I ran out of RAM, so I'm going to try again later on a more powerful Mac.

martintomov commented 2 months ago

hi @KodeurKubik

Have you tried setting the device to "mps"?

Setting the device to MPS isn't straightforward because the codebase hardcodes CUDA devices throughout. To use MPS, you'd need to implement a get_device helper, something like:

import torch

def get_device():
    # Prefer Apple's Metal backend, then CUDA, then fall back to CPU.
    if torch.backends.mps.is_available():
        return torch.device("mps")
    elif torch.cuda.is_available():
        return torch.device("cuda")
    else:
        return torch.device("cpu")

Then, you would need to edit all .py files in the jobs/ directory to use this new get_device function. You'll also need to modify the device logic in stable_diffusion_model.py, and possibly in (many) other files as well.
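For example, a typical call-site change might look like this (a sketch only; SomeJob and the import path are hypothetical, and the actual attribute names differ per file):

# Hypothetical call site in jobs/*.py -- the import path for the
# helper is an assumption, not the toolkit's actual layout.
from toolkit.devices import get_device

class SomeJob:
    def __init__(self):
        # was: self.device = torch.device("cuda:0")
        self.device = get_device()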

Additionally, the .yaml config needs to be updated from cuda:0 to mps.
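For instance (a sketch; the exact key path varies between the example configs):

config:
  process:
    - type: 'sd_trainer'
      # was: device: cuda:0
      device: mps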

I'm trying this right now, but I ran out of RAM

I'm experimenting with this in my fork of the repo. I have an M1 Max MacBook Pro with 64GB of RAM, which I believe should be enough to run it.

What have you tried so far, and how much RAM do you have on your machine? Maybe we can collaborate on this? https://github.com/martintomov/ai-toolkit/tree/mps-support

KodeurKubik commented 2 months ago

I just edited the value in the yaml config from cuda:0 to mps and ran out of RAM on my 16GB MBP. I haven't looked into the code yet, but it really seems like you are right.

I will try later this week (probably Monday or later, as I am not available before then) on a 32GB Mac. I haven't taken the time to try editing the code yet, but I am willing to help you with the fork.

Can we communicate through another platform like Discord or Telegram? GitHub issues are not the best way for me…

KodeurKubik commented 2 months ago

Message sent!

ghost commented 2 months ago

He's being lazy now and hasn't done any work. He's currently at the peak of his two minutes of fame. His fame will fade, and hopefully he will return to real life. Hopefully that will only happen after he does the work.

bghira commented 2 months ago

It will never work on an M1 Mac; MPS doesn't have bf16 support, and it doesn't have autocast or full autograd support either.
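A quick probe (a sketch, not part of the toolkit) shows what a given PyTorch build actually supports on MPS:

import torch

# Probe what this PyTorch build supports on Apple silicon.
print("MPS available:", torch.backends.mps.is_available())
try:
    torch.ones(2, dtype=torch.bfloat16, device="mps") * 2
    print("bf16 math on MPS: ok")
except Exception as e:
    print("bf16 on MPS failed:", e)
try:
    with torch.autocast(device_type="mps", dtype=torch.float16):
        torch.ones(2, device="mps") * 2
    print("autocast on MPS: ok")
except Exception as e:
    print("autocast on MPS failed:", e)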

You need to be less rude to project maintainers; Ostris doesn't deserve that.

ghost commented 2 months ago

@bghira He lives in the USA; I live in China.

ghost commented 2 months ago

@bghira Are you Chinese?

bghira commented 2 months ago

sorry, no one can understand you.

ghost commented 2 months ago

@bghira You are racist toward Chinese people. You all discriminate. I first thought you were also Chinese, but no.

ghost commented 2 months ago

Also, do not say "no one"; many of the top people in AI are Chinese! Look at all our research papers, too.

bghira commented 2 months ago

It has nothing to do with race; it is that you are using Chinese in an English project space, so no one can understand you. Obviously you know what I meant. You are still rude; John Cena could come here and speak Chinese and receive the same response. And it doesn't change how rude you are.

bghira commented 2 months ago

Wild, they deleted their whole GitHub.

KodeurKubik commented 2 months ago

Can we stay on topic, please?

bghira commented 2 months ago

When you tell people to shush because you personally don't like the noise, it amplifies the noise and does the opposite; now I am here explaining to you that asking us to stay on topic emitted yet another email to everyone watching the project. Instead, unsubscribe from the thread.

OmarMuhtaseb commented 1 month ago

Guys, what's the progress here?

I want to try running it on my M3, but from what I can see it won't work either, right?

@martintomov @KodeurKubik were you guys able to get it to work?

KodeurKubik commented 1 month ago

There are some functions that are not yet compatible with PyTorch on MPS, so we will have to wait for those before continuing.
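In the meantime, a commonly used workaround (whether it helps here is an assumption; unsupported ops just run on the CPU, slowly) is PyTorch's fallback for operators not yet implemented on MPS. It must be enabled before torch is first imported:

import os

# Fall back to CPU for any op not yet implemented on MPS.
# This must be set before the first `import torch`.
os.environ["PYTORCH_ENABLE_MPS_FALLBACK"] = "1"

import torch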

ahmetkca commented 1 month ago

It would really be awesome if we could fine-tune FLUX.1-dev locally on a Mac. I have an M3 Max with 128GB of unified memory and would really appreciate the effort.

sanctimon commented 1 month ago

Same here, M3 Max with 96GB of RAM. What would be even more awesome is if there were a way for it to make use of the neural cores!