ianrahman closed this issue 1 year ago
Seriously, I'm even getting stuck post-installation at `AttributeError: module 'numpy' has no attribute 'bool'. Did you mean: 'bool_'?`, so I know the author must have something set up that isn't covered in the generalized guide found here: https://www.chrisjmendez.com/2022/12/02/launch-mac-on-aws-ec2/
What I've got so far:
```shell
## Check for the presence of brew and install it if missing -- not doing that here,
## since most of y'all already have Homebrew on your Mac.
# If you're not on an M1, don't use `arch -arm64`.
arch -arm64 brew install anaconda
conda create -n coreml_stable_diffusion python=3.8 -y
conda activate coreml_stable_diffusion
arch -arm64 brew reinstall rustup-init
rustup toolchain install nightly
rustup default nightly
pip3 install numpy==1.23.1 setuptools scipy coremltools transformers torch==1.12.1
```
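The two comments at the top of that script (check for Homebrew, skip `arch -arm64` off Apple Silicon) could be made concrete with something like this sketch. The `brew_cmd` helper is hypothetical, not part of any script in this repo; it assumes `uname -m` reports `arm64` on Apple Silicon:

```shell
#!/bin/sh
# Hypothetical helper: pick the right brew invocation for this machine.
# $1 is the machine architecture, e.g. the output of `uname -m`.
brew_cmd() {
  if [ "$1" = "arm64" ]; then
    # Apple Silicon: force the native arm64 slice of brew
    echo "arch -arm64 brew"
  else
    echo "brew"
  fi
}

brew_cmd "$(uname -m)"
```

On an M1/M2 Mac this prints `arch -arm64 brew`; elsewhere, just `brew`.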
I'm popping offline, but the above will get you to: `OSError: Can't load config for 'stabilityai/stable-diffusion-2-base'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'stabilityai/stable-diffusion-2-base' is the correct path to a directory containing a model_index.json file`. If anyone can add the fix for that to the install script, it should get people the rest of the way.
I got the same error as this after `pip3 install numpy==1.23` to get around the `bool_` error.
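For what it's worth, the `numpy==1.23.x` pin matters because the old `np.bool` alias was removed in numpy 1.24, which is exactly the `AttributeError` above. A tiny guard the install script could run before converting; the `check_numpy` helper is hypothetical, not from the repo:

```shell
#!/bin/sh
# Hypothetical guard: `np.bool` is gone in numpy >= 1.24, so anything but
# a 1.23.x install will hit the AttributeError during model conversion.
check_numpy() {
  # $1: installed numpy version, e.g. from
  #     python3 -c 'import numpy; print(numpy.__version__)'
  case "$1" in
    1.23.*) echo "ok" ;;
    *)      echo "pin needed: pip3 install 'numpy==1.23.1'" ;;
  esac
}
```

Usage would look like `check_numpy "$(python3 -c 'import numpy; print(numpy.__version__)')"` inside the activated conda env.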
Scrolling up a little, part of this error message is:
OSError: Token is required (`token=True`), but no token found. You need to provide a token or be logged in to Hugging Face with `huggingface-cli login` or `huggingface_hub.login`. See https://huggingface.co/settings/tokens.
After signing up and running `huggingface-cli login` with a token from https://huggingface.co/settings/tokens, you should be set. 🎉
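A setup script could check for the token up front instead of failing midway through a long run with the `OSError` above. This is a sketch under one assumption: the `huggingface-cli` of this era writes the token to `~/.huggingface/token` (newer versions use a different path), and the `token_status` helper is mine, not part of any real tool:

```shell
#!/bin/sh
# Hypothetical pre-flight check: warn about a missing Hugging Face token
# before starting the conversion, rather than erroring out partway through.
token_status() {
  # $1: path to the token file; -s is true if it exists and is non-empty
  if [ -s "$1" ]; then
    echo "logged in"
  else
    echo "run: huggingface-cli login"
  fi
}

token_status "$HOME/.huggingface/token"
```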
Are you referring to install instructions for Apple’s stable diffusion library? Please report issues installing that software to the upstream: https://github.com/apple/ml-stable-diffusion#-converting-models-to-core-ml
If you have a deterministic pathway to set up the developer environment, please open a PR here as well as a PR against upstream repo to improve both READMEs
It seems like we could avoid needing Python at all by downloading and using pre-built model files per these instructions: https://github.com/apple/ml-stable-diffusion#-using-ready-made-core-ml-models-from-hugging-face-hub
I'll look into adding a script to clone the models directly from Hugging Face the next time I work on this project.
I'm trying to use the pre-converted models using these steps... however I have nearly-zero Xcode knowledge.
```shell
git clone https://github.com/justjake/Gauss.git; cd Gauss
git clone https://github.com/justjake/ml-stable-diffusion.git
mkdir compiled-models; cd compiled-models
git lfs install  # or maybe `brew install git-lfs` -- I already had it on my system
git clone https://huggingface.co/apple/coreml-stable-diffusion-v1-4
git clone https://huggingface.co/apple/coreml-stable-diffusion-v1-5
git clone https://huggingface.co/apple/coreml-stable-diffusion-2-base
```
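Those three clones could be collapsed into a loop, which would also make it easy to add or drop models later. A sketch (the model list is just the ones mentioned in this thread, printed as a dry run; drop the `echo` to actually clone, which needs git-lfs as above):

```shell
#!/bin/sh
# Dry run: print the clone command for each pre-converted Core ML model.
models="coreml-stable-diffusion-v1-4 coreml-stable-diffusion-v1-5 coreml-stable-diffusion-2-base"
for m in $models; do
  echo "git clone https://huggingface.co/apple/$m"
done
```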
There's an `original/packages` folder within each of the clones above. (Or should it be `original/compiled`?)
I found this repo: https://github.com/huggingface/swift-coreml-diffusers
This I could clone and build with this tip. Perhaps it would be easiest to fork that repo and add additional features to the UI.
Q: What is stopping this project from being shipped pre-trained and pre-compiled as a zip or dmg?
I think I worked through almost all the steps; however, the last two `mv` commands seem to fail in the build-models.sh script:

```shell
+build:12> mv ../compiled-models/sd2-base/Unet.mlmodelc ../compiled-models/sd2-base
mv: rename ../compiled-models/sd2-base/Unet.mlmodelc to ../compiled-models/sd2-base/Unet.mlmodelc: No such file or directory
```

and I can't seem to see what previous line might have failed.....
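The `+build:12>` prefix suggests the script already runs with tracing (`set -x`); adding `set -e` would make it stop at the first failing command instead of running past it, so the failure point would be the last traced line. A tiny demonstration of the difference:

```shell
#!/bin/sh
# Without `set -e` a script runs past a failure; with it, execution stops,
# so the last command printed by the trace is the one that broke.
# (`|| true` keeps this demo itself from aborting when the inner sh fails.)
with_e=$(sh -c 'set -e; false; echo survived' 2>/dev/null) || true
without_e=$(sh -c 'false; echo survived' 2>/dev/null)
echo "set -e output:   '${with_e}'"      # empty: stopped at `false`
echo "plain sh output: '${without_e}'"   # 'survived': kept going
```

Running build-models.sh as `bash -ex build-models.sh` would give the same effect without editing the script.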
Q: What is stopping this project from being shipped pre-trained and pre-compiled as a zip or dmg?
A single model is larger than the GitHub Releases allowed file size, and I wanted to support multiple models as well; in total, all 3 models are 10 GB-ish. My thought was to release the models as zips split into 2 GB chunks and have the app download and reassemble them in the background.
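That split-and-reassemble idea can be sketched with standard tools. File names here are hypothetical, and a real release would use `split -b 2g`; tiny chunks keep the demo self-contained:

```shell
#!/bin/sh
# Sketch of the chunked-release idea described above.
workdir=$(mktemp -d) && cd "$workdir"
printf 'pretend this is a multi-gigabyte Core ML model archive\n' > model.zip

# Producer side: split the archive into fixed-size parts for upload
# (default suffixes are aa, ab, ...; a real release would use `-b 2g`).
split -b 512 model.zip model.zip.part-

# Consumer side: the app downloads the parts and reassembles them.
cat model.zip.part-* > reassembled.zip
cmp -s model.zip reassembled.zip && echo "reassembled archive matches the original"
```

Since `cat` restores the bytes exactly, a checksum of the reassembled file can be verified against one published with the release.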
I'm in the process of setting up my Apple Developer account; once that's done, I'll look into publishing on the App Store as an all-in-one package.
If you're here as an end-user looking to quickly run some stuff on your Mac today: unfortunately, this project is not ready for you yet. I'm happy to onboard developers who want to contribute improvements, but there's no published release for end users because it's not ready.
What I've got so far:

```shell
## Check for the presence of brew and install it if missing -- not doing that here,
## since most of y'all already have Homebrew on your Mac.
# If you're not on an M1, don't use `arch -arm64`.
arch -arm64 brew install anaconda
conda create -n coreml_stable_diffusion python=3.8 -y
conda activate coreml_stable_diffusion
arch -arm64 brew reinstall rustup-init
rustup toolchain install nightly
rustup default nightly
pip3 install numpy==1.23.1 setuptools scipy coremltools transformers torch==1.12.1 huggingface-hub
```
is enough to make this work, followed by

```shell
echo "go sign up for an api key at https://huggingface.co/settings/tokens"
# and then
huggingface-cli login
./build-models.sh
```
There seem to be a few gaps in the instructions, such as an assumption that the correct Python version is installed, or that Anaconda / Miniconda is installed at all.
Instructions should be updated to reflect setup steps starting from a fresh install of macOS, without assumptions as to what software a user has installed.
Alternatively, installation requirements could be handled via the build script.
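A minimal way the build script could handle its own requirements is a fail-fast prerequisite check at the top. This is a sketch; the `require` helper is hypothetical, and the tools checked are just the ones this thread mentions:

```shell
#!/bin/sh
# Hypothetical pre-flight block for build-models.sh: fail with a readable
# message if a prerequisite is missing, instead of dying mid-script.
require() {
  # $1: command to look for; $2: hint to print if it's missing
  if command -v "$1" >/dev/null 2>&1; then
    echo "found: $1"
  else
    echo "missing: $1 -- $2"
    return 1
  fi
}

require sh "part of every macOS install"
require conda "install Anaconda/Miniconda first" || true
```

The same pattern could cover `brew`, `rustup`, and `git-lfs`, with the script offering to install whatever is missing.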