Hi @mberman84,
Unfortunately, we don't support macOS natively. Could you install our Docker image and experiment from there? The Docker docs explain how to run a terminal/Python/Jupyter inside an existing container image.
Downloading the entire model is not the correct behavior. Instead, the client should download only ~3 shards containing the embeddings and the LM head (the start and the end of the model) and skip the intermediate transformer blocks (i.e., most shards).
Hi @mberman84,
We've shipped native macOS support in #477: both macOS clients and servers (including ones using an Apple M1/M2 GPU) now work out of the box. You can try the latest version with:
pip install --upgrade git+https://github.com/bigscience-workshop/petals
Please ensure that you use Python 3.10+ (you can use Homebrew to install one: brew install python).
Please let me know if you run into any issues while installing or using it!
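As a quick sanity check after installing, a sketch like the following can confirm that the right interpreter and the new package are active in the current environment (it assumes petals exposes a __version__ attribute, which is worth verifying):

import sys
import petals  # fails here if the upgrade did not land in this environment

# The maintainers above ask for Python 3.10+, so check that first.
assert sys.version_info >= (3, 10), "Petals needs Python 3.10+"
print("Python", sys.version.split()[0], "- petals", petals.__version__)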
Hi @borzunov,
I'm genuinely happy you've been working on native macOS support; this is a game-changer for me. Thank you so much, this is really appreciated.
I ran "pip install --upgrade git+https://github.com/bigscience-workshop/petals" within a Python 3.11.4 virtual environment. But when trying to run the script I got the same error as before the upgrade:
RuntimeError:
An attempt has been made to start a new process before the
current process has finished its bootstrapping phase.
This probably means that you are not using fork to start your
child processes and you have forgotten to use the proper idiom
in the main module:
if __name__ == '__main__':
freeze_support()
...
The "freeze_support()" line can be omitted if the program
is not going to be frozen to produce an executable.
I'm running this on an Intel MacBook.
Do you happen to have any idea what I'm missing?
Hi @Spider-netizen,
Please move your script's code under the if __name__ == '__main__': condition, as the error message suggests. This fixes the error in my case.
You need that because, unlike Linux, macOS creates subprocesses via spawn (basically, it runs this script again with a different __name__), so the __name__ == "__main__" check is needed in most scripts that use multiprocessing.
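For example, a client script could be structured like the sketch below (the petals import and the model name inside main() are illustrative placeholders, not taken from this thread):

from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM  # assumed client class; check the README

def main():
    # All of the script's work lives here instead of at module top level.
    model_name = "meta-llama/Llama-2-70b-chat-hf"  # example model name
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoDistributedModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer("Hello,", return_tensors="pt")["input_ids"]
    print(tokenizer.decode(model.generate(inputs, max_new_tokens=5)[0]))

if __name__ == "__main__":
    # With spawn, macOS re-imports this file in each child process;
    # the guard keeps main() from running again in those children.
    main()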
When I have time, I'll check if it's possible to update the library to make it work without this condition, so it's not confusing for macOS users.
@Spider-netizen,
Just fixed it; it should now work even without if __name__ == "__main__": once you update:
pip install --upgrade git+https://github.com/learning-at-home/hivemind
pip install --upgrade git+https://github.com/bigscience-workshop/petals
I'm using macOS and got everything installed. Now I'm trying to run the code recommended in the docs:
However, when I try this, I get this error:
When I try to use the new Llama 2 model (I have access), it starts to download the entire model, which is obviously huge (15 × 9 GB). Is that the correct behavior?