Closed antranttu closed 2 years ago
Hi @antranttu -- Not yet, but if you're willing to work with us and try a few experimental builds we could probably get this working. We don't have an m1 mac to test this out on, but we were able to build an m1 version on an Intel mac. If you're willing to try this out, then do the following and let us know how it goes:
pip uninstall interpret-core
pip install -U https://dev.azure.com/ms/_apis/resources/Containers/16386841/wheel?itemPath=wheel%2Finterpret_core-0.2.7-py3-none-any.whl
-InterpretML team
Hi,
Yes, I'm willing to try the experimental builds to get M1 support going. However, I wasn't able to install using the command above because I don't have access; the request requires authentication.
Ok then, let's try this. Go to:
Download the bottom item labeled "wheel"
Extract interpret_core-0.2.7-py3-none-any.whl from the wheel.zip onto your computer. Then, run:
pip uninstall interpret-core
pip install -U interpret_core-0.2.7-py3-none-any.whl
Sorry, that's the wrong link. Will post the right one in a sec
Here is the correct download. I will also update the message above.
It's working! Thank you very much for the help.
One last question: is performance optimized compared to other platforms? I ran the example code on the Adult income dataset at https://interpret.ml/docs/ebm.html, and it took about a second on my Intel machine to execute but consistently ~3 seconds on the M1 Mac. I'm not sure whether the computation would get proportionally more expensive on a larger dataset.
Great! Wow, when does stuff like that ever work on the first try? :)
This build was a real hack just to test things out, so I'll refine the solution to work on both M1 and Intel Macs. Once we have that working, hopefully you can help us again by testing the finished product. In the meantime, it should work fine for you as long as you aren't seeing crashes.
On the speed question: I'd try it on bigger datasets before speculating about which chipset is faster. 1 second vs. 3 seconds could easily come down to things like the time to load and initialize libraries. The build above runs native ARM instructions and is not emulated.
-InterpretML
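One way to act on that advice is to time one-time startup separately from the actual training step, so library load/initialization cost doesn't get folded into the comparison. A generic stdlib-only sketch (the workloads here are stand-ins, not the real EBM fitting code):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed_seconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, time.perf_counter() - start

# Stand-in for library import / one-time initialization cost.
def startup():
    return sum(i * i for i in range(100_000))

# Stand-in for model fitting; timed separately so the startup
# cost does not contaminate the training measurement.
def fit():
    return sum(i * i for i in range(500_000))

_, startup_s = timed(startup)
_, fit_s = timed(fit)
print(f"startup: {startup_s:.4f}s, fit: {fit_s:.4f}s")
```

Only the `fit` measurement is meaningful for a cross-chipset comparison, and only once the dataset is large enough that fitting dominates.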
Thank you, I will try it on bigger datasets to check its efficiency. But yes, it's working great so far. Please don't hesitate to reach out if you need me for any testing in the future. Closing this for now.
Thanks again for the help.
Hi @antranttu -- The fully integrated m1 build is ready to be tested. If this works it will be in our next pypi release. Can you please try out the wheel here:
Hello, I got this error this time around:
dlopen(/Users/antran/miniforge3/envs/boosting/lib/python3.8/site-packages/interpret/glassbox/ebm/../../lib/lib_ebm_native_mac_arm.dylib, 0x0006): tried: '/Users/antran/miniforge3/envs/boosting/lib/python3.8/site-packages/interpret/glassbox/ebm/../../lib/lib_ebm_native_mac_arm.dylib' (no such file), '/Users/antran/miniforge3/envs/boosting/lib/python3.8/site-packages/interpret/lib/lib_ebm_native_mac_arm.dylib' (no such file)
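For anyone hitting the same dlopen error: a quick way to check whether the native library actually made it into the installed wheel is to list the shared libraries under the package's lib directory. A diagnostic sketch (the lib_ebm_native_mac_arm.dylib filename comes from the error above; the directory layout is an assumption):

```python
import os

def find_native_libs(package_dir):
    """Return shared-library filenames found under package_dir/lib."""
    lib_dir = os.path.join(package_dir, "lib")
    if not os.path.isdir(lib_dir):
        return []
    return sorted(
        name for name in os.listdir(lib_dir)
        if name.endswith((".dylib", ".so", ".dll"))
    )

# Hypothetical usage against an installed interpret package:
#   import interpret, os
#   print(find_native_libs(os.path.dirname(interpret.__file__)))
# If lib_ebm_native_mac_arm.dylib is absent from the output, the wheel
# shipped without the ARM build of the native library.
```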
Thanks @antranttu . I see the problem and will post a new build shortly.
Hi @antranttu -- This build should fix that issue:
Yup, working great this time! Do you mind me asking what the error was about?
Great to hear. Thanks for your help in testing this!
It was an issue in our build pipeline; see https://github.com/interpretml/interpret/commit/dfd773e3150418c9d5e24b3ce5bb8e9d6d031a25 . There is a new ARM-specific shared library that Python calls when it's running on an M1. That library was being built properly, but in the last step it wasn't being copied into the final wheel.
-InterpretML team
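The runtime selection described above, where Python picks an ARM-specific shared library when running on an M1, can be sketched roughly like this. The naming scheme follows the lib_ebm_native_mac_arm.dylib filename from the error message; the actual logic inside interpret may differ:

```python
import platform
import sys

def native_lib_name(base="lib_ebm_native"):
    """Pick a platform-specific shared-library filename.

    A sketch of runtime library selection; the real interpret package
    may use different filenames or dispatch logic.
    """
    machine = platform.machine().lower()
    if sys.platform == "darwin":
        suffix = "mac_arm.dylib" if machine == "arm64" else "mac.dylib"
    elif sys.platform.startswith("linux"):
        suffix = "linux_arm.so" if machine in ("arm64", "aarch64") else "linux.so"
    else:
        suffix = "win.dll"
    return f"{base}_{suffix}"

print(native_lib_name())
```

Because the filename is computed at runtime, a wheel that forgets to bundle one variant only fails when run on that platform, which is exactly why the bug surfaced on M1 but not on Intel.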
Hello @interpret-ml! I am glad this is fixed! Would it be possible to get this artefact? I am running on an M1 as well ;) Thanks
Hello @interpret-ml! I would like to try out EBM on M1 as well. I cannot see anything under the links shared above. Can you please share a working version with us again? Thanks in advance.
Hello,
I am not sure whether the artifacts for M1 support have been pushed to the official interpret package yet, but I can share the working version from the earlier discussion with the @interpret-ml team. Please follow the instructions above to install it.
Hi @interpret-ml, any ETA for an official release that supports M1?
Hello @antranttu ,
thanks a lot for sharing the ZIP. I tried it on my M1, but got the same error you did before:
dlopen(/Users/antran/miniforge3/envs/boosting/lib/python3.8/site-packages/interpret/glassbox/ebm/../../lib/lib_ebm_native_mac_arm.dylib, 0x0006): tried: '/Users/antran/miniforge3/envs/boosting/lib/python3.8/site-packages/interpret/glassbox/ebm/../../lib/lib_ebm_native_mac_arm.dylib' (no such file), '/Users/antran/miniforge3/envs/boosting/lib/python3.8/site-packages/interpret/lib/lib_ebm_native_mac_arm.dylib' (no such file)
I tried with
Any hint on how to identify which version is installed?
Thanks, Erwin
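On identifying the installed version: `importlib.metadata` (available since Python 3.8) reports a distribution's version without importing the package itself. A small stdlib sketch:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(dist_name):
    """Return the installed version of a distribution, or None if absent."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return None

# e.g. "0.2.7" for the experimental wheel from this thread,
# or None if interpret-core is not installed in this environment.
print(installed_version("interpret-core"))
```

Running `pip show interpret-core` from the shell gives the same information, along with the install location, which is useful for checking which environment the package landed in.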
@interpret-ml: I would like to second @markustoivonen's question: is there an ETA?
Hello @antranttu,
installed it in a fresh venv, and now it is working. Cool! Thanks for sharing the ZIP again. It would be good to have it on PyPI! Thanks @interpret-ml
Erwin
Also leaving a comment here to express interest in having this more easily available for use in M1 Macs. @interpret-ml
Hello,
I was trying to install and use the EBM classifier on my M1 computer but came across the following error:
I was wondering if interpret is supported on the M1 chip yet? Is there any workaround for the error? Thank you!