mantissaman opened 3 years ago
Yeah sorry we don't have M1 builds yet. The easiest thing to do is use x64 python via rosetta for development. We hope to have M1 builds available soon.
@mantissaman I did manage to make it work locally with Python 3.9 on M1 chip, with the following steps.
brew install rust   # needed to compile the native extension
python -m venv <path-to-your-env>
source <path-to-your-env>/bin/activate
pip install wheel
git clone https://github.com/osohq/oso.git
cd oso && make python-build
The last step builds OSO locally and installs it in the active Python environment by creating <path-to-your-env>/site-packages/oso.egg-link.
After that, I was able to import and use OSO. Note that I needed OSO only, not flask or sqlalchemy libs, but you should be able to install those manually once you have OSO core installed.
oso ❯ python
Python 3.9.2 (default, Mar 5 2021, 18:57:15)
[Clang 12.0.0 (clang-1200.0.32.29)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import platform
>>> platform.processor()
'arm'
>>> import oso
>>> o = oso.Oso()
>>> o.load_str('allow("foo", "bar", "baz");')
>>> o.is_allowed("foo", "bar", "baz")
True
>>>
I hope this helps.
@delicb - Many thanks. It worked like a charm.
Unable to develop with Go; either the linker fails or if I try forcing the architecture I see...
> env GOOS=darwin GOARCH=amd64 go build .
package oso-test
imports github.com/osohq/go-oso
imports github.com/osohq/go-oso/internal/ffi: build constraints exclude all Go files in $GOPATH/pkg/mod/github.com/osohq/go-oso@v0.14.0/internal/ffi
Hey @rcrowe. We aren't set up to build arm64 yet but you can build it yourself with a couple extra steps.
1) Clone the oso repo and build the library for arm64: cargo build --release
2) Clone the go-oso repo and replace the library at internal/ffi/native/macos/libpolar.a with the one you just built (found in target/release/libpolar.a).
3) In the go-oso library, replace the line in internal/ffi/ffi.go that says
// #cgo darwin,amd64 LDFLAGS: ${SRCDIR}/native/macos/libpolar.a -ldl -lm
with
// #cgo darwin,arm64 LDFLAGS: ${SRCDIR}/native/macos/libpolar.a -ldl -lm
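Step 3 above can be sketched as a one-line sed substitution. The snippet below demonstrates it on a local sample file (the file name is illustrative); in the real repo you would run the same substitution against go-oso/internal/ffi/ffi.go after copying in your freshly built libpolar.a:

```shell
# Sketch: flip the cgo build constraint from darwin,amd64 to darwin,arm64.
# Demonstrated on a sample file so nothing outside this directory is touched.
printf '// #cgo darwin,amd64 LDFLAGS: ${SRCDIR}/native/macos/libpolar.a -ldl -lm\n' > ffi.go.sample
sed 's/darwin,amd64/darwin,arm64/' ffi.go.sample
```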
Thanks @saolsen, that worked locally 🙌🏻
What would need to happen in order to support this within the various languages?

The macos_libs job would have to run rustup target add aarch64-apple-darwin and cargo build --target aarch64-apple-darwin --release, then copy the resulting libpolar.a, with the target in the filename, into each language. We are planning to include arm builds in the official release, we just haven't gotten to it yet. The way you would have to build it yourself is a little bit different for every language.
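Loosely, the cross-build described above would look like the sketch below. The rustup and cargo invocations are shown as comments since they need a macOS Rust toolchain; the exact output filename is an assumption based on "the target in the filename":

```shell
# Cross-building the polar core for Apple Silicon would roughly be:
#   rustup target add aarch64-apple-darwin
#   cargo build --target aarch64-apple-darwin --release
# The static library then carries the target triple in its filename, so the
# x86_64 and arm64 copies can ship side by side in each language package.
target=aarch64-apple-darwin
echo "libpolar-$target.a"
```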
Leaving a comment to indicate demand on this feature. Currently we have to support two separate paths to deploy a project :)
We might be able to take some inspiration from https://github.com/rust-lang/rust/pull/75991.
Is there a planned release date for a version with ARM wheels?
We're planning to include a macOS ARM wheel for our python library in our release next week. Other languages will be a bit later but hopefully not too much longer.
Awesome! Thanks. Will that include Linux ARM as well? I'm using the Python library inside Docker on an M1.
A Linux ARM build would be great - basically something that can run in the official Python ARM64 containers.
For anyone else who needs it I'm building docker containers with ARM support here.
And in further good news, the automation I set up to keep the containers up to date picked up the new release and automatically built new containers.
An update here! oso in Python now supports M1 processors. We don't yet have support for ARM Linux. Support for M1 in other languages and ARM Linux is coming! Our next priority is Go!
Just a note on this: the lack of linux/arm64 support (in Go) means I can't deploy (using CDK) to an arm64 Lambda from my M1 MacBook Pro build.
So, for others - if you see the error below, that's why:
imports github.com/osohq/go-oso/internal/ffi: build constraints exclude all Go files in /Users/home/go/src/github.com/org/project/vendor/github.com/osohq/go-oso/internal/ffi
@jlk - AWS Lambda supports launching from container images instead of zip files. As a workaround you can use the multi-py oso image as the base for your Lambda.
The Java version of oso also seems to have an M1-related issue.
When running with a JRE built for aarch64, I get the following error:
Caused by: java.lang.UnsatisfiedLinkError: could not get native definition for type `POINTER`, original error message follows: java.lang.UnsatisfiedLinkError: Unable to execute or load jffi binary stub from `/var/folders/2d/lkf35y614bx09d6cdpvz5pt40000gn/T/`. Set `TMPDIR` or Java property `java.io.tmpdir` to a read/write path that is not mounted "noexec".
/Users/fld33/fld33/field33/.ijwb/jffi12706078071003416507.dylib: dlopen(/Users/fld33/fld33/field33/.ijwb/jffi12706078071003416507.dylib, 0x0001): tried: '/Users/fld33/fld33/field33/.ijwb/jffi12706078071003416507.dylib' (fat file, but missing compatible architecture (have 'i386,x86_64', need 'arm64e')), '/usr/lib/jffi12706078071003416507.dylib' (no such file)
at com.kenai.jffi.internal.StubLoader.tempLoadError(StubLoader.java:424)
at com.kenai.jffi.internal.StubLoader.loadFromJar(StubLoader.java:409)
at com.kenai.jffi.internal.StubLoader.load(StubLoader.java:278)
at com.kenai.jffi.internal.StubLoader.<clinit>(StubLoader.java:487)
at java.base/java.lang.Class.forName0(Native Method)
at java.base/java.lang.Class.forName(Class.java:398)
at com.kenai.jffi.Init.load(Init.java:68)
at com.kenai.jffi.Foreign$InstanceHolder.getInstanceHolder(Foreign.java:49)
at com.kenai.jffi.Foreign$InstanceHolder.<clinit>(Foreign.java:45)
at com.kenai.jffi.Foreign.getInstance(Foreign.java:103)
at com.kenai.jffi.Type$Builtin.lookupTypeInfo(Type.java:242)
at com.kenai.jffi.Type$Builtin.getTypeInfo(Type.java:237)
at com.kenai.jffi.Type.resolveSize(Type.java:155)
at com.kenai.jffi.Type.size(Type.java:138)
at jnr.ffi.provider.jffi.NativeRuntime$TypeDelegate.size(NativeRuntime.java:178)
at jnr.ffi.provider.AbstractRuntime.<init>(AbstractRuntime.java:48)
at jnr.ffi.provider.jffi.NativeRuntime.<init>(NativeRuntime.java:57)
at jnr.ffi.provider.jffi.NativeRuntime.<init>(NativeRuntime.java:41)
at jnr.ffi.provider.jffi.NativeRuntime$SingletonHolder.<clinit>(NativeRuntime.java:53)
at jnr.ffi.provider.jffi.NativeRuntime.getInstance(NativeRuntime.java:49)
at jnr.ffi.provider.jffi.Provider.<init>(Provider.java:29)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at java.base/java.lang.Class.newInstance(Class.java:584)
at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.getInstance(FFIProvider.java:68)
at jnr.ffi.provider.FFIProvider$SystemProviderSingletonHolder.<clinit>(FFIProvider.java:57)
at jnr.ffi.provider.FFIProvider.getSystemProvider(FFIProvider.java:35)
at jnr.ffi.LibraryLoader.create(LibraryLoader.java:74)
at com.osohq.oso.Ffi.<init>(Ffi.java:314)
at com.osohq.oso.Ffi.get(Ffi.java:322)
at com.osohq.oso.Polar.<init>(Polar.java:24)
at com.osohq.oso.Oso.<init>(Oso.java:11)
Running the same built .jar with a non-aarch64 JRE works, though with a big performance penalty.
Apart from that, when looking into the issue I also stumbled upon https://github.com/jnr/jnr-ffi/issues/257, which might be a hurdle for M1 support (but should by now be easily fixable by updating the dependency).
EDIT: I can confirm that bumping the jnr dependency fixes the problem.
@saolsen we've built linux/arm64 for Go. Any way we can add it to https://github.com/osohq/go-oso?
Hey @chrichts, do you mean you were able to do a manual local build for Go or that you've set up a GitHub Actions workflow to build it (the same way the rest of our libs are built)?
We're doing a manual local build. It's fine and it works, but every time we update oso we need to rerun it, so it would be much more helpful to have it included.
But I see now it looks like you're building it here: https://github.com/osohq/oso/blob/main/.github/workflows/release.yml. Can I open a PR to add support for linux/arm64?
I managed to fix our local build using this:
Dockerfile:
# Use manylinux_2_28 image
FROM quay.io/pypa/manylinux_2_28_aarch64 as build
RUN yum -y install gcc make patch zlib-devel bzip2 bzip2-devel readline-devel sqlite sqlite-devel openssl-devel tk-devel libffi-devel xz-devel jq
RUN curl https://pyenv.run | bash
ENV PATH="/root/.pyenv/bin:/root/.pyenv/shims:${PATH}"
RUN pyenv install 3.12
RUN pyenv global 3.12
RUN python --version
RUN pip install --upgrade pip setuptools wheel
RUN curl https://sh.rustup.rs -sSf | sh -s -- -y
ENV PATH="/root/.cargo/bin:${PATH}"
ARG OSO_VERSION
WORKDIR /opt
RUN git clone --depth 1 --branch "v$OSO_VERSION" https://github.com/osohq/oso.git /opt/oso
WORKDIR /opt/oso
RUN rm -Rf .git
RUN make python-build
RUN make python-test
WORKDIR /opt/oso/languages/python/oso
RUN python setup.py bdist_wheel
RUN auditwheel repair dist/*.whl -w wheelhouse/
WORKDIR /opt/oso
RUN make python-django-build
RUN make python-django-test
WORKDIR /opt/oso/languages/python/django-oso
RUN python setup.py bdist_wheel
# RUN auditwheel repair dist/*.whl -w wheelhouse/
FROM scratch as runtime
ARG OSO_VERSION
COPY --from=build /opt/oso/languages/python/oso/wheelhouse/oso-"$OSO_VERSION"-cp312-cp312-manylinux_2_28_aarch64.whl /
COPY --from=build /opt/oso/languages/python/django-oso/dist/django_oso-0.27.1-py3-none-any.whl /
Run Command:
DOCKER_BUILDKIT=1 docker build \
--output . \
--build-arg="OSO_VERSION=0.27.3" \
--target runtime \
.
While trying to run the OSO Python sample app, or trying to install oso (pip install oso) on Apple M1 architecture, I am getting the following error:
(venv) ➜ oso-python-quickstart git:(main) pip install -r requirements.txt
ERROR: Could not find a version that satisfies the requirement oso
ERROR: No matching distribution found for oso
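This is the same missing-wheel problem discussed throughout this thread: on Apple Silicon the interpreter reports an ARM platform, so pip skips releases that only ship x86_64 wheels. A quick way to see what your interpreter reports (on an M1 Mac this typically shows 'arm64' and a macosx arm64 platform tag, though the exact values depend on your setup):

```shell
# Print the machine architecture and the platform string used when matching
# wheels; an x86_64-only oso release cannot satisfy an arm64 platform, hence
# "No matching distribution found for oso".
python3 -c 'import platform, sysconfig; print(platform.machine()); print(sysconfig.get_platform())'
```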