Closed anentropic closed 3 years ago
Following that:
File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/backends/rs.py", line 108, in _get_wgpu_lib_path
raise RuntimeError(f"Could not find WGPU library in {embedded_path}")
RuntimeError: Could not find WGPU library in /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources/libwgpu_native.dylib
Bypassing Poetry does not seem to help
% pip uninstall wgpu
Found existing installation: wgpu 0.3.0
Uninstalling wgpu-0.3.0:
Would remove:
/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/LICENSE
/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu-0.3.0.dist-info/*
/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/*
Proceed (y/n)? y
Successfully uninstalled wgpu-0.3.0
% pip install wgpu
Collecting wgpu
Downloading wgpu-0.3.0.tar.gz (65 kB)
|████████████████████████████████| 65 kB 2.0 MB/s
Requirement already satisfied: cffi>=1.10 in /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages (from wgpu) (1.14.5)
Requirement already satisfied: pycparser in /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages (from cffi>=1.10->wgpu) (2.20)
Building wheels for collected packages: wgpu
Building wheel for wgpu (setup.py) ... done
Created wheel for wgpu: filename=wgpu-0.3.0-py3-none-macosx_11_2_arm64.whl size=72787 sha256=ef24b3c659660431881b441788b1205d7b90eec89f534f95a2f04ff468a0c040
Stored in directory: /Users/anentropic/Library/Caches/pip/wheels/65/61/c8/553073b0633ba01220ede3798da3293ff8b054a5445ab2d218
Successfully built wgpu
Installing collected packages: wgpu
Successfully installed wgpu-0.3.0
% python experiments/pygfx_hexes.py
Traceback (most recent call last):
File "/Users/anentropic/Documents/Dev/Personal/python-hexblade/experiments/pygfx_hexes.py", line 6, in <module>
import wgpu.backends.rs # noqa: F401, Select Rust backend
File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/backends/rs.py", line 119, in <module>
_lib = ffi.dlopen(_get_wgpu_lib_path())
File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/backends/rs.py", line 108, in _get_wgpu_lib_path
raise RuntimeError(f"Could not find WGPU library in {embedded_path}")
RuntimeError: Could not find WGPU library in /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources/libwgpu_native.dylib
% ls /Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources
__init__.py __pycache__ webgpu.idl wgpu.h
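For anyone hitting the same wall, the failing lookup is easy to reproduce. This is a loose sketch of the kind of check wgpu's _get_wgpu_lib_path performs (not the actual implementation); a source-distribution install never ships the dylib, so the check necessarily fails:

```python
import os

def find_wgpu_lib(resources_dir, lib_name="libwgpu_native.dylib"):
    # Loose sketch of wgpu's lookup: the prebuilt binary is expected to sit
    # in the package's resources/ directory. Source-distribution installs
    # never contain it, which produces exactly the RuntimeError above.
    path = os.path.join(resources_dir, lib_name)
    if not os.path.isfile(path):
        raise RuntimeError(f"Could not find WGPU library in {path}")
    return path
```

The same check doubles as a quick sanity test after manually copying a self-built libwgpu_native.dylib into the resources directory.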
Looks like you installed the source distribution there, which indeed does not contain prebuilt binaries:
Downloading wgpu-0.3.0.tar.gz (65 kB)
(not a .whl file)
I'll investigate this further later on. The Python packaging peeps have been making breaking changes to pip and wheel lately, so we may have to adjust our setup.py file.
Thanks for your report.
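As an aside, whether pip gave you a wheel or an sdist is visible in the "Downloading" line of its output. A trivial helper (not part of wgpu-py) for telling the two apart:

```python
def is_sdist(download_name):
    # pip's "Downloading ..." line names the artifact it fetched:
    # .tar.gz / .zip means a source distribution (no prebuilt binaries),
    # while .whl means a wheel built for your platform tag.
    return download_name.endswith((".tar.gz", ".zip"))
```

Alternatively, pip install wgpu --only-binary :all: makes pip fail loudly when no wheel exists for your platform, instead of silently falling back to the sdist.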
Perhaps it's because I'm on an M1 mac...
I thought I'd try installing more manually. The first step is python download-wgpu-native.py
Which downloads https://github.com/gfx-rs/wgpu-native/releases/download/v0.7.0/wgpu-macos-64-release.zip
But then I get:
no suitable image found. Did find:
/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources/libwgpu_native.dylib: mach-o, but wrong architecture
/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources/libwgpu_native.dylib: mach-o, but wrong architecture. Additionally, ctypes.util.find_library() did not manage to locate a library called '/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/resources/libwgpu_native.dylib'
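The "wrong architecture" complaint can be confirmed without going through dyld: on macOS, file or lipo -info will report the architectures in a dylib. A portable sketch that reads the Mach-O header directly (thin 64-bit binaries only; the constants come from mach-o/loader.h and mach/machine.h):

```python
import struct

MH_MAGIC_64 = 0xFEEDFACF  # little-endian 64-bit Mach-O magic
CPU_TYPES = {0x01000007: "x86_64", 0x0100000C: "arm64"}

def macho_arch(path):
    # The Mach-O header starts with a 4-byte magic followed by a 4-byte
    # cputype; that is enough to tell an x86_64 dylib from an arm64 one.
    # "Fat" universal binaries start with 0xCAFEBABE and need more work.
    with open(path, "rb") as f:
        magic, cputype = struct.unpack("<II", f.read(8))
    if magic != MH_MAGIC_64:
        return "not a thin 64-bit Mach-O"
    return CPU_TYPES.get(cputype, hex(cputype))
```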
I then followed the instructions to build gfx-rs/wgpu-native from source:
https://github.com/gfx-rs/wgpu/wiki/Getting-Started#getting-started
I copied the wgpu.h and libwgpu_native.dylib that I built into my virtualenv's site-packages/wgpu/resources/ dir.
But now I get:
Traceback (most recent call last):
File "/Users/anentropic/Documents/Dev/Personal/python-hexblade/experiments/pygfx_hexes.py", line 6, in <module>
import wgpu.backends.rs # noqa: F401, Select Rust backend
File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/backends/rs.py", line 117, in <module>
ffi.cdef(_get_wgpu_h())
File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/api.py", line 112, in cdef
self._cdef(csource, override=override, packed=packed, pack=pack)
File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/api.py", line 126, in _cdef
self._parser.parse(csource, override=override, **options)
File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 389, in parse
self._internal_parse(csource)
File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 396, in _internal_parse
self._process_macros(macros)
File "/Users/anentropic/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 479, in _process_macros
raise CDefError(
cffi.CDefError: only supports one of the following syntax:
#define WGPUFeatures_DEPTH_CLAMPING ... (literally dot-dot-dot)
#define WGPUFeatures_DEPTH_CLAMPING NUMBER (with NUMBER an integer constant, decimal/hex/octal)
got:
#define WGPUFeatures_DEPTH_CLAMPING (uint64_t)1
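That parse failure is cffi's cdef() rejecting the cast in the #define. The real fix is using the header that matches the bindings, but for the record, such casts can also be stripped out before the source is handed to ffi.cdef(). A workaround sketch (not something wgpu-py actually does):

```python
import re

# Match "#define NAME (uintNN_t)" and keep everything but the cast.
_CAST = re.compile(r"^(#define\s+\w+\s+)\((?:u?int(?:8|16|32|64)_t)\)", re.M)

def strip_define_casts(header_src):
    # cffi's cdef() only accepts "#define NAME <integer>" (or "..."), so
    # rewrite "#define NAME (uint32_t)1" into "#define NAME 1".
    return _CAST.sub(r"\1", header_src)
```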
my mistake, I just downloaded the latest gfx-rs/wgpu-native but I should have grabbed 0.5.2 ... I'll try that
Ok, I built wgpu.h and libwgpu_native.dylib again from the v0.5.2 tag of wgpu-native.
Different error this time:
Traceback (most recent call last):
File "/Users/paul/Documents/Dev/Personal/python-hexblade/experiments/pygfx_hexes.py", line 6, in <module>
import wgpu.backends.rs # noqa: F401, Select Rust backend
File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/wgpu/backends/rs.py", line 117, in <module>
ffi.cdef(_get_wgpu_h())
File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/api.py", line 112, in cdef
self._cdef(csource, override=override, packed=packed, pack=pack)
File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/api.py", line 126, in _cdef
self._parser.parse(csource, override=override, **options)
File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 389, in parse
self._internal_parse(csource)
File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 396, in _internal_parse
self._process_macros(macros)
File "/Users/paul/Library/Caches/pypoetry/virtualenvs/hexblade-gi2zk5D5-py3.9/lib/python3.9/site-packages/cffi/cparser.py", line 479, in _process_macros
raise CDefError(
cffi.CDefError: only supports one of the following syntax:
#define WGPUBufferUsage_MAP_READ ... (literally dot-dot-dot)
#define WGPUBufferUsage_MAP_READ NUMBER (with NUMBER an integer constant, decimal/hex/octal)
got:
#define WGPUBufferUsage_MAP_READ (uint32_t)1
Well, there are multiple issues, but you trying this out on an M1 is definitely the biggest one :) It's a new build target and we have not looked into supporting it just yet. There are no CI runners available and I don't know anyone (except for you, of course!) who owns one, so it will be tricky indeed. If you managed to compile the right version of wgpu-native (0.5.2) yourself, it should all just work though
the last comment I posted was with 0.5.2 wgpu-native
wgpu-native itself works ok on the Rust side... I can run the make run-example-triangle test and a GLFW window opens up and draws a triangle
Did you also update the header file? The last error you posted is actually a failure by cffi to parse/load the wgpu.h header file. It seems to indicate a syntax error:
cffi.CDefError: only supports one of the following syntax:
#define WGPUBufferUsage_MAP_READ ... (literally dot-dot-dot)
#define WGPUBufferUsage_MAP_READ NUMBER (with NUMBER an integer constant, decimal/hex/octal)
got:
#define WGPUBufferUsage_MAP_READ (uint32_t)1
Here's what that last line looks like in my wgpu.h file:
$ cat wgpu.h | grep WGPUBufferUsage_MAP_READ
#define WGPUBufferUsage_MAP_READ 1
🤔 I was sure I had but I will double-check that
my grep returns:
#define WGPUBufferUsage_MAP_READ (uint32_t)1
I have double-checked and this is what is built by wgpu-native 0.5.2 on my machine
There are no CI runners available
See also https://github.com/actions/virtual-environments/issues/2187
Different error this time
The only thing I can think of is that there is a mismatch between the header-file and the compiled lib.
Otherwise, with a bit of luck things will be better once we've moved to a newer version of wgpu-native ...
I'm a bit out of my depth here, but let me know if I can help test or check anything else
I can confirm that, with the latest wgpu-py and wgpu-native built from source, everything is working on M1 with a couple of caveats: get_surface_id_from_canvas doesn't seem to be able to recognize the GLFWWindow. I replaced this hacking-around-libobjc-with-ctypes code with my own hacking-around-libobjc-with-ctypes code and everything now works. I haven't figured out the differences yet.
Would you mind listing the specific versions and code adjustments you've used? It might be just what we need to propose the appropriate changes upstream.
brew install llvm is yielding a 12.0.0_1 llvm, which lets you set LLVM_CONFIG_PATH=/opt/homebrew/Cellar/llvm/12.0.0_1/bin/llvm-config, and then bindgen='0.58.1' inside wgpu-native's Cargo.toml gets its cargo build to work.
@marcdownie would you be interested in submitting a PR to https://github.com/gfx-rs/wgpu-native for the required changes to get it compiling? Might also be the easiest way to discuss the get_surface_id_from_canvas fix.
Pull request for getting wgpu-native to build on M1 here: https://github.com/gfx-rs/wgpu-native/pull/114
My angry hacks to get get_surface_id_from_canvas working are harder to build a pull request from, not least of all because they're dependent on some random gist I found (https://gist.github.com/tlinnet/746a18788dd51f0827fb4840b9a8631c) which, at the very least, doesn't have a license.
I think the difference here comes down to calling methods (like contentView()). The existing get_surface_id_from_canvas is using a raw objc_msgSend, while I'm using a method returned from class_getInstanceMethod(objc_class, someSelector) and calling that.
Specifically, I have:
cv = ObjCInstance(window).contentView()
cv.setWantsLayer(True)
metal_layer = ObjCClass("CAMetalLayer").layer()
cv.setLayer(metal_layer)
To replace the existing:
content_view = objc.objc_msgSend(window, content_view_sel)
...
objc.objc_msgSend(content_view, set_wants_layer_sel, True)
ca_metal_layer_class = objc.objc_getClass(b"CAMetalLayer")
metal_layer = objc.objc_msgSend(ca_metal_layer_class, layer_sel)
objc.objc_msgSend(content_view, set_layer_sel, ctypes.c_void_p(metal_layer))
My code works where get_surface_id_from_canvas fails because, on M1, objc.objc_msgSend(window, responds_to_sel_sel, ctypes.c_void_p(content_view_sel)) isn't True when it clearly should be. objc.objc_msgSend is, famously, coupled directly to the ABI.
Meanwhile, I'm trying to inline everything from that gist so that I might actually have code you'd want in your repository, but it might take a few days for me to get to it.
Thanks for the effort! Would be great to get that code in, so others with an M1 can benefit as well :)
What is happening on this front? I'm a Mac M1 user and would like to use wgpu-py. Any progress?
Looks like the mentioned https://github.com/gfx-rs/wgpu-native/pull/114 has been merged, so I imagine that it's at least possible to build for M1 now, but it doesn't look like there are prebuilt binaries yet: https://github.com/gfx-rs/wgpu-native/releases/tag/v0.9.2.2
So I guess the next step would be adjusting CI over there to provide those. Then we can easily add support here as a next step.
I just created https://github.com/gfx-rs/wgpu-native/issues/138 to track step 1.
Step 1 is done. Once there is a release of wgpu-native, we can do what's needed in wgpu-py. @Korijn if you feel like starting with that, you could use the unofficial release on my fork.
What would need to be done in wgpu-py to add support? Just curious and eager to contribute
The challenge here is that there aren't any M1 machines available on CI yet. So you need to implement a mechanism to force a new job on CI to (1) download the arm64 binaries and (2) build a wheel with the arm64 ABI tag, even though the CI job is running on a regular macos machine.
You may need to adjust this little hack here: https://github.com/pygfx/wgpu-py/blob/ab7329b7294c6cba90bc4d84000a588546516080/setup.py#L17
It mostly comes down to massaging setuptools and bdist_wheel into doing what you need it to.
Hope this helps.
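For the record, the "massaging" can look roughly like this: override bdist_wheel's get_tag so the produced wheel carries the arm64 platform tag even when built on an x86_64 runner. Everything here (WGPU_FORCE_PLAT, apply_forced_tag) is made up for illustration; it's a sketch of the approach, not wgpu-py's actual setup.py:

```python
import os

# Hypothetical env var a CI job could set, e.g. to "macosx_11_0_arm64".
FORCED_PLAT = os.environ.get("WGPU_FORCE_PLAT")

def apply_forced_tag(tag, forced_plat):
    # A wheel tag is a (python, abi, platform) triple; swap the platform
    # part when a forced value is given, leave it alone otherwise.
    python, abi, plat = tag
    return (python, abi, forced_plat or plat)

try:
    from wheel.bdist_wheel import bdist_wheel as _bdist_wheel

    class bdist_wheel(_bdist_wheel):
        def get_tag(self):
            return apply_forced_tag(super().get_tag(), FORCED_PLAT)
except Exception:
    bdist_wheel = None  # 'wheel' not installed (or API moved); sdist only
```

setup() would then be passed cmdclass={"bdist_wheel": bdist_wheel} whenever the class is available.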
I think CI and GitHub yaml is a bit out of my depth. I updated the pointer to Almar's custom build. I know the pointer is right because I got an error saying it could not find the lib when using the wrong path.
triangle_glfw.py gives:
Expected wgpu-native version (0, 9, 2, 2) but got (0, 0, 0)
Traceback (most recent call last):
File "/Users/simon/Documents/Coding/wgpu-py/examples/triangle_glfw.py", line 20, in <module>
main(canvas)
File "/Users/simon/Documents/Coding/wgpu-py/examples/triangle.py", line 55, in main
adapter = wgpu.request_adapter(canvas=canvas, power_preference="high-performance")
File "/Users/simon/Documents/Coding/wgpu-py/wgpu/backends/rs.py", line 215, in request_adapter
surface_id = get_surface_id_from_canvas(canvas)
File "/Users/simon/Documents/Coding/wgpu-py/wgpu/backends/rs_helpers.py", line 107, in get_surface_id_from_canvas
raise RuntimeError("Received unidentified objective-c object.")
RuntimeError: Received unidentified objective-c object.
I think the first line is just a warning. A few comments up @marcdownie discussed his hacks to get get_surface_id_from_canvas working. That too is a bit out of my depth, I'm afraid, but it looks like the solution is in the gist he referenced.
Expected wgpu-native version (0, 9, 2, 2) but got (0, 0, 0)
Yes, this is just a warning. Is this with the binary from the release of my fork? If so, it looks like the version is not baked-in correctly ... not something that affects anything directly, but we'd need to fix that eventually.
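For context, the warning comes from a soft check on import (the check_expected_version that the tracebacks above show being imported from the rs backend). A minimal sketch of that kind of check:

```python
def version_mismatch_warning(expected, got):
    # Sketch of a soft version check: return a warning string on mismatch
    # rather than raising, since (0, 0, 0) merely means no version string
    # was baked into the binary at build time.
    if got != expected:
        return f"Expected wgpu-native version {expected} but got {got}"
    return None
```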
File "/Users/simon/Documents/Coding/wgpu-py/wgpu/backends/rs_helpers.py", line 107, in get_surface_id_from_canvas
raise RuntimeError("Received unidentified objective-c object.")
RuntimeError: Received unidentified objective-c object.
Why would this part be different for M1?
Is this with the binary from the release of my fork?
Yes I compiled the binary from your fork.
Why would this part be different for M1?
I'm not sure. In this comment @marcdownie discusses that "get_surface_id_from_canvas" fails on his M1: https://github.com/pygfx/wgpu-py/issues/136#issuecomment-861621333.
Looks like he solved the issue using code from the gist but had some issues making it into a pull request.
Is this with the binary from the release of my fork?
Yes I compiled the binary from your fork.
Ah, ok then it makes sense. I meant whether you downloaded it from the unofficial release of my fork :)
I'm not sure. In this comment @marcdownie discusses that "get_surface_id_from_canvas" fails on his M1 #136 (comment).
Right, I forgot how long this thread is :)
I understand that code like that feels overwhelming. I felt the same. I dug around in many (sometimes obscure) parts of the internet to piece everything together. But this is something that only someone with an M1 can do :)
I think CI and github yaml is a bit out of my depth.
I can handle that, and updating the pointer. I'll put it on a branch and open a draft PR.
Sorry for the wall of text, but I might have found a piece of the puzzle. Reading up on architectural differences between x86 and arm64 (M1) here: https://developer.apple.com/documentation/apple-silicon/addressing-architectural-differences-in-your-macos-code
The x86_64 and arm64 architectures have different calling conventions for variadic functions—functions with a variable number of parameters. On x86_64, the compiler treats fixed and variadic parameters the same, placing parameters in registers first and only using the stack when no more registers are available. On arm64, the compiler always places variadic parameters on the stack, regardless of whether registers are available. If you implement a function with fixed parameters, but redeclare it with variadic parameters, the mismatch causes unexpected behavior at runtime.
A bit further down on that page, discussing the function objc_msgSend that is being called by get_surface_id_from_canvas:
A function like objc_msgSend calls a method of an object, passing the parameters you supply to that method. Because objc_msgSend must support calls to any method, it accepts a variable list of parameters instead of fixed parameters. This usage of variable parameters changes how objc_msgSend calls your function, effectively redeclaring your method as a variadic function.
My guess is that the parameters for objc_msgSend are in registers on x86 and on the stack on arm64. The article goes on to explain how to solve it. I will give that a go tomorrow.
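Apple's suggested fix translates to ctypes roughly as follows: instead of calling the variadic objc_msgSend directly, cast it to a fixed prototype per call site, so arm64 places every argument where the callee expects it. A generic sketch (the helper name is made up; this is not wgpu-py's code):

```python
import ctypes

def with_signature(func, restype, argtypes):
    # Cast a foreign function pointer to a fixed (non-variadic) prototype.
    # On arm64 this matters for objc_msgSend: variadic arguments go on the
    # stack while fixed ones go in registers, so the declared signature
    # must match how the callee actually reads its parameters.
    proto = ctypes.CFUNCTYPE(restype, *argtypes)
    return ctypes.cast(func, proto)

# On macOS, each message send would then get its own prototype (sketch):
#   objc = ctypes.CDLL("/usr/lib/libobjc.dylib")
#   send = with_signature(objc.objc_msgSend, ctypes.c_void_p,
#                         [ctypes.c_void_p, ctypes.c_void_p])
#   content_view = send(window, content_view_sel)
```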
Solved it by using the code in the gist mentioned by @marcdownie. The examples run. But as I don't understand how the code in the gist actually works, I can't really make a pull request out of it. Licensing is also an issue.
I've had my eye on the easily pip-installable https://github.com/beeware/rubicon-objc as an (almost) drop-in replacement for that random gist we've both now used with success, but I haven't quite gotten around to it. How do we feel about adding a dependency?
Given a little:
from rubicon.objc.api import ObjCInstance, ObjCClass
Then I can successfully patch the Darwin if clause of our get_surface_id_from_canvas(canvas) with:
cv = ObjCInstance(window).contentView
cv.setWantsLayer(True)
metal_layer = ObjCClass("CAMetalLayer").layer()
cv.setLayer(metal_layer)
struct = ffi.new("WGPUSurfaceDescriptorFromMetalLayer *")
struct.layer = ffi.cast("void *", metal_layer.ptr.value)
struct.chain.sType = lib.WGPUSType_SurfaceDescriptorFromMetalLayer
I only have access to M1 Macs, but this at the very least seems like much less code. Let me know if it's worth a pull request.
How do we feel about adding a dependency?
I was hoping we could solve this without adding a dependency. However, since rubicon-objc is pure Python and has no dependencies of its own (AFAIK), it's a relatively "safe" dependency (we would not depend on its maintainers to push new packages for new Python versions, etc).
Let me know if it's worth a pull request.
It is :)
I still think it'd be interesting to obtain the surface id without a dependency. But this is something that can be picked up later. What I'd try (if I had an M1 Mac) is to trace the code path through that gist (or through rubicon-objc) and check what (ctypes) API calls are performed. I suspect you'd end up with very similar code to what we have, and we can then see what we were missing. That said, I may be underestimating this :)
I have added a list of things to fix at the top of this issue.
Another issue is the installation of cffi, which is what @anentropic got stuck on. @marcdownie, @SuperSimon81, how have you tackled that?
Short answer is that the cffi installed via conda install cffi installs the correct _cffi_backend.cpython-39-darwin.so (while pip install cffi, and thus python3 setup.py develop, gets you an x86_64 one).
There aren't any aarch64 releases of wgpu-native yet, are there? Specifically: there's nothing I put in my pull request for download-wgpu-native.py to make it download the correct binaries?
Actually, we're not there yet; we have problems with the latest cffi pulled in from conda:
Traceback (most recent call last):
File "/Users/marc/temp/wgpu_pull_req/wgpu-py/examples/triangle_glfw.py", line 11, in <module>
import wgpu.backends.rs # noqa: F401, Select Rust backend
File "/Users/marc/temp/wgpu_pull_req/wgpu-py/wgpu/backends/rs.py", line 44, in <module>
from .rs_ffi import ffi, lib, check_expected_version
File "/Users/marc/temp/wgpu_pull_req/wgpu-py/wgpu/backends/rs_ffi.py", line 108, in <module>
def _logger_callback(level, c_msg):
File "/Users/marc/miniforge3/envs/wgpu_work/lib/python3.9/site-packages/cffi/api.py", line 396, in callback_decorator_wrap
return self._backend.callback(cdecl, python_callable,
MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this. For more information, see https://cffi.readthedocs.io/en/latest/using.html#callbacks
Another issue is the installation of cffi, this is where @anentropic got stuck on. @marcdownie, @SuperSimon81 how have you tackled that?
I used conda like @marcdownie.
It's fine if that works in anaconda land but we also need to provide a solution here in pypi/pip country.
There aren't any aarch64 releases of wgpu-native yet are there? Specifically: there's nothing I put in my pull request for download-wgpu-native.py to make it download the correct binaries?
Yes there are! And I just merged #185 that makes the necessary updates here. So if you pull the latest main and run the download script you should be up :)
MemoryError: Cannot allocate write+execute memory for ffi.callback(). You might be running on a system that prevents this.
Also posting the error that @berendkleinhaneveld got, which is yet something different than @anentropic reported. This has been reported earlier but the proposed workaround does not seem to work:
E ImportError: dlopen(/Users/cg/Library/Caches/pypoetry/virtualenvs/wgpu-5bSf_T1V-py3.9/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure
MemoryError is spurious, but good to have searchable in GitHub. The root cause seems to be which macOS entitlements my conda python ends up having vs other, less isolated pythons floating around my system (including, confusingly, the conda base python / Python.framework).
The _ffi_prep_closure error was solved for me with conda's building of cffi from source or, equivalently, pip install cffi --no-binary :all:
I think we're a) slowly getting there b) really going to need that CI integration to feel great about this.
I'm now less convinced that we can work around the MemoryError: Cannot allocate write+execute memory for ffi.callback() error in the long (or even immediate) term. With a completely fresh-from-brew python3.9 environment, I can't get around this error from cffi with wgpu-py. This is known to cffi:
https://cffi.readthedocs.io/en/latest/using.html#callbacks-old-style
and is a deliberate consequence of Apple's arm64-binaries-must-be-signed policy. The only mystery right now is why my base conda env works with cffi at all.
Or we should (try to) make use of the new-style callback mechanism described there.
I'm now less convinced that we can work around the MemoryError: Cannot allocate write+execute memory for ffi.callback() error in the long (or even immediate) term. With a completely fresh-from-brew python3.9 environment, I can't get around this error from cffi with wgpu-py. This is known to cffi: https://cffi.readthedocs.io/en/latest/using.html#callbacks-old-style and is a deliberate consequence of Apple's arm64-binaries-must-be-signed policy. The only mystery right now is why my base conda env works with cffi at all.
From that page:
To fix the issue once and for all on the affected platforms, you need to refactor the involved code so that it no longer uses ffi.callback().
Looks like we are using it in two places... do you think it is possible to get rid of those usages @almarklein ?
Also posting the error that @berendkleinhaneveld got, which is yet something different than @anentropic reported. This has been reported earlier but the proposed workaround does not seem to work:
E ImportError: dlopen(/Users/cg/Library/Caches/pypoetry/virtualenvs/wgpu-5bSf_T1V-py3.9/lib/python3.9/site-packages/_cffi_backend.cpython-39-darwin.so, 2): Symbol not found: _ffi_prep_closure
This is (as described in the linked bug report) a linking error... so the installed cffi is attempting to load a dynamic library that doesn't exist. This is a problem caused by CFFI which they would need to solve in their build pipeline... you could try to run otool -L <binary> on _cffi_backend.cpython-39-darwin.so to see what it's trying to load.
Let's move the discussion about cffi to #190
do you think it is possible to get rid of those usages @almarklein ?
Worth a try!
Reference #194 which adds macos arm64 wheels to CI.
Now that #195 has also been merged, we only have problems with cffi remaining.
Has anyone tried the latest cffi pre-release yet?
The MemoryError is the only remaining issue at this point, and so far only @marcdownie has reported it with a conda environment... @berendkleinhaneveld has been able to run multiple examples on M1 now with all the changes. I'm inclined to say we should close this and #190, and release some new wheels to pypi! 🚀
Overview (edited by AK):
- Make get_surface_id_from_canvas work for M1. #195

I was getting this error:
https://nomodulenamed.com/m/wheel.pep425tags says:
But:
https://wheel.readthedocs.io/en/stable/news.html
...sounded possibly relevant. The previous version to that is 0.34.2, which was also mentioned on the nomodulenamed page.
And yes, this fixed it, and poetry install then succeeded.
I'm not totally sure where the problem originates: whether you need to pin wheel==0.34.2 in your setup.py, or some other part of the build machinery is to blame.
(Pinning wheel==0.34.2 in my pyproject.toml didn't help, because Poetry didn't know about the dependency relationship and tried to install wgpu first... so for now it's fixed by manually installing the old wheel version in my virtualenv.)
Just posting this in case it helps someone else.