ashesfall opened this issue 2 years ago
Do you want to cross-compile from an x86 machine or compile natively on an M1?
I've got a branch where native compilation works, but you need to run the bazel build as the superuser due to some library discovery issue I've not figured out. https://github.com/tensorflow/java/issues/394#issuecomment-981722302
I don't think we know how to cross-compile it from an x86 Mac.
The goal was compiling directly on the M1.
And yes, I eventually found your old discussion and have succeeded.
When will this become more official?
When I or someone else figures out how to compile it without needing to run bazel as root, we'll merge it into master. We won't be able to produce official builds for it, due to a lack of appropriate build resources, unless we manage to figure out cross-compiling.
Okay. Please don't lose track of this ticket.
Oh, one last thing: is integration with tensorflow-metal on the roadmap as well?
We don't currently expose TF_LoadPluggableDeviceLibrary, which is the entry point for the pluggable device infrastructure. If we did, then I think you should be able to download the tensorflow-metal whl, unzip it, and then load it with that function. As far as I can tell, tensorflow-metal is closed source, so I don't think we'd be able to repackage it for Java.
Apple's docs do say "V1 TensorFlow Networks" are unsupported, but I'm not sure what they mean by that.
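For illustration only, here is a minimal Java sketch of what loading the plugin could look like if that entry point were exposed. The loadPluggableDeviceLibrary wrapper and the dylib name inside the wheel are assumptions; the C function TF_LoadPluggableDeviceLibrary, EagerSession, and TensorFlow.version() are real.

```java
import org.tensorflow.EagerSession;
import org.tensorflow.TensorFlow;

// Hypothetical sketch: TensorFlow Java does NOT expose TF_LoadPluggableDeviceLibrary
// today, so the wrapper referenced below does not exist. It only illustrates what a
// future API mirroring the C entry point might look like from user code.
public class MetalPluginSketch {
  public static void main(String[] args) {
    // Native plugin unpacked from the tensorflow-metal wheel; the exact file
    // name inside the wheel is an assumption.
    String plugin = "tensorflow-plugins/libmetal_plugin.dylib";

    // Hypothetical wrapper around TF_LoadPluggableDeviceLibrary(const char*, TF_Status*):
    // TensorFlow.loadPluggableDeviceLibrary(plugin);

    // If the plugin registered successfully, the Metal GPU would be visible to the
    // runtime and eligible for op placement in an eager session or graph.
    try (EagerSession session = EagerSession.create()) {
      System.out.println("TensorFlow " + TensorFlow.version());
    }
  }
}
```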
Yeah, it can't be included, but it would be great if I could load the pluggable device. You can close the ticket.
System information
Java version (java -version): 17
Build command: ./build.sh in core-api

When building core-api I can see from the logs it is targeting x86_64. I wish to target Aarch64.
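One quick sanity check, offered as an assumption about a common cause rather than a diagnosis: if the shell or JDK driving ./build.sh is an x86_64 build running under Rosetta, the toolchain will keep targeting x86_64 even on an M1. A small check of what the running JVM reports:

```java
// Prints "aarch64" on a native Apple Silicon JVM and "x86_64" on an Intel or
// Rosetta-translated JVM, which is one way a build on an M1 can end up targeting x86_64.
public class ArchCheck {
  public static void main(String[] args) {
    System.out.println("os.name = " + System.getProperty("os.name"));
    System.out.println("os.arch = " + System.getProperty("os.arch"));
  }
}
```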