Closed ubamrein closed 3 years ago
> Now I can create an xcframework again!

Does that mean that this issue can be closed?
Well, not really, since the iOS and macOS targets have two issues that I have seen so far:

- With catalyst and simulator builds, we have two platforms that match x86_64-apple-ios, and rustc's LLVM will create a "fat" binary containing both the catalyst and the simulator slice (the actual llvm_target should be x86_64-apple-ios-simulator).
- There is no way of setting an OS version target, which lets LLVM choose the lowest available (7.0). This breaks support for various newer Apple/Mac tools (as e.g. explained with xcframework). Since the platform is deprecated, it is not clear how long the fallbacks currently in place for older toolchains will keep working (e.g. normal Xcode builds).
So to summarize:

- The LLVM targets for ios/darwin need to specify the version (as in e.g. aarch64-apple-ios13.0); this should probably best be settable via a cargo option or an environment variable.
- Further, since Mac Catalyst is supported, the simulator target should set its environment to simulator to prevent LLVM from generating a "fat" binary.
I also encountered this issue when trying to put together an xcframework. @ubamrein I don't quite understand your workaround. Is there an example of how to pass that config during the build step? When I put that config in a JSON file and pass it as the `--target` during `cargo rustc` or `cargo build`, cargo simply tells me the target may not be installed.
You have to rebuild the standard library in order to use a custom target, as mentioned in https://doc.rust-lang.org/nightly/rustc/targets/custom.html. Write the above target spec into a file named `aarch64-apple-ios11.0.json`, modify the `llvm-target` line to indicate 11.0 (or your desired version), and then invoke your build like:

```sh
cargo +nightly build -Z build-std --target aarch64-apple-ios11.0.json
```
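If you don't want to hand-write the spec, a built-in target's JSON can be dumped on a nightly toolchain and patched. This is a hedged sketch: the `--print target-spec-json` flag is unstable, and the stand-in file below is a minimal placeholder for the real dumped spec, used only to illustrate the edit itself:

```shell
# On a real setup you would dump the built-in spec first:
#   rustc +nightly -Z unstable-options --print target-spec-json \
#       --target aarch64-apple-ios > aarch64-apple-ios11.0.json
# A minimal stand-in file demonstrates the llvm-target edit below.
cat > aarch64-apple-ios11.0.json <<'EOF'
{
  "llvm-target": "arm64-apple-ios"
}
EOF

# Append the deployment version to the triple (jq or a manual edit works too).
sed -i.bak 's/"llvm-target": "arm64-apple-ios"/"llvm-target": "arm64-apple-ios11.0"/' \
  aarch64-apple-ios11.0.json

grep '"llvm-target"' aarch64-apple-ios11.0.json
```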
I should note, however, that my resulting static library does not include the `LC_BUILD_VERSION` load command. It simply includes:

```
Load command 1
      cmd LC_VERSION_MIN_IPHONEOS
  cmdsize 16
  version 11.0
      sdk n/a
```
Nonetheless, it can be inserted into an xcframework using:

```sh
% xcodebuild -create-xcframework \
    -library target/aarch64-apple-ios11.0/debug/libfoo.a \
    -output Foo.xcframework
xcframework successfully written out to: /.../foo/Foo.xcframework
```
If you set the version high enough, it appears you do get the newer load command:

```
Load command 1
      cmd LC_BUILD_VERSION
  cmdsize 24
 platform 2
    minos 14.1
      sdk n/a
   ntools 0
```
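On macOS the quickest way to check which of the two load commands a library carries is `otool -l`. The pipeline below is a hedged sketch; the sample output it filters is written locally (copied from the dump above) so the grep step can be shown without a Mach-O binary at hand, and the library path in the comment is only an example:

```shell
# Real usage on macOS (example path, adjust to your build output):
#   otool -l target/aarch64-apple-ios11.0/debug/libfoo.a \
#     | grep -E 'LC_BUILD_VERSION|LC_VERSION_MIN'
# Sample otool output captured locally so the filtering can be demonstrated:
cat > loadcmds.txt <<'EOF'
Load command 1
      cmd LC_BUILD_VERSION
  cmdsize 24
 platform 2
    minos 14.1
      sdk n/a
   ntools 0
EOF

# If this prints LC_VERSION_MIN_IPHONEOS instead, the llvm-target did not
# carry a high enough deployment version.
grep -E 'LC_BUILD_VERSION|LC_VERSION_MIN' loadcmds.txt
```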
> If you set the version high enough, it appears you do get the newer load command: `Load command 1 cmd LC_BUILD_VERSION cmdsize 24 platform 2 minos 14.1 sdk n/a ntools 0`
Ah yes, sorry, my answer was edited multiple times, so this probably did not come across. Those load commands are emitted automatically for the newer llvm-targets (Xcode itself uses the same LLVM targets).
@ubamrein have you been able to archive an iOS app that includes an xcframework built this way? I'm getting:

```
ld: Invalid record for architecture arm64
```
For our CI we use the following script (note we also had problems with the new aarch64 simulator target, so we excluded that from the build process). The script essentially checks for a valid Rust installation on the CI, then builds the standard library for the new iOS target, generates C bindings with cbindgen, and finally combines headers and static lib into an xcframework. Hopefully this helps you :)
```bash
#!/bin/bash
# fail script if a command fails
set -e

# create temp dir
tmpdir=$(mktemp -d 2>/dev/null || mktemp -d -t 'mytmpdir')
echo "Created tempdir at $tmpdir"

function cleanup {
    rm -rf "$tmpdir"
    echo "Deleted temp working directory $tmpdir"
}
trap cleanup EXIT

if ! rustup toolchain list | grep -q "nightly"; then
    echo "install nightly toolchain"
    rustup toolchain install nightly
fi

# install rust-src for nightly (needed by -Z build-std)
rustup +nightly component add rust-src

swift_module_map() {
    echo 'module lib {'
    echo '    header "lib.h"'
    echo '    export *'
    echo '}'
}

echo "Building Architectures..."
XCFRAMEWORK_ARGS=""
for ARCH in "x86_64-apple-ios" "aarch64-apple-ios"
do
    COMMAND="cargo +nightly build --release -Z build-std=core,std,alloc --manifest-path rust/lib-ios/Cargo.toml --target rust/lib-ios/$ARCH.json --target-dir $tmpdir"
    echo $COMMAND
    $COMMAND
    cbindgen --config rust/lib-ios/cbingen.toml --crate lib-ios --output "$tmpdir/$ARCH/release/headers/lib.h" rust/lib-ios
    XCFRAMEWORK_ARGS="${XCFRAMEWORK_ARGS} -library $tmpdir/$ARCH/release/lib.a"
    XCFRAMEWORK_ARGS="${XCFRAMEWORK_ARGS} -headers $tmpdir/$ARCH/release/headers/"
    swift_module_map > "$tmpdir/$ARCH/release/headers/module.modulemap"
done

echo "Creating lib.xcframework..."
rm -rf ios/lib.xcframework
XCODEBUILDCOMMAND="xcodebuild -create-xcframework $XCFRAMEWORK_ARGS -output ios/lib.xcframework"
echo $XCODEBUILDCOMMAND
$XCODEBUILDCOMMAND
```
@ubamrein thanks for the script, it helped me debug. I am actually doing something pretty similar to you, however I'm building fat binaries with both `aarch64` and `x86_64` slices, and building for `macos`, `ios`, `ios-simulator`, and `ios-macabi`.
After ruling out most of the differences between our two approaches, I narrowed the issue down to the build profile. Trying to archive an iOS app with a debug/dev build of a Rust library packaged as an xcframework results in the `ld: Invalid record for architecture arm64` error. But switching to the release profile yields:

```
ld: could not reparse object file in bitcode bundle: 'Unknown attribute kind (68) (Producer: 'LLVM12.0.0-rust-1.52.0-nightly' Reader: 'LLVM APPLE_1_1200.0.32.29_0')', using libLTO version 'LLVM version 12.0.0, (clang-1200.0.32.29)' for architecture arm64
```

This seems to indicate that (1.52-nightly) Rust's LLVM includes attributes that Xcode's version doesn't know about yet. Using the 2020-12-31 nightly resolves the issue.
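One way to keep the known-good nightly from drifting is to pin it with a `rust-toolchain.toml` in the crate root, which rustup picks up automatically. A sketch; the `rust-src` component is listed because `-Z build-std` requires it:

```shell
# Pin the known-good nightly so local builds and CI agree on the toolchain.
cat > rust-toolchain.toml <<'EOF'
[toolchain]
channel = "nightly-2020-12-31"
# rust-src is needed for -Z build-std
components = ["rust-src"]
EOF

cat rust-toolchain.toml
```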
For posterity, my script looks like:
```sh
#!/bin/sh
set -ex

: "${LIBNAME:=libfoo}"
: "${OUTNAME:=FooRust}"
: "${TOOLCHAIN:=nightly-2020-12-31}"
: "${PROFILE:=release}"
: "${PROFDIR:=$PROFILE}"
: "${MACVER:=10.7}"
: "${IOSVER:=14.1}"

PLATFORMS="
apple-darwin$MACVER
apple-ios$IOSVER
apple-ios$IOSVER-simulator
apple-ios$IOSVER-macabi
"
suffixes=$(mktemp -d)
echo "macos"         > $suffixes/apple-darwin$MACVER
echo "ios"           > $suffixes/apple-ios$IOSVER
echo "ios-simulator" > $suffixes/apple-ios$IOSVER-simulator
echo "ios-macabi"    > $suffixes/apple-ios$IOSVER-macabi

ARCHS="
aarch64
x86_64
"
subarchs=$(mktemp -d)
echo "arm64v8" > $subarchs/aarch64
echo "x86_64"  > $subarchs/x86_64

xc_args=""
for PLATFORM in $PLATFORMS
do
    lipo_args=""
    for ARCH in $ARCHS
    do
        triple="$ARCH-$PLATFORM"
        cargo +$TOOLCHAIN build \
            -Z unstable-options --profile $PROFILE \
            -Z build-std \
            --target "$triple.json"
        larch=$(< $subarchs/$ARCH)
        lipo_args="$lipo_args -arch $larch target/$triple/$PROFDIR/$LIBNAME.a"
    done
    suffix=$(< $suffixes/$PLATFORM)
    lipo -create $lipo_args -output $LIBNAME-$suffix.a
    xc_args="$xc_args -library $LIBNAME-$suffix.a"
    xc_args="$xc_args -headers include"
done

xcodebuild -create-xcframework $xc_args -output $OUTNAME.xcframework
```
The reason I don't have a module map is that I'm packaging this xcframework using the Swift Package Manager, which will automatically generate one for you for binary xcframework targets :)
@dcow I have a similar thing working with current nightlies. You just need to use `jq` to modify the target's JSON to have the correct LLVM target, which your script forgets to do if I'm reading it right. My code for this lives over at https://github.com/cormacrelf/CiteprocRsKit in the Scripts directory. Bit messy, but it works.
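The `jq` step being described can be sketched like this; python3 stands in for `jq` here so the snippet has no extra dependency (the equivalent jq filter would be `'."llvm-target" = "x86_64-apple-ios14.0-simulator"'`), and the input file is a minimal stand-in for a real dumped target spec:

```shell
cat > x86_64-apple-ios.json <<'EOF'
{"llvm-target": "x86_64-apple-ios", "arch": "x86_64"}
EOF

# Rewrite llvm-target so LLVM sees both the version and the simulator env.
python3 - <<'EOF'
import json
with open("x86_64-apple-ios.json") as f:
    spec = json.load(f)
spec["llvm-target"] = "x86_64-apple-ios14.0-simulator"
with open("x86_64-apple-ios14.0-simulator.json", "w") as f:
    json.dump(spec, f, indent=2)
EOF

grep llvm-target x86_64-apple-ios14.0-simulator.json
```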
Irrelevant to rust-lang/rust, but I did things a bit differently: I used the `LLVM_TARGET_TRIPLE_*` env variables to create such a triple that works correctly even for the simulator. You can also get the list of archs from `$ARCHS` etc., as Carthage will build one `PLATFORM_NAME` at a time with all the `ARCHS` at once. The `jq` thing is enough for your script.
> @dcow I have a similar thing working with current nightlies. You just need to use `jq` to modify the target's JSON to have the correct LLVM target, which your script forgets to do if I'm reading it right.
I just have different target JSON files rather than modifying a single one with `jq`.
> ...to create such a triple that works correctly even for the simulator...
I'm able to run on the simulator using the script I posted above; that all works fine. After running `fat.sh`, my project looks like:
```
Cargo.lock
Cargo.toml
README.md
FooRust.xcframework
aarch64-apple-darwin10.7.json
aarch64-apple-ios14.0-macabi.json
aarch64-apple-ios14.0-simulator.json
aarch64-apple-ios14.0.json
clean.sh
fat.sh
include
iphone.sh
libfoo-ios-macabi.a
libfoo-ios-simulator.a
libfoo-ios.a
libfoo-macos.a
src
target
tests
thin.sh
x86_64-apple-darwin10.7.json
x86_64-apple-ios14.0-macabi.json
x86_64-apple-ios14.0-simulator.json
x86_64-apple-ios14.0.json
```
We're not using Carthage, just pure SwiftPM. I have a Swift package project that includes Swift code to interface with the FFI. I build the binary xcframework using the artifacts from the crate where I've added my FFI. In the Swift package I have a directory where I copy the xcframework and use it as a binary target. There is a Swift target containing code to interface with the Rust crate via the published FFI, which depends on the binary target. I guess my goal was to have this working in Swift Package Manager and building via the `swift package` command rather than relying on Xcode to do the heavy lifting.
The swift package looks like:

```
.
├── Libs
│   └── FooRust.xcframework
│       ├── Info.plist
│       ├── ios-arm64_x86_64
│       │   ├── Headers
│       │   │   └── foo.h
│       │   └── libfoo-ios.a
│       ├── ios-arm64_x86_64-maccatalyst
│       │   ├── Headers
│       │   │   └── foo.h
│       │   └── libfoo-ios-macabi.a
│       ├── ios-arm64_x86_64-simulator
│       │   ├── Headers
│       │   │   └── foo.h
│       │   └── libfoo-ios-simulator.a
│       └── macos-arm64_x86_64
│           ├── Headers
│           │   └── foo.h
│           └── libfoo-macos.a
├── Package.swift
├── README.md
├── Sources
│   ├── Foo
│   │   └── Foo.swift
│   └── FooC
│       ├── dummy.c
│       └── include
│           └── foo.h
└── Tests
    ├── LinuxMain.swift
    └── Foo-swiftTests
        ├── FooC_swiftTests.swift
        ├── Foo_swiftTests.swift
        └── XCTestManifests.swift
```
We do use Xcode, of course, and I may try to get what you have working over in CiteprocRsKit set up for us, so thanks for the leads. I do get the impression Apple is leaning into xcframeworks for integrating binary artifacts into the Swift ecosystem, although I absolutely respect the aesthetic beauty of getting the build working by passing everything through Xcode. Building for all platforms is an annoying kink in the workflow, but not a showstopper for our use case, and not without its own advantages/tradeoffs. It would be cool if you could create a dynamic "script" target using SwiftPM that would have access to all the appropriate env vars and just call out to cargo. The artifacts of such a script could be specified in the build script and then included normally via the appropriate search paths.
Just wanted to provide a little rationale in response to your "why do it this way" and "never build pure rust xcframework" questions/comments.
Back to Rust stuff: I tested 1.55 nightly with the new Xcode 13 beta (which uses LLVM 12+). I no longer have an issue building or archiving. I think that confirms the LLVM version mismatch hypothesis. This scenario could happen again in the future, though, so it may always be something to watch out for. And it wouldn't happen if we didn't have to use Rust nightly to build xcframeworks. So the original issue still stands: building an xcframework requires Rust nightly, because it requires a custom target, because mainline Rust does not use a sufficiently specified LLVM target.
During this whole debugging process I stumbled upon https://github.com/getditto/rust-bitcode, which allows building Rust with a specific Apple LLVM backend so that bitcode can be used. Since I had the compiler checked out anyway, I tested it and it seems to work with current upstream Rust.
Update from me, which is of course still off-topic for the Rust repo, but hopefully useful.
Doing it the fully-Xcode way finally hit a snag. My Swift code needs to re-export items from the FFI headers. Swift is generally bad at this, and if you do it, it gets in the way of using module stability to make the final product work across Swift compilers other than the one it was compiled with. There isn't an obvious way (using custom module maps) to bring the FFI module into scope in the .swiftinterface file, so it just says "no such module YourRustFFIModule". It works fine when the Swift compiler versions match, and fine if you don't re-export anything; use a different Xcode with re-exports and you're in trouble.
So if @dcow's solution can do this, it would have the advantage. I suspect it can, because it seems like consumers would be able to compile your package from source, so the swiftc version mismatch is irrelevant. However, I am guessing that it requires providing a download URL for the Rust xcframework to put in the Package.swift, to avoid placing many versions of multiple large binary blobs in git. I will maybe give this a go.
Rust 1.55 works fine with Xcode 13 (13.1); however, for 1.56 `ld` fails with some unknown attribute, which seems to be related to the different LLVM versions (LLVM 13 for 1.56, LLVM 12 for Xcode 12). Without bitcode it builds fine.
Hi,
We are currently trying to build an xcframework which includes a Rust static library as a binary target. When we build the xcframework with

```sh
xcodebuild -create-xcframework -library target/aarch64-apple-ios/release/libxcframework_test.a -headers test.h -output test.xcframework
```

we get `The CodingKeys(stringValue: "SupportedPlatform", intValue: nil) is empty in library -arm64.`. To reproduce the failure, just build a static lib with a function in it, e.g.:

```sh
cargo build --release --target aarch64-apple-ios
```

Interestingly enough, the binary works when statically linked during the usual build process in Xcode (so the code itself seems to be correct). Further, for the darwin binary (target macos) the xcframework creation process succeeds. I think with rustc 1.43 the xcframework also worked for the iOS platform, but other than that we have no idea what is wrong. It is certainly somehow linked to Rust, though, as C/C++ libraries (e.g. libsodium) work with exactly the same command/folder structure.

[EDIT] We found that LC_VERSION_MIN_* is emitted instead of LC_BUILD_VERSION in the load commands of the Mach-O binary. Apparently LC_BUILD_VERSION is the command now used to specify the platform.
As a comparison: [screenshot of the Xcode-produced load commands] vs. [screenshot of the Rust-produced load commands]
[EDIT 2] Getting closer: found https://github.com/rust-lang/rust/issues/29664 and made a new target where I set the llvm-target to the one with the correct version. Now I can create an xcframework again!