tensorflow / tensorflow

An Open Source Machine Learning Framework for Everyone
https://tensorflow.org
Apache License 2.0

Cannot install TensorFlowLiteSwift with CocoaPods #25800

Closed eospi closed 5 years ago

eospi commented 5 years ago

System information

I'm excited to see that TensorFlow Lite for Swift is available! https://github.com/tensorflow/tensorflow/tree/master/tensorflow/lite/experimental/swift

I tried installing it with CocoaPods using the instructions at the link above, but I get the following error. Is the CocoaPod available yet? Thanks!

[!] Unable to find a specification for TensorFlowLiteSwift

ghost commented 5 years ago

Hi eospi, thank you for trying out the TensorFlow Lite Swift library. Unfortunately, the CocoaPod is not yet available. Will remove the instructions from the README. Sorry for the confusion.

bgogul commented 5 years ago

Thanks, @temrich. I will close this issue then.

jpangburn commented 5 years ago

Hi, it seems these instructions have been added back, as they say you should put this line in your Podfile:

pod 'TensorFlowLiteSwift'

But it seems that it's still not available. The instructions came back in commit 76e879d1c18c74ee5cbc5a1162ff254a0cfb221b.

Maybe I'm just jumping the gun and this will show up in CocoaPods after a nightly build or something?

ghost commented 5 years ago

Hey Jesse, At the moment the CocoaPods have not been pushed publicly. Hoping to do that soon.

In the meantime (if you want), you can play around with the pods locally.

You will need to configure the TensorFlowLiteC podspec which requires building the TensorFlowLiteC.framework. You can do that using Bazel by following these steps. Then you will need to push the TensorFlowLiteC.podspec to a local CPDC repo and point your Podfile to the local cpdc source and local TensorFlowLiteSwift or TensorFlowLiteObjC pods, which will be located in the root tensorflow directory after you follow the Getting Started steps.

If you decide to test locally, please let me know if you run into any issues. Again, hoping to push these pods to public CPDC over the next few days.

jpangburn commented 5 years ago

Hi @temrich I followed your steps (thank you) with the following results:

File "/private/var/tmp/_bazel_jpangburn/820d999d6733d6499781c2d354fc5533/execroot/org_tensorflow/bazel-out/host/bin/external/build_bazel_rules_apple/tools/codesigningtool/codesigningtool.runfiles/build_bazel_rules_apple/tools/codesigningtool/codesigningtool.py", line 107, in _filter_codesign_output
    for line in codesign_output.split("\n"):
TypeError: a bytes-like object is required, not 'str'

I'm new to Bazel, so maybe there's some obvious signing step I should have done, like configuring a wildcard provisioning profile or something. So I hoped the bazel test ... step was unnecessary and proceeded.

ERROR: /Users/jpangburn/Documents/tensorflowlite/tensorflow/tensorflow/lite/experimental/swift/BUILD:17:1: Compiling Swift module TensorFlowLite failed (Exit 1): bazel_xcode_wrapper failed: error executing command
  (cd /private/var/tmp/_bazel_jpangburn/820d999d6733d6499781c2d354fc5533/execroot/org_tensorflow && \
  exec env - \
    APPLE_SDK_PLATFORM=iPhoneOS \
    APPLE_SDK_VERSION_OVERRIDE=12.1 \
    XCODE_VERSION_OVERRIDE=10.1.0 \
  bazel-out/host/bin/external/build_bazel_rules_swift/tools/wrappers/bazel_xcode_wrapper bazel-out/host/bin/external/build_bazel_rules_swift/tools/wrappers/swift_wrapper /usr/bin/xcrun swiftc '-Xwrapped-swift=-ephemeral-module-cache' @bazel-out/ios_arm64-dbg/bin/tensorflow/lite/experimental/swift/TensorFlowLite.swiftmodule-0.params @bazel-out/ios_arm64-dbg/bin/tensorflow/lite/experimental/swift/TensorFlowLite.swiftmodule-1.params)
Execution platform: @bazel_tools//platforms:host_platform
/Users/jpangburn/Documents/tensorflowlite/tensorflow/tensorflow/lite/experimental/swift/Sources/Interpreter.swift:59:69: error: value of optional type 'CVaListPointer?' must be unwrapped to a value of type 'CVaListPointer'
        let message = String(cFormat: cFormat, arguments: arguments)

So I edited the Interpreter.swift file and commented out lines 54-67, because the problem was occurring inside a section about error logging (nice to have, of course, but I assumed not critical) and I'm new enough to Swift that I didn't know the right way to fix it. The build completed, apparently successfully, as it says it produced this library file:

Copying libTensorFlowLite.a to /Users/jpangburn/Library/Developer/Xcode/DerivedData/TensorFlowLite-adluxtrdxpqerubkxpcnjtavgswx/Build/Products/Debug-iphoneos/libtensorflow-lite-experimental-swift-TensorFlowLite.a

I then added these lines to my Podfile:

pod 'TensorFlowLiteC', :path => '/Users/jpangburn/Documents/tensorflowlite/tensorflow/tensorflow/lite/experimental/c/'
pod 'TensorFlowLiteSwift', :path => '/Users/jpangburn/Documents/tensorflowlite/tensorflow/'

Then I did a pod install --repo-update and it generated an Xcode workspace file. But building from that workspace failed with:

no such module 'TensorFlowLiteC'

It appears the Swift code can't import that module. So my workaround for not knowing what CPDC is, doing the above in the Podfile instead, didn't really work: it let CocoaPods generate the workspace file, but the build still fails. Being new to CocoaPods, I'm guessing here, but I figured the vendored_frameworks line in the TensorFlowLiteC.podspec file needs to actually point at the framework. So I copied TensorFlowLiteC.podspec to the root tensorflow directory, copied the framework file there too, and changed that line to:

s.vendored_frameworks = 'TensorFlowLiteC.framework'

After also changing the Podfile to point the TensorFlowLiteC path to the root tensorflow directory, this time when I generated the workspace the TensorFlowLiteC pod actually showed the framework inside it. But when I tried to run the project I got:

<module-includes>:1:9: note: in file included from <module-includes>:1:
#import "Headers/TensorFlowLiteC.h"
        ^
/Users/jpangburn/Documents/tensorflowlite/tensorflow/TensorFlowLiteC.framework/Headers/TensorFlowLiteC.h:1:9: note: in file included from /Users/jpangburn/Documents/tensorflowlite/tensorflow/TensorFlowLiteC.framework/Headers/TensorFlowLiteC.h:1:
#import "c_api.h"
        ^
/Users/jpangburn/Documents/tensorflowlite/tensorflow/TensorFlowLiteC.framework/Headers/c_api.h:24:10: error: 'tensorflow/lite/context.h' file not found
#include "tensorflow/lite/context.h"
         ^
/Users/jpangburn/Documents/tensorflowlite/tensorflow/tensorflow/lite/experimental/swift/Sources/Interpreter.swift:16:8: error: could not build Objective-C module 'TensorFlowLiteC'
import TensorFlowLiteC
       ^

So I guess the framework that got built doesn't include the necessary .h files? I tried to work around this by adding a -I flag in the xcconfig file for the TensorFlowLiteSwift pod, but then the error changes to:

/Users/jpangburn/Documents/tensorflowlite/tensorflow/TensorFlowLiteC.framework/Headers/TensorFlowLiteC.h:1:9: note: in file included from /Users/jpangburn/Documents/tensorflowlite/tensorflow/TensorFlowLiteC.framework/Headers/TensorFlowLiteC.h:1:
#import "c_api.h"
        ^
/Users/jpangburn/Documents/tensorflowlite/tensorflow/TensorFlowLiteC.framework/Headers/c_api.h:24:10: error: include of non-modular header inside framework module 'TensorFlowLiteC.c_api': '/Users/jpangburn/Documents/tensorflowlite/tensorflow/tensorflow/lite/context.h'

It seems that it doesn't like having headers inside the framework include headers that live outside the framework. So I'm guessing the framework isn't built correctly: it needs to include that context.h header and whatever other headers are chained from there.

Stopping here for now; I hope the above comments are useful, at least for the early problems like the "bazel test" failure and the "CVaListPointer" error. Thanks again for your help!

ghost commented 5 years ago

You will only need Bazel to generate the TensorFlowLiteC.framework, which is produced as a zip file. Once you have the zip file, you can unzip it in the tensorflow root directory, which is where the TensorFlowLiteC.podspec will look once it has been pushed to your local CocoaPods repo:

unzip bazel-bin/tensorflow/lite/experimental/c/TensorFlowLiteC_framework.zip -d /Users/path/to/tensorflow/Frameworks

This line will need to be updated to point to your local TF git repo that contains the TensorFlowLiteC.podspec and the TensorFlowLiteC.framework. Update it to the following:

s.source = { :git => '/Users/path/to/local/tensorflow/.git' }

Regarding local CPDC, you are correct, I was referring to a local repo. For example:

cd ~/.cocoapods/repos
pod repo add <local_repo_name> /Users/path/to/local/tensorflow

At this point, you should be able to push the TensorFlowLiteC.podspec to your local CocoaPods repo, for example:

pod repo push <local_repo_name> /Users/path/to/local/tensorflow/tensorflow/lite/experimental/c/TensorFlowLiteC.podspec

In your Podfile, you will need the following:

source '/Users/<username>/.cocoapods/repos/<local_repo_name>'

Point your Podfile to the TensorFlowLiteSwift or TensorFlowLiteObjC podspec:

pod 'TensorFlowLiteSwift', :path => '/Users/path/to/local/tensorflow'

or:

pod 'TensorFlowLiteObjC', :path => '/Users/path/to/local/tensorflow'

Hopefully these steps will help, but please let me know if you are still running into build issues. Really appreciate you trying this out!

jpangburn commented 5 years ago

OK, here are the steps I took and results.

I double-checked the spelling, then tried pod lib lint TensorFlowLiteC.podspec; it doesn't report an error, just a warning that "Git sources should specify a tag".

The only difference now from before is that the TensorFlowLiteC pod has moved out of the "Development Pods" group in the Xcode Pods project and into the "Pods" group. The "TensorFlowLiteSwift" pod remains in the "Development Pods" group.

Ultimately, the problem seems the same: the TensorFlowLiteSwift pod's Interpreter.swift imports the TensorFlowLiteC module, which causes the compiler to import the TensorFlowLiteC.h file in the TensorFlowLiteC.framework, which imports c_api.h, which imports "tensorflow/lite/context.h", which is not visible to the compiler.

I was hoping the line in the Podfile that you mentioned, source '/Users/jpangburn/.cocoapods/repos/localtfrepo', was going to cause that to be set up properly, but it didn't seem to matter. I also tried moving that line under the target in the Podfile, but no change. I assume if I change the compiler flags to look at my tensorflow directory for includes, I'll just get the "non-modular header" import problem again. Last time, I tried the setting that allows non-modular headers in both the Pods and my app build settings, but it made no difference.

I'm happy to try some more, or wait for the pod so as not to take up more of your time. For me, this project will save me having to make ObjC bridge files and map data to unsafe pointers to feed data to my model. So it's worth playing with :-) Thanks again for your time!

ghost commented 5 years ago

I was able to repro the issue you are running into for a Swift project. Doesn't appear to be an issue with an Objective-C project. The TensorFlowLiteC framework needs to be updated a bit to make it modular. Right now, the framework depends on tensorflow/lite/context.h and tensorflow/lite/c/c_api_internal.h that are not located in the tensorflow/lite/experimental/c directory. We can configure this when we generate the TensorFlowLiteC.framework for the public TensorFlowLiteC CocoaPod, but it's a bit of a pain when developing locally. Working on a fix for that.

In the meantime, you can do the following to get this working locally:

  1. Update the ios_static_framework target, then regenerate the framework:

ios_static_framework(
    name = "TensorFlowLiteC_framework",
    hdrs = [
        "c_api.h",
        "//tensorflow/lite:context.h",
        "//tensorflow/lite/c:c_api_internal.h",
    ],
    bundle_name = "TensorFlowLiteC",
    minimum_os_version = "9.0",
    version = ":TensorFlowLiteC_version",
    deps = [":c_api"],
)

  2. When you get to the stage where you are ready to build the Swift project, you will still hit compiler errors about not being able to find the context.h and c_api_internal.h headers. You just need to strip the tensorflow/lite and tensorflow/lite/c paths from those includes and it should build. The reason is that the TensorFlowLiteC.framework adds its public headers to the TensorFlowLiteC.framework/Headers directory and doesn't maintain the TF directory structure.

Please let me know if you still run into any issues.

jpangburn commented 5 years ago

That worked on the simulator, thanks a lot! After it built successfully I was able to init an Interpreter object using an absolute path to a tflite model file, and called .allocateTensors() and .invoke() on it with no errors. So it was able to process my model (with whatever default values were inside the tensors after the allocation). For me, this is enough for now as I have plenty to do learning this API and the rest of the project to use it from.

That said, you might want to take a look at the following items as you make the public version:

  1. Doesn't compile for real device. Commenting Interpreter.swift lines 54-67 resolved that.
  2. Loading the model via a relative path doesn't work; only an absolute path does.

Again, thanks for your help, very excited to play with this! Don't worry about these last items on my behalf unless you want to :-) If you do, I'll gladly test it. But I imagine for me and anyone else following along at this point, we'll have our hands full on the simulator, unless someone is using a camera to gather data, I guess, which I'm not.

ghost commented 5 years ago

That's great to hear! Regarding not compiling/linking on a real device, we will definitely look into that before pushing the pod public. Appreciate the feedback.

For loading the model via a relative path, can you provide the code you are using?

ghost commented 5 years ago

Also, for "Doesn't compile for real device. Commenting Interpreter.swift lines 54-67 resolved that.", is that only when Bitcode is enabled for your app? If you disable Bitcode (temporarily), does the app build successfully on a real device?

If that's a separate issue, can you provide the compiler error that you see prior to commenting out lines 54-67?

Thank you!

jpangburn commented 5 years ago

Yes, disabling Bitcode allowed the app to build/run on a real device.

The compiler error prior to commenting out 54-67 is that CVaListPointer error:

/Users/jpangburn/Documents/tensorflowlite/tensorflow/tensorflow/lite/experimental/swift/Sources/Interpreter.swift:59:69: error: value of optional type 'CVaListPointer?' must be unwrapped to a value of type 'CVaListPointer'
        let message = String(cFormat: cFormat, arguments: arguments)
                                                          ^
/Users/jpangburn/Documents/tensorflowlite/tensorflow/tensorflow/lite/experimental/swift/Sources/Interpreter.swift:59:69: note: coalesce using '??' to provide a default when the optional value contains 'nil'
        let message = String(cFormat: cFormat, arguments: arguments)
                                                          ^
                                                           ?? <#default value#>
/Users/jpangburn/Documents/tensorflowlite/tensorflow/tensorflow/lite/experimental/swift/Sources/Interpreter.swift:59:69: note: force-unwrap using '!' to abort execution if the optional value contains 'nil'
        let message = String(cFormat: cFormat, arguments: arguments)
                                                          ^
                                                           !
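For what it's worth, the diagnostic itself is generic Swift behavior rather than anything TFLite-specific: a value of optional type has to be unwrapped before it can be used where a non-optional is expected. A minimal illustration with hypothetical names (nothing here is from the TFLite source):

```swift
// Minimal illustration of the diagnostic: an Int? cannot be used where an
// Int is required until it is unwrapped, e.g. with guard let.
func describeValue(_ value: Int?) -> String {
    // Force-unwrapping with value! would crash on nil; guard let is the safe fix.
    guard let unwrapped = value else { return "nil" }
    return String(describing: unwrapped)
}

print(describeValue(42))   // prints "42"
print(describeValue(nil))  // prints "nil"
```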

For loading the model, here's the code I use that works (for now I just changed viewDidLoad in the ViewController of a default single-page iOS app):

override func viewDidLoad() {
    super.viewDidLoad()
    // Do any additional setup after loading the view, typically from a nib.
    do {
        let interpreter : Interpreter = try Interpreter(modelPath: "/Users/jpangburn/Documents/Xcode_projects/TestWorkspace/TFLiteSwiftCocoaPodTestApp/TFLiteSwiftCocoaPodTestApp/vggish_converted.tflite")
        try interpreter.allocateTensors()
        try interpreter.invoke()
    } catch {
        print("caught exception initing model")
        return
    }
    print("completed viewDidLoad")
}

The actual model name is "vggish_converted.tflite". Changing it to a relative path doesn't work, and the debugger won't step down into the C code to see why (e.g. whether the path is relative to somewhere else).

ghost commented 5 years ago

For the Bitcode issue, mind regenerating the framework with the following (see the Bazel CROSSTOOL change to support Bitcode)?

bazel build tensorflow/lite/experimental/c:TensorFlowLiteC_framework -c fastbuild --ios_multi_cpus=x86_64,armv7,arm64 --apple_bitcode=embedded

For the compiler error for CVaListPointer? type, can you replace the call to the ErrorReporter with the following (appears to be a Swift compiler bug):

TFL_InterpreterOptionsSetErrorReporter(
  cOptions,
  { (_, format, args) -> Void in
    let optionalArgs: CVaListPointer? = args
    guard let cFormat = format,
          let arguments = optionalArgs,
          let message = String(cFormat: cFormat, arguments: arguments)
    else {
      return
    }
    print(String(describing: InterpreterError.tensorFlowLiteError(message)))
  },
  nil
)

We will push these updates shortly. For the relative path to model issue, still looking into this.

jpangburn commented 5 years ago

The compiler error fix to CVaListPointer? worked great!

The Bitcode issue was not resolved on my machine. I took the following steps:

  1. deleted tensorflow/Frameworks/TensorFlowLiteC.framework
  2. bazel clean
  3. Cut and pasted your updated bazel command; it completed with no errors
  4. copied the new TensorFlowLiteC.framework to tensorflow/Frameworks/TensorFlowLiteC.framework
  5. edited the c_api.h and context.h to fix those include paths
  6. clean and build the project in Xcode

Got the same linker error, but the compiler error was gone (as mentioned above). I disabled Bitcode on the project to verify the app still starts up, and it does. I see that change is relatively new, so I checked my Bazel version: it's 0.23.2, which appears to have been released nearly a month after that commit. I also checked bazel help build | grep apple and I see the apple_bitcode flag with the embedded option, so I don't see any issue there. Not sure what else to try for that?

ghost commented 5 years ago

Can you try also passing --copt=-fembed-bitcode?

bazel build tensorflow/lite/experimental/c:TensorFlowLiteC_framework -c fastbuild --ios_multi_cpus=x86_64,armv7,arm64 --apple_bitcode=embedded --copt=-fembed-bitcode

jpangburn commented 5 years ago

Looks like that worked! Bitcode is enabled on my project and it compiled and executed on a real device. I checked to make sure it didn't break the simulator and it works fine there too.

So the relative path thing is all you have left to figure out before you publish the pod? Or maybe you have a laundry list :-) Either way, pretty exciting stuff.

ghost commented 5 years ago

That's good to hear that everything is working now! Thanks again for all your help with testing out the pipeline!

Regarding loading a model file, are you trying to load the file from the main bundle or a custom bundle? Did you try something like this: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/experimental/swift/TestApps/TensorFlowLiteApp/TensorFlowLiteApp/ViewController.swift#L95
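For anyone following along, the bundle lookup in the linked ViewController boils down to something like this sketch. The model name is the one from this thread, and Interpreter comes from the TensorFlowLiteSwift pod, so its use is left commented out:

```swift
import Foundation

// Resolve a .tflite model that was added to the app target, so Xcode copies
// it into the main bundle at build time; returns nil if the file isn't bundled.
func bundledModelPath(named name: String, in bundle: Bundle = .main) -> String? {
    return bundle.path(forResource: name, ofType: "tflite")
}

if let modelPath = bundledModelPath(named: "vggish_converted") {
    // let interpreter = try Interpreter(modelPath: modelPath)
    print("model found at \(modelPath)")
} else {
    print("model not found; check the file's target membership in Xcode")
}
```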

jpangburn commented 5 years ago

Yeah, sorry, my prior iOS work was with Cordova, and I'm just starting out with Swift. I know nothing of bundles and figured I could just use a relative path. I had been playing with the SpeechCommands example and mistakenly thought it used a simple relative path to the model file. After your comment I dug into it, and it also uses this bundle mechanism. Looks straightforward to use. Sorry for the ignorant mistake!

BTW, I checked the tensor sizes reported by this Swift API against what that example was writing into unsafe pointers, and they match up perfectly. So I don't see any obvious issues left. You're welcome for the help testing; looking forward to making apps with the CocoaPods for this, so easy!

ghost commented 5 years ago

No problem at all! Please feel free to open up a new issue if you run into any new problems.

Hoping to push the new CocoaPods public over the next week or two.

jpangburn commented 5 years ago

The switches on that last bazel command that allow Bitcode to work in the framework compilation are horrible for performance, from my initial tests; you may want to investigate this a bit before pushing those CocoaPods. I converted the SpeechCommands sample that I linked above to use this Swift API and tested the latency of interpreter.invoke() on the model from that sample with this code:

    let dateBefore = Date()
    try interpreter.invoke()
    let dateAfter = Date().timeIntervalSince(dateBefore) * 1000.0

Using the framework compiled with bitcode support on a real device it took ~1200 ms vs ~72 ms without the bitcode support. On a simulator it was ~400 ms with bitcode support vs ~11 ms without. These are all debug builds running from Xcode.
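The measurement pattern above can be factored into a small helper so the same timing code works for interpreter.invoke() or any other closure. A sketch; the sleep below just stands in for the invoke call:

```swift
import Foundation

// Time a closure and return the elapsed wall-clock duration in milliseconds.
func measureMillis(_ work: () throws -> Void) rethrows -> Double {
    let start = Date()
    try work()
    return Date().timeIntervalSince(start) * 1000.0
}

// Example: a 10 ms sleep standing in for try interpreter.invoke().
let elapsed = measureMillis { Thread.sleep(forTimeInterval: 0.01) }
print("invoke took \(elapsed) ms")
```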

ghost commented 5 years ago

Are you running in Release or Debug mode when targeting a real iOS device? Also, the command to generate the framework is using -c fastbuild rather than -c opt, which may affect performance as well. We need to wait for Bazel 0.24 to be able to use -c opt and bitcode.

If you are seeing this in Release mode, can you please open a new issue with the details you provided above? Thank you!

jpangburn commented 5 years ago

I was running in Debug mode for both. I edited my scheme to use Release mode for running, cleaned and rebuilt, and the performance is about the same. I'll open a new issue, thanks!

willbattel commented 5 years ago

@temrich is there a rough estimate of when we might see a publicly available TensorFlowLiteSwift pod made available?

ghost commented 5 years ago

Hoping to push them to public CPDC this week. Will post here once they have been pushed.

ghost commented 5 years ago

The TensorFlowLiteSwift and TensorFlowLiteObjC pods are now available!

Please note that these libraries are still under "experimental," so the APIs may change.

Known issue: installing the ObjC or Swift pod into your project for the first time may take a bit longer than normal as CocoaPods has to clone the entire TensorFlow git repo so that it can grab the source_files that are defined in each podspec.

If you run into any other issues or would like to provide general feedback, please use the comp:lite label.

Thank you!

jpangburn commented 5 years ago

Congrats :-) I tested the publicly available TensorFlowLiteSwift pod just now with the SpeechCommands example project that I converted to Swift and it worked great. Looks like you got that bitcode performance problem resolved too! Runs at the same speed now whether or not bitcode is enabled.

ghost commented 5 years ago

@jpangburn thank you for testing and verifying!