erksch opened this issue 3 years ago
I'm not knowledgeable about iOS work, and I'm not working on this project anymore, so I can't really give a definitive answer; I don't even know what CocoaPods is. But I think having everything in the main repo is best.
However, as you can see and as I've replied elsewhere, we moved CI out of TaskCluster to GitHub Actions, and the iOS part is still to be done.
Part of the move is to make things easier for contributors, so we hope it will enable you to send a PR for this 😉
@erksch Can you share your deepspeech_ios.framework? Task Cluster seems to be gone and I'm not sure how to set up my own.
@zaptrem wrote to you on Telegram
Hey there!
We currently integrate DeepSpeech into iOS projects via a private CocoaPod. We do this by hosting the `deepspeech_ios.framework` together with the Swift client source code in a private repository as a pod, and registering it in a private specs repo.

Because some people have been asking for an official DeepSpeech CocoaPod, I thought I'd share the options I see for doing that, and ask for more options or a discussion of what exactly to do.
Our `.podspec` file looks roughly like this (there is no support for different architectures in there):
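(A minimal sketch of the file rather than its exact contents; the repository URL, version, platform, and paths below are hypothetical placeholders.)

```ruby
# Sketch of a private pod that bundles the prebuilt framework together with
# the Swift client sources. URLs, versions, and paths are placeholders.
Pod::Spec.new do |s|
  s.name     = 'DeepSpeech'
  s.version  = '0.9.3'
  s.summary  = 'DeepSpeech speech-to-text engine for iOS.'
  s.homepage = 'https://github.com/mozilla/DeepSpeech'
  s.license  = { :type => 'MPL-2.0' }
  s.author   = 'Mozilla'
  s.platform = :ios, '12.0'

  # Framework and Swift client live together in a private git repository,
  # so everything is available as soon as the pod is checked out.
  s.source = { :git => 'https://example.com/private/deepspeech-ios-pod.git',
               :tag => s.version.to_s }

  s.source_files        = 'deepspeech_ios/**/*.swift'
  s.vendored_frameworks = 'deepspeech_ios.framework'
end
```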
As you can see, we host all artifacts in a separate repository so they can be accessed immediately when the pod is loaded. But this may be impractical for an official pod, because you don't want to host artifacts in the main mozilla/DeepSpeech repo, nor create a separate repo just for artifacts.

I see the following approaches:
1. Use the main DeepSpeech repo as the `source` and add a `prepare_command` in the podspec that builds the framework, for example like here. (This might be too slow.)
2. Create a new repository that contains only the source code and the framework, and use it as the `source` for the pod.
3. Host the artifacts (framework and Swift source code) in GitHub releases, or anywhere else, as a zip and use that as the `source` of the pod (see the sketch after this list).
4. Host the framework in GitHub releases or anywhere else, create a pod for that, and create a second pod for the source code that is consumed directly from DeepSpeech's main repo. (There seems to be no way to have multiple sources for one pod.)
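To illustrate option 3: the podspec would stay essentially the same, except that its source would point at a release zip instead of a git checkout. A minimal sketch, assuming a hypothetical release asset (DeepSpeech does not publish such a zip today):

```ruby
# Option 3 sketch: CocoaPods downloads and unpacks a zip from GitHub
# releases; no git checkout of the main repo is needed. The release URL
# and archive layout are hypothetical.
Pod::Spec.new do |s|
  s.name     = 'DeepSpeech'
  s.version  = '0.9.3'
  s.summary  = 'DeepSpeech speech-to-text engine for iOS.'
  s.homepage = 'https://github.com/mozilla/DeepSpeech'
  s.license  = { :type => 'MPL-2.0' }
  s.author   = 'Mozilla'
  s.platform = :ios, '12.0'

  # The zip would contain both the framework and the Swift client sources.
  s.source = { :http => "https://github.com/mozilla/DeepSpeech/releases/download/v#{s.version}/deepspeech_ios.zip" }

  s.source_files        = 'deepspeech_ios/**/*.swift'
  s.vendored_frameworks = 'deepspeech_ios.framework'
end
```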
I think option 3 is the best; it is also what LibTorch does, for example. It makes the pod easiest to use, and zipping some files and uploading them somewhere should be doable.
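For users, consuming such a pod would then be the usual one-liner in a Podfile (pod name and version hypothetical until something is actually published):

```ruby
# Podfile sketch — the pod name and version are hypothetical.
platform :ios, '12.0'

target 'MyApp' do
  use_frameworks!
  pod 'DeepSpeech', '~> 0.9'
end
```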
What are your thoughts on this?