Closed ralphtheninja closed 6 years ago
We could also add prebuild-ci which allows binaries to be built automatically by Travis (and/or AppVeyor). This is the way we use prebuild/prebuild-ci for projects like leveldown.
Yes, I'd definitely like to improve the current status here. Some thoughts...
I don't think a pre-build step would work well, since it takes quite some while to build TensorFlow.
Instead, I'd like to automate pulling these down during an npm install from some previously built and released location (ideally I'd like the TensorFlow team to release these binaries alongside the Python releases, so they are official and available for every release version - I hope to follow up with some engineers on the TensorFlow team in the next few days). Is there an established practice for pulling down binaries (where you can pick the binaries depending on OS, and user-specified options like TF version and GPU)?
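As one possible shape for such a download step, a small helper could map the platform and user options onto a binary URL. Everything below (the base URL, the file-naming scheme, and the option names) is an assumption for illustration, not an established convention:

```javascript
// Hypothetical sketch: choose a TensorFlow C library URL from the platform
// and user-specified options. Base URL and naming scheme are assumed.
function libtensorflowUrl(platform, arch, { version = '1.4.0', gpu = false } = {}) {
  const osName = { linux: 'linux', darwin: 'darwin', win32: 'windows' }[platform];
  if (!osName) throw new Error('Unsupported platform: ' + platform);
  const archName = arch === 'x64' ? 'x86_64' : arch;
  const variant = gpu ? 'gpu' : 'cpu';
  return 'https://example.com/libtensorflow/' + // assumed hosting location
    'libtensorflow-' + variant + '-' + osName + '-' + archName + '-' + version + '.tar.gz';
}

// At install time this would be called with the current machine's values:
// libtensorflowUrl(process.platform, process.arch, { gpu: true });
```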
Aah I misunderstood. I thought this package was using node-gyp but I see now that ffi is being used instead. Cool!
For leveldown we use prebuild-install which pulls down prebuilt node addons based on OS, architecture etc. But that is made for working with prebuilt node addons.
> Is there an established practice for pulling down binaries (where you can pick the binaries depending on OS, and user-specified options like TF version and GPU)?
How large are the TensorFlow binaries? You could just put them inside the git repo and publish them to npm directly. I haven't used the ffi method myself yet, so I'm not sure what the best workflow is.
> Is there an established practice for pulling down binaries (where you can pick the binaries depending on OS, and user-specified options like TF version and GPU)?
We could hack up something like prebuild-install-ffi which could handle this use case. An important piece of functionality in prebuild-install is that it caches the binaries in ~/.npm/_prebuilds.
Scratch my previous ramblings. prebuild-install should be able to handle the use case of downloading a binary (but we might need to tweak it a bit, i.e. not do any require test etc.). I'd also recommend storing them on GitHub, attached to specific releases. This would give you control of the release process instead of relying on the other team to release binaries for you. It can also cut down on unneeded communication.
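To make that concrete, the package.json could wire prebuild-install in as the install step, falling back to a local download script when no prebuilt asset exists. The fallback script name and repository URL below are placeholders, not this package's actual configuration; by default prebuild-install uses the repository field and package version to locate GitHub release assets:

```json
{
  "scripts": {
    "install": "prebuild-install || node scripts/download-libtensorflow.js"
  },
  "repository": {
    "type": "git",
    "url": "https://github.com/OWNER/REPO.git"
  }
}
```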
The TensorFlow binaries for Ubuntu are ~60MB, and I expect different binaries for OSX and Windows, so it's unlikely to be a wise idea to include them in the npm package.
I am hoping we can have a postinstall script which pulls down the binaries ... we just need a good, known location.
I have mail out to the TensorFlow folks to see if they can be convinced that they should maintain the matrix of libs across OS, version, and GPU/non-GPU variations, so this isn't something this project has to maintain, since its applicability is not scoped to node.js uses.
Made a fix via 20aa73f246a8e27752f97d8a3e95c67507daa17b, but I still need to fully verify it, for which I need to publish an npm package and run through an npm install.
0.6.4 has been published to npm with support for downloading/installing the TensorFlow lib binaries (for Linux and Mac).
E.g. via prebuild or similar. I could help out with setting this up if it sounds interesting. The binaries (.so etc.) could be committed to the git repository, but omitted from npm.