Closed: pranv closed this issue 8 years ago
Thanks for your interest!
If you are creating custom layers for models in neon, the approach is straightforward: all objects inherit from `NervanaObject`, which has a static `be` variable for the backend.
If you are making custom implementations outside of the neon framework, we expose the backend exactly for this purpose. You can generate a backend via:

```python
from neon.backends import gen_backend

be = gen_backend(backend='gpu')
```
You can specify either a CPU or a GPU backend. The backend API supports many of the standard tensor operations that you would expect. Let us know if you run into any problems or if you need more guidance.
For resources, see:

- Backend overview: http://neon.nervanasys.com/docs/latest/backends.html
- Backend API: http://neon.nervanasys.com/docs/latest/ml_operational_layer.html
- Op-Tree: http://neon.nervanasys.com/docs/latest/optree.html
- Auto-diff: http://neon.nervanasys.com/docs/latest/autodiff.html
Just to piggyback off of what Hanlin already mentioned, nervanagpu is no longer under development as that code has been integrated directly into neon.
Thanks for the quick reply. We will share our progress in the coming weeks :)
Keep us posted, and feel free to re-open if you have further questions.
Hi,
We are interested in using Nervana's tensor library for custom LSTM/RNN/CWRNN implementations. What is the best way to do this?
Is nervanagpu under active development? Or should we import backends from neon?