NervanaSystems / neon

Intel® Nervana™ reference deep learning framework committed to best performance on all hardware
http://neon.nervanasys.com/docs/latest
Apache License 2.0
3.87k stars 811 forks

Using Backend #203

Closed pranv closed 8 years ago

pranv commented 8 years ago

Hi,

We are interested in using Nervana's tensor library for custom LSTM/RNN/CWRNN implementations. What is the best way to do that?

Is nervanagpu under active development? Or should we import backends from neon?

hanlint commented 8 years ago

Thanks for your interest!

If you are creating custom layers for models in neon, the approach is straightforward: all objects inherit from NervanaObject, which has a static be variable holding the backend.
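As a rough illustration of that pattern (a simplified pure-Python stand-in, not neon's actual class definitions; the class and method names below are hypothetical), the shared static be variable looks like this:

```python
# Simplified sketch of the shared-backend pattern described above.
# NervanaObject in neon works along these lines; names here are illustrative only.

class NervanaObjectSketch(object):
    be = None  # class-level ("static") backend handle shared by all subclasses


class CustomLayerSketch(NervanaObjectSketch):
    def fprop(self, x):
        # Any layer can reach the backend through self.be, which resolves
        # to the class attribute set once on the base class.
        return self.be


# Installing a backend once makes it visible to every object:
NervanaObjectSketch.be = "stub-backend"
layer = CustomLayerSketch()
result = layer.fprop(None)
```

This is why custom layers written inside neon never need to pass a backend around explicitly: setting the attribute on the base class is enough.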

If you are making custom implementations outside of our neon framework, we expose the backend exactly for this purpose. You can generate a backend via:

from neon.backends import gen_backend
be = gen_backend(backend='gpu') 

You can specify either a CPU or a GPU backend. The backend API supports many of the standard tensor operations that you would expect. Let us know if you run into any problems or if you need more guidance.

For resources, see:

Backend overview: http://neon.nervanasys.com/docs/latest/backends.html
Backend API: http://neon.nervanasys.com/docs/latest/ml_operational_layer.html
Op-Tree: http://neon.nervanasys.com/docs/latest/optree.html
Auto-diff: http://neon.nervanasys.com/docs/latest/autodiff.html

scttl commented 8 years ago

Just to piggyback on what Hanlin already mentioned: nervanagpu is no longer under development, as that code has been integrated directly into neon.

pranv commented 8 years ago

Thanks for the quick reply. We will share our progress in the coming weeks :)

scttl commented 8 years ago

Keep us posted, and feel free to re-open if you have further questions.