arrayfire / arrayfire-ml

ArrayFire's Machine Learning Library.
BSD 3-Clause "New" or "Revised" License

RNN Models #20

Open jramapuram opened 9 years ago

jramapuram commented 9 years ago

Once we have implementations of the Layer class (https://github.com/arrayfire/arrayfire_ml/issues/17), the Optimizer class, and the DataSet class, we can go about creating RNN flavors. There are 3 models that should be implemented:

- a plain (Elman) RNN
- GRU
- LSTM

These will require the implementation of their derivatives and their forward-prop values. Certain details to consider:

- backpropagation through time (BPTT)
- real-time recurrent learning (RTRL)

To enable the above two methods of learning, we should consider inheriting from Layer and implementing a Recurrent Layer, along the lines of the sketch below.
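A minimal sketch of what that could look like (the Layer base class from #17 is not finalized, so the class layout, member names, and forward() signature here are all hypothetical illustrations, not the library's API):

```cpp
#include <arrayfire.h>

// Hypothetical base class, standing in for the interface discussed in #17.
class Layer {
public:
    virtual af::array forward(const af::array &input) = 0;
    virtual ~Layer() {}
};

class RecurrentLayer : public Layer {
    af::array mWx;  // input-to-hidden weights  (hidden x in)
    af::array mWh;  // hidden-to-hidden weights (hidden x hidden)
    af::array mB;   // bias                     (hidden x 1)
    af::array mH;   // persistent hidden state  (hidden x batch)
public:
    RecurrentLayer(int in, int hidden, int batch)
        : mWx(af::randn(hidden, in) * 0.01),
          mWh(af::randn(hidden, hidden) * 0.01),
          mB(af::constant(0.0, hidden, 1)),
          mH(af::constant(0.0, hidden, batch)) {}

    // Elman step: h_t = tanh(Wx x_t + Wh h_{t-1} + b)
    af::array forward(const af::array &input) override {
        mH = af::tanh(af::matmul(mWx, input) +
                      af::matmul(mWh, mH) +
                      af::tile(mB, 1, (unsigned)mH.dims(1)));
        return mH;
    }

    // Clear the carried state between independent sequences.
    void resetState() { mH = af::constant(0.0, mH.dims()); }
};
```

The persistent mH member is what distinguishes a stateful recurrent layer from the feed-forward ones; both BPTT and RTRL would hook into forward() to record or propagate derivatives.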

sherjilozair commented 9 years ago

BPTT is much more popularly used than RTRL, so it might be better to prioritize BPTT over RTRL.

jramapuram commented 9 years ago

Unfortunately, BPTT doesn't solve the stateful RNN problem, and truncated BPTT is a crude approximation as well. Also, there are already a plethora of BPTT implementations: Keras, Blocks, Lasagne, Chainer, ...

The only way to have an online RNN is to use RTRL: each step utilizes the full Jacobian product from the previous time step, which allows for smooth information flow.
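(Illustrative aside, not from this thread: the recursion described above is the standard RTRL sensitivity update.)

```latex
% RTRL for an Elman step h_t = \phi(W_x x_t + W_h h_{t-1} + b):
% carry the sensitivity of the state w.r.t. all weights \theta forward
\[
  \frac{\partial h_t}{\partial \theta}
    = \frac{\partial h_t}{\partial h_{t-1}}\,
      \frac{\partial h_{t-1}}{\partial \theta}
    + \frac{\partial^{+} h_t}{\partial \theta}
\]
% \partial^{+} denotes the immediate (direct) partial derivative.
% The carried matrix has size |h| x |\theta|, which is what makes RTRL
% fully online but roughly O(|h|^2 |\theta|) work per step, whereas BPTT
% stores activations and replays the (truncated) sequence backwards.
```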

sherjilozair commented 9 years ago

Pretty much all other RNN implementations are slow and not suitable for production. My own reason for betting on ArrayFire is that it might yield production-ready implementations of deep learning algorithms.

jramapuram commented 7 years ago

I will be doing an internship until September and most likely won't have time to work on this until then. If someone wants to implement these first, that would be great now that we have AD set up.

WilliamTambellini commented 7 years ago

Hi, I would be interested in either LSTM or GRU; the forward pass would be a good first step before implementing the backward pass/training.

pavanky commented 7 years ago

@jramapuram I am going to try and implement this. Maybe you and @WilliamTambellini can review it once I send a PR.

WilliamTambellini commented 7 years ago

Hi @pavanky, sounds good. GRU is usually a little simpler to implement than LSTM, but it's your choice. Have you thought about a possible example application (text generation, summarization, translation, question answering, ...)? Cheers, W.
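For reference, here is a minimal sketch of one GRU forward step with ArrayFire; the weight names, the gruStep signature, and the PyTorch-style gating convention are illustrative assumptions, not anything from arrayfire-ml:

```cpp
#include <arrayfire.h>

// One GRU step. W* act on the input, U* on the previous state; b* are biases.
af::array gruStep(const af::array &x,   // input          (in x batch)
                  const af::array &h,   // previous state (hidden x batch)
                  const af::array &Wz, const af::array &Uz, const af::array &bz,
                  const af::array &Wr, const af::array &Ur, const af::array &br,
                  const af::array &Wh, const af::array &Uh, const af::array &bh)
{
    unsigned batch = (unsigned)x.dims(1);
    // Update gate z and reset gate r
    af::array z = af::sigmoid(af::matmul(Wz, x) + af::matmul(Uz, h) + af::tile(bz, 1, batch));
    af::array r = af::sigmoid(af::matmul(Wr, x) + af::matmul(Ur, h) + af::tile(br, 1, batch));
    // Candidate state, built from the reset-gated previous state
    af::array hc = af::tanh(af::matmul(Wh, x) + af::matmul(Uh, r * h) + af::tile(bh, 1, batch));
    // Interpolate between the old state and the candidate
    return z * h + (1 - z) * hc;
}
```

Compared with an LSTM this needs three gate/candidate blocks instead of four and carries no separate cell state, which is why GRU is usually the easier first target.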

jramapuram commented 7 years ago

I suggest a simple char-rnn-type problem.

pavanky commented 7 years ago

@jramapuram @WilliamTambellini If you have specific examples in mind please let me know. Preferably implemented as an example in another ML toolkit already :)

pavanky commented 7 years ago

@jramapuram @WilliamTambellini I think I am going to target this example as a first step: https://github.com/pytorch/examples/tree/master/word_language_model

WilliamTambellini commented 7 years ago

Hi @pavanky, that sounds very good: the Penn Treebank dataset is quite small (about 5 MB) and training time shouldn't be long. Perfect for an example. Have you opted between Elman, GRU, or LSTM?

pavanky commented 7 years ago

@WilliamTambellini Will start with plain (Elman) RNNs first.
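A minimal, self-contained sketch of that starting point (unrolling a plain Elman RNN over a sequence with ArrayFire; the sizes and the random stand-in inputs are illustrative assumptions):

```cpp
#include <arrayfire.h>
#include <vector>

// One Elman step: h_t = tanh(Wx x_t + Wh h_{t-1} + b)
static af::array step(const af::array &x, const af::array &h,
                      const af::array &Wx, const af::array &Wh, const af::array &b)
{
    return af::tanh(af::matmul(Wx, x) + af::matmul(Wh, h) +
                    af::tile(b, 1, (unsigned)h.dims(1)));
}

int main()
{
    const int inSize = 32, hidden = 64, batch = 16, seqLen = 20;
    af::array Wx = af::randn(hidden, inSize) * 0.01;
    af::array Wh = af::randn(hidden, hidden) * 0.01;
    af::array b  = af::constant(0.0, hidden, 1);

    af::array h = af::constant(0.0, hidden, batch);
    std::vector<af::array> states;               // kept around for BPTT later
    for (int t = 0; t < seqLen; ++t) {
        af::array xt = af::randu(inSize, batch); // stand-in for embeddings
        h = step(xt, h, Wx, Wh, b);
        states.push_back(h);
    }
    af_print(states.back());
    return 0;
}
```

For the word-language-model example, the random xt would be replaced by embedding lookups over Penn Treebank tokens, with a softmax projection and cross-entropy loss on top.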