@Kismuz I have recently been engaged in looking for state-of-the-art publications in the RL field, hoping to find new, interesting, and promising directions for dealing with long, noisy, stochastic time series.
One concept caught my attention far above all others - Sparse Distributed Memory (SDM).
Borrowed from neuroscience, SDM was originally proposed as a model of how memory works in the brain. Without going into too much detail, the main property of this memory model is that it is associative, meaning past experiences are stored and recalled based on their similarity to other memories. Association is measured by the Hamming distance between memories, which in effect creates a clustered memory (I put a toy sketch of this below, after the list).
Other properties worth mentioning:
- learning is achieved online (without backpropagation) and is unsupervised (it forms clusters)
- very robust to high levels of noise
- very efficient use of memory
- deals well with high-dimensional input
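To make the associative part concrete, here is a minimal toy sketch of a Kanerva-style SDM: a pattern is written to, and read back from, all hard locations within a Hamming radius of the cue. All names and parameters here are my own illustrative choices, not taken from any of the papers below.

```python
import numpy as np

class ToySDM:
    """Minimal Kanerva-style Sparse Distributed Memory over binary vectors."""

    def __init__(self, n_locations=1000, dim=256, radius=115, seed=0):
        rng = np.random.default_rng(seed)
        # Hard locations: fixed random binary addresses.
        self.addresses = rng.integers(0, 2, size=(n_locations, dim))
        # Counters accumulate written patterns as +1/-1 votes.
        self.counters = np.zeros((n_locations, dim))
        self.radius = radius  # activation radius, in Hamming distance

    def _active(self, address):
        # A location fires when its address is within the Hamming radius of the cue.
        return np.sum(self.addresses != address, axis=1) <= self.radius

    def write(self, address, data):
        # Online, local update: no backpropagation involved.
        self.counters[self._active(address)] += np.where(data == 1, 1, -1)

    def read(self, address):
        # Majority vote across active locations reconstructs the pattern.
        return (self.counters[self._active(address)].sum(axis=0) > 0).astype(int)


# Store a pattern, then recall it from a corrupted cue.
sdm = ToySDM()
rng = np.random.default_rng(1)
pattern = rng.integers(0, 2, 256)
sdm.write(pattern, pattern)                     # autoassociative write
noisy = pattern.copy()
noisy[rng.choice(256, 25, replace=False)] ^= 1  # flip ~10% of the bits
recalled = sdm.read(noisy)
print((recalled == pattern).mean())             # should be close to 1.0
```

Note how recall from a corrupted cue still recovers the stored pattern - that is exactly the noise robustness mentioned in the list above.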
To make things even more interesting, our biological neurons have a predictive state (not just active/inactive) that creates temporal connections between associated memories.
The end result is a well-organized, clustered temporal memory that also shows hierarchical properties.
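The temporal part can be illustrated with an equally small toy (my own simplification in the spirit of those predictive states, not a biologically faithful model): memories that follow each other get linked, so recalling one puts its likely successors into a predictive state.

```python
from collections import defaultdict

class TemporalLinks:
    """Toy temporal linking between memory keys, learned online."""

    def __init__(self):
        self.links = defaultdict(lambda: defaultdict(int))
        self.prev = None

    def observe(self, memory_key):
        # Strengthen the link from the previously active memory to this one.
        if self.prev is not None:
            self.links[self.prev][memory_key] += 1
        self.prev = memory_key

    def predicted(self, memory_key):
        # Memories that most often follow `memory_key` enter a predictive state.
        successors = self.links[memory_key]
        return sorted(successors, key=successors.get, reverse=True)


tl = TemporalLinks()
for key in ["wake", "coffee", "work", "wake", "coffee", "gym"]:
    tl.observe(key)
print(tl.predicted("wake"))    # ['coffee'] - learned online, unsupervised
print(tl.predicted("coffee"))  # ['work', 'gym'] - both seen once after 'coffee'
```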
Over the last couple of years DeepMind has been researching this exact concept. A recent publication shows a real breakthrough: an external predictive memory model was presented - MERLIN #82
As far as I can tell, that was the first work in deep learning to combine SDM-based external memory with a predictive element (technical details, experiments, and a comparison to a naive LSTM are best explained in the paper). A cool feature of the predictive part is that after an action has been taken, a retrospective update to the memory is made with the rewards; this way, future predictions are made with respect to both state and expected rewards.
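As a loose sketch of that retrospective-update idea (my own toy simplification of how I read the paper, not MERLIN's actual algorithm): memories are written while the return is still unknown, and once rewards arrive, the already-written rows are revisited so that later reads reflect both state and expected return.

```python
import numpy as np

class RetroMemory:
    """Toy memory with retrospective reward updates (one write per timestep)."""

    def __init__(self, gamma=0.99):
        self.keys, self.values = [], []  # value row = [state features..., return]
        self.gamma = gamma

    def write(self, state):
        # At write time the return is unknown; store a placeholder of 0.
        self.keys.append(state)
        self.values.append(np.append(state, 0.0))

    def retrospective_update(self, rewards):
        # After acting, propagate discounted returns back into the stored rows.
        g = 0.0
        for t in range(len(rewards) - 1, -1, -1):
            g = rewards[t] + self.gamma * g
            self.values[t][-1] = g

    def read(self, query):
        # Cosine-similarity weighted read over memory (content-based lookup).
        K = np.stack(self.keys)
        sims = K @ query / (np.linalg.norm(K, axis=1) * np.linalg.norm(query) + 1e-8)
        w = np.exp(sims) / np.exp(sims).sum()
        return w @ np.stack(self.values)  # expected features + expected return


mem = RetroMemory()
episode = [np.random.randn(4) for _ in range(5)]
for s in episode:
    mem.write(s)
mem.retrospective_update(rewards=[0., 0., 0., 0., 1.])
print(mem.read(episode[0])[-1])  # the read now carries expected-return information
```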
I listed a few papers that show the evolution of research in this field (SDM-based external memory):
- Neural Turing Machines - NTM
- Hybrid computing using a neural network with dynamic external memory - DNC
- Generative Temporal Models with Memory
- The Kanerva Machine: A Generative Distributed Memory
- Scaling Memory-Augmented Neural Networks with Sparse Reads and Writes
- Implementing Neural Turing Machines
On GitHub, these projects are worth looking into: NTM, DNC
IMHO it looks like a very promising research direction.