memect / hao

好东西传送门

@timedcy @好东西传送门 Besides these deep neural networks whose structural parameters are fixed in advance, are there any networks that can automatically search for the optimal structure and learn its parameters during training? #273

Closed · haoawesome closed this issue 9 years ago

haoawesome commented 9 years ago

Comments

http://www.weibo.com/5220650532/BrLoXAaeH

haoawesome commented 9 years ago

Concepts

http://en.wikipedia.org/wiki/Deep_learning#Deep_neural_networks A deep neural network (DNN) is an artificial neural network with at least one hidden layer of units between the input and output layers. Similar to shallow ANNs, it can model complex non-linear relationships. The extra layers give it added levels of abstraction, thus increasing its modeling capability. DNNs are typically designed as feedforward networks, but recent research has successfully applied the deep learning architecture to recurrent neural networks for applications such as language modeling.
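A minimal sketch of the point in the definition above, assuming nothing beyond the quoted text: a single hidden layer between input and output lets the network model a non-linear relationship (XOR) that no purely linear mapping can. All sizes, the learning rate, and the step count here are illustrative choices, not from the thread.

```python
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1 = rng.normal(0, 1, (2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    h = np.tanh(X @ W1 + b1)    # the hidden layer supplies the non-linearity
    p = sigmoid(h @ W2 + b2)    # output probability
    # Backpropagate the cross-entropy gradient through both layers.
    dp = (p - y) / len(X)
    dW2 = h.T @ dp
    db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)   # tanh derivative
    dW1 = X.T @ dh
    db1 = dh.sum(0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int).ravel()
print(preds)
```

Dropping the hidden layer (a single sigmoid on `X`) cannot separate XOR, which is exactly the extra modeling capability the extra layer buys.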

http://en.wikipedia.org/wiki/Convolutional_neural_network In computer science, a convolutional neural network is a type of feed-forward artificial neural network where the individual neurons are tiled in such a way that they respond to overlapping regions in the visual field.[1] Convolutional networks were inspired by biological processes[2] and are variations of multilayer perceptrons which are designed to use minimal amounts of preprocessing.[3] They are widely used models for image recognition.[4]
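The "tiled, overlapping regions" phrase in the CNN definition can be made concrete with a tiny 1D cross-correlation (the operation CNN layers actually compute): one small shared filter slides across the input, so each output unit responds to an overlapping window. The signal and kernel values are arbitrary examples, not from the source.

```python
import numpy as np

signal = np.array([0., 1., 2., 3., 4., 5.])
kernel = np.array([1., 0., -1.])   # a simple difference (edge-detecting) filter

def conv1d_valid(x, w):
    """Valid-mode 1D cross-correlation: slide w over x with stride 1."""
    n = len(x) - len(w) + 1
    # Every output position applies the *same* weights to a shifted,
    # overlapping window -- weight sharing plus local receptive fields.
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

out = conv1d_valid(signal, kernel)
print(out)  # [-2. -2. -2. -2.]: the constant slope is detected at every window
```

Because the same three weights are reused at every position, the layer needs far fewer parameters than a fully connected one, which is the "minimal preprocessing" economy the article alludes to.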

haoawesome commented 9 years ago

http://www.sciencedirect.com/science/article/pii/S0307904X99000207

Applied Mathematical Modelling Volume 23, Issue 12, December 1999, Pages 933–944

Automatic structure and parameter training methods for modeling of mechanical systems by recurrent neural networks
C. James Li, Tung-Yung Huang

Automatic nonlinear-system identification is very useful for various disciplines including, e.g., automatic control, mechanical diagnostics and financial market prediction. This paper describes a fully automatic structural and weight learning method for recurrent neural networks (RNN). The basic idea is training with residuals, i.e., a single hidden neuron RNN is trained to track the residuals of an existing network before it is augmented to the existing network to form a larger and, hopefully, better network. The network continues to grow until either a desired level of accuracy or a preset maximal number of neurons is reached. The method requires no guessing of initial weight values or the number of neurons in the hidden layer from users. This new structural and weight learning algorithm is used to find RNN models for a two-degree-of-freedom planar robot, a Van der Pol oscillator and a Mackey–Glass equation using their simulated responses to excitations. The algorithm is able to find good RNN models in all three cases.
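The "training with residuals" idea in the abstract can be sketched as follows. The paper grows a recurrent network; this hedged illustration grows a plain feedforward sum of single tanh units instead, and replaces the paper's single-neuron training step with a grid search over (w, b) plus a closed-form least-squares amplitude, keeping only the structural idea: each new unit is fit to the current residual, then frozen and added, until a desired accuracy or a preset unit budget is reached. The target function and all thresholds are illustrative assumptions.

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 100)
# A synthetic "system response": a sum of two tanh units, so the grown
# network can in principle represent it exactly.
y = np.tanh(2.0 * x) + 0.5 * np.tanh(-x + 1.0)

def fit_unit(x, r):
    """Fit one unit a*tanh(w*x + b) to the residual r.

    Simplified stand-in for the paper's single-neuron training:
    grid-search (w, b), solve the amplitude a by least squares.
    """
    best = (np.inf, 0.0, 0.0, 0.0)
    for w in np.linspace(-3.0, 3.0, 25):
        for b in np.linspace(-3.0, 3.0, 25):
            h = np.tanh(w * x + b)
            denom = float(h @ h)
            if denom < 1e-12:
                continue
            a = float(h @ r) / denom              # optimal amplitude for this (w, b)
            err = float(np.mean((a * h - r) ** 2))
            if err < best[0]:
                best = (err, w, b, a)
    return best[1], best[2], best[3]

units = []
residual = y.copy()
for _ in range(10):                               # preset maximal number of neurons
    w, b, a = fit_unit(x, residual)
    units.append((w, b, a))                       # augment the existing network
    residual = residual - a * np.tanh(w * x + b)  # the next unit tracks what is left
    mse = float(np.mean(residual ** 2))
    if mse < 1e-3:                                # desired level of accuracy reached
        break

print(f"grew {len(units)} unit(s), final residual MSE = {mse:.2e}")
```

As in the paper, no initial weight guesses or hidden-layer size are needed from the user; the structure emerges from the stopping criteria.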

haoawesome commented 9 years ago

http://www.iro.umontreal.ca/~bengioy/papers/ftml_book.pdf Learning Deep Architectures for AI, Yoshua Bengio

haoawesome commented 9 years ago

http://ai.stanford.edu/~ang/papers/icml11-OptimizationForDeepLearning.pdf

On Optimization Methods for Deep Learning (ICML 2011)

haoawesome commented 9 years ago

https://www.cs.bham.ac.uk/~xin/papers/KhareYaoCEC05.pdf Co-evolutionary Modular Neural Networks for Automatic Problem Decomposition