Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs), ELU, by Johannes Kepler University, 2016 ICLR, Over 5000 Citations. Image Classification, Autoencoder, Activation Function, ReLU, Leaky ReLU.
The compared activation functions: the rectified linear unit (ReLU), the leaky ReLU (LReLU, α = 0.1), the shifted ReLU (SReLU), and the exponential linear unit (ELU, α = 1.0); a minimal sketch of these functions follows below.
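As a minimal sketch (not part of the original review), the four activations can be written in NumPy; the shifted ReLU is assumed here to take the max(−1, x) form from the paper, and the α values follow the caption above.

```python
import numpy as np

def relu(x):
    # Rectified linear unit: zero for negative inputs.
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.1):
    # Leaky ReLU: small linear slope alpha for negative inputs.
    return np.where(x > 0, x, alpha * x)

def shifted_relu(x):
    # Shifted ReLU (SReLU): ReLU shifted down so it saturates at -1
    # (assumed max(-1, x) form).
    return np.maximum(-1.0, x)

def elu(x, alpha=1.0):
    # ELU: identity for positive inputs, smooth saturation to -alpha
    # for negative inputs.
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

# Example: compare the activations on a few inputs.
x = np.array([-3.0, -1.0, -0.1, 0.0, 0.5, 2.0])
print(relu(x), leaky_relu(x), shifted_relu(x), elu(x), sep="\n")
```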
The ELU hyperparameter $\alpha$ controls the value to which an ELU saturates for negative net inputs:
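$$f(x) = \begin{cases} x & \text{if } x > 0 \\ \alpha \, (\exp(x) - 1) & \text{if } x \le 0 \end{cases}$$

With $\alpha = 1$, the ELU smoothly saturates to $-1$ for large negative inputs, unlike the ReLU, which is exactly zero there.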
(a): Median of the average unit activation for different activation functions. (b): Training cross-entropy loss.
ELUs maintain a smaller median unit activation than the other activation functions throughout the training process, and the training error of ELU networks decreases much more rapidly than that of the other networks.
Autoencoder training on MNIST: reconstruction error on the training and test data sets over epochs, using different activation functions and learning rates.
ELUs outperform the competing activation functions in terms of training / test set reconstruction error for all learning rates.
Comparison of ELU networks and other CNNs on CIFAR-10 and CIFAR-100.
ELU networks are the second best among the compared CNNs on CIFAR-10, with a test error of 6.55%, which is still among the top 10 results reported for CIFAR-10. On CIFAR-100, ELU networks perform best with a test error of 24.28%, the best published result on CIFAR-100 at the time.
Sik-Ho Tsang. Brief Review — Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs).