A simple autoencoder module for experimentation and dimensionality reduction. Supports automatic input scaling.
Install from npm:

```
npm install autoencoder
```
Then load it with `require`:

```js
const Autoencoder = require('autoencoder')
```
`Autoencoder` supports two ways of model initialization. Quickly, with a symmetric architecture generated from a few parameters:
```js
const ae = new Autoencoder({
  'nInputs': 10,
  'nHidden': 2,
  'nLayers': 2, // (default 2) - number of layers in each encoder/decoder
  'activation': 'relu' // (default 'relu') - applied to all but the last layer
})
```
Or explicitly, defining each encoder and decoder layer:

```js
const ae = new Autoencoder({
  'encoder': [
    {'nOut': 10, 'activation': 'tanh'},
    {'nOut': 2, 'activation': 'tanh'}
  ],
  'decoder': [
    {'nOut': 2, 'activation': 'tanh'},
    {'nOut': 10}
  ]
})
```
Activation functions: `relu`, `tanh`, `sigmoid`.
Like other neural networks, an autoencoder is very sensitive to input scaling. To make this easier, scaling is enabled by default; you can control it with an extra parameter `scale` that takes `true` or `false`.
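To see what automatic scaling protects you from, here is a minimal sketch of per-feature min-max scaling. This illustrates one common scaling approach and is an assumption for illustration, not necessarily the exact transform the library applies internally:

```js
// Scale each column of X to the [0, 1] range (min-max scaling).
// X is an array of rows; each row is an array of numbers.
function minMaxScale (X) {
  const nCols = X[0].length
  const mins = Array(nCols).fill(Infinity)
  const maxs = Array(nCols).fill(-Infinity)
  for (const row of X) {
    row.forEach((v, j) => {
      if (v < mins[j]) mins[j] = v
      if (v > maxs[j]) maxs[j] = v
    })
  }
  return X.map(row =>
    row.map((v, j) => (maxs[j] === mins[j]) ? 0 : (v - mins[j]) / (maxs[j] - mins[j]))
  )
}

// Columns with very different ranges end up on the same scale:
console.log(minMaxScale([[1, 100], [2, 200], [3, 300]]))
// → [ [ 0, 0 ], [ 0.5, 0.5 ], [ 1, 1 ] ]
```

Without such scaling, features with large numeric ranges would dominate the reconstruction loss.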
Train the model with `fit`:

```js
ae.fit(X, {
  'batchSize': 100,
  'iterations': 5000,
  'method': 'adagrad', // (default 'adagrad')
  'stepSize': 0.01
})
```
Optimization methods: `sgd`, `adagrad`, `adam`.
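For intuition on the default `adagrad` method: it divides each parameter's step size by the square root of its accumulated squared gradients, so frequently-updated parameters take smaller steps. A minimal sketch of one Adagrad update, for illustration only and not the package's internal implementation:

```js
// One Adagrad update for a parameter vector.
// params: current parameters; grads: gradient; cache: running sum of squared gradients (mutated).
function adagradStep (params, grads, cache, stepSize = 0.01, eps = 1e-8) {
  return params.map((p, i) => {
    cache[i] += grads[i] * grads[i]
    return p - stepSize * grads[i] / (Math.sqrt(cache[i]) + eps)
  })
}

const cache = [0, 0]
let params = [1.0, 2.0]
params = adagradStep(params, [0.5, -0.5], cache)
// Each parameter moves against its gradient with an adaptively scaled step
```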
Once trained, transform data with `encode`, `decode`, and `predict`:

```js
const Y = ae.encode(X)   // reduce dimensionality
const Xd = ae.decode(Y)
const Xp = ae.predict(X) // similar to ae.decode(ae.encode(X))
```
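To judge how much information the bottleneck loses, you can compare `X` with its reconstruction `Xp`. A sketch of a mean squared reconstruction error helper (a hypothetical helper for this purpose, not part of the package):

```js
// Mean squared error between an original dataset and its reconstruction.
// Both arguments are arrays of equal-length numeric rows.
function reconstructionMSE (X, Xp) {
  let sum = 0
  let count = 0
  for (let i = 0; i < X.length; i++) {
    for (let j = 0; j < X[i].length; j++) {
      const d = X[i][j] - Xp[i][j]
      sum += d * d
      count++
    }
  }
  return sum / count
}

console.log(reconstructionMSE([[0, 1], [2, 3]], [[0, 1], [2, 3]])) // → 0 (perfect reconstruction)
```

A lower value means the 2-dimensional bottleneck preserves more of the original 10-dimensional structure.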
Try the package in the browser on StatSim Vis. Choose a CSV file, change the dimensionality reduction method to Autoencoder, then click Run.