"We first devise a Deep Auto-Encoder (DAE) to discover hierarchical non-linear functional relations among regions, by which we transform the regional features into an embedding space, whose bases are complex functional networks"
3 mm isotropic MNI-space data, mapped to 116 ROIs (AAL) ... so the data is 2D (time points × ROIs)
They use their autoencoder similarly to how we plan to: train it as an AE, then use the encoder for dimensionality reduction
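As a concrete picture of that workflow, here is a minimal PyTorch sketch; the layer sizes, optimiser, and toy data are my own placeholders, not the paper's settings. Train on reconstruction loss, then keep only the encoder as the dimensionality-reduction map.

```python
import torch
import torch.nn as nn

# Toy stand-in for regional features: 500 samples x 116 AAL ROIs.
x = torch.randn(500, 116)

# Minimal symmetric autoencoder; widths are illustrative only.
encoder = nn.Sequential(nn.Linear(116, 64), nn.Sigmoid(),
                        nn.Linear(64, 16), nn.Sigmoid())
decoder = nn.Sequential(nn.Linear(16, 64), nn.Sigmoid(),
                        nn.Linear(64, 116))
model = nn.Sequential(encoder, decoder)

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):                       # train on reconstruction loss
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), x)
    loss.backward()
    opt.step()

# After training, discard the decoder and keep the encoder as the
# dimensionality-reduction map: 116-D regional features -> 16-D embedding.
with torch.no_grad():
    embedding = encoder(x)                 # shape: (500, 16)
```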
The paper taught me about the principles behind independently trained Restricted Boltzmann Machines (RBMs), explaining why they are effective for modelling networks, and how chaining them together yields an autoencoder. Very nice explanation!
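For reference, a minimal NumPy sketch of a single Bernoulli RBM trained with one-step contrastive divergence (CD-1); the sizes, learning rate, and toy binary data are illustrative, and the paper's exact training details may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One Bernoulli RBM trained with CD-1; dimensions are illustrative.
n_vis, n_hid, lr = 116, 64, 0.05
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis, b_hid = np.zeros(n_vis), np.zeros(n_hid)

v0 = (rng.random((200, n_vis)) > 0.5).astype(float)   # toy binary data

for _ in range(50):
    # Positive phase: hidden probabilities and samples given the data.
    ph0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: one Gibbs step back to visibles and up again.
    pv1 = sigmoid(h0 @ W.T + b_vis)
    ph1 = sigmoid(pv1 @ W + b_hid)
    # CD-1 approximate gradient updates.
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / len(v0)
    b_vis += lr * (v0 - pv1).mean(axis=0)
    b_hid += lr * (ph0 - ph1).mean(axis=0)
```

The hidden activations of one trained RBM become the "data" for the next RBM in the stack; that chaining of learned layers is what gives the deep encoder.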
They pre-train the encoder greedily, one layer at a time, then unfold the network, initializing the decoder with the same (tied) weights, and finally fine-tune the entire network with backprop
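A sketch of that three-step recipe, with the caveat that it substitutes shallow per-layer autoencoders for the paper's RBM pre-training (same greedy schedule); all sizes and hyperparameters are placeholders.

```python
import torch
import torch.nn as nn

x = torch.rand(500, 116)                  # toy stand-in for the regional features
sizes = [116, 64, 16]                     # layer widths are illustrative

# 1) Greedy pre-training: fit one shallow layer at a time on the previous
#    layer's output. (The paper uses RBMs here; per-layer autoencoders are a
#    common stand-in that follows the same greedy schedule.)
enc_layers, data = [], x
for d_in, d_out in zip(sizes, sizes[1:]):
    enc = nn.Sequential(nn.Linear(d_in, d_out), nn.Sigmoid())
    dec = nn.Linear(d_out, d_in)
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
    for _ in range(100):
        opt.zero_grad()
        nn.functional.mse_loss(dec(enc(data)), data).backward()
        opt.step()
    enc_layers.append(enc)
    data = enc(data).detach()             # feed activations to the next layer

# 2) Unfold: mirror the encoder, initializing each decoder layer with the
#    transpose of the corresponding pre-trained encoder weights.
dec_layers = []
for enc in reversed(enc_layers):
    lin = enc[0]
    mirrored = nn.Linear(lin.out_features, lin.in_features)
    with torch.no_grad():
        mirrored.weight.copy_(lin.weight.t())
    dec_layers.extend([mirrored, nn.Sigmoid()])
encoder = nn.Sequential(*enc_layers)
model = nn.Sequential(encoder, nn.Sequential(*dec_layers))

# 3) Fine-tune the whole unfolded network end-to-end with backprop.
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
for _ in range(100):
    opt.zero_grad()
    nn.functional.mse_loss(model(x), x).backward()
    opt.step()
```

After fine-tuning, the encoder is reused for dimensionality reduction exactly as in the first sketch above.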
URL: https://www.sciencedirect.com/science/article/pii/S1053811916000100
This paper does not...
Additional Notes?
Further Reading