greenelab / deep-review

A collaboratively written review paper on deep learning, genomics, and precision medicine
https://greenelab.github.io/deep-review/

Diet Networks: Thin Parameters for Fat Genomics #140

brettbj commented 7 years ago

http://openreview.net/pdf?id=Sk-oDY9ge

Learning tasks such as those involving genomic data often pose a serious challenge: the number of input features can be orders of magnitude larger than the number of training examples, making it difficult to avoid overfitting, even when using known regularization techniques. We focus here on tasks in which the input is a description of the genetic variation specific to a patient, the single nucleotide polymorphisms (SNPs), yielding millions of ternary inputs. Improving the ability of deep learning to handle such datasets could have an important impact in medical research, more specifically in precision medicine, where high-dimensional data regarding a particular patient is used to make predictions of interest. Even though the amount of data for such tasks is increasing, this mismatch between the number of examples and the number of inputs remains a concern. Naive implementations of classifier neural networks involve a huge number of free parameters in their first layer (number of input features times number of hidden units): each input feature is associated with as many parameters as there are hidden units. We propose a novel neural network parametrization which considerably reduces the number of free parameters. It is based on the idea that we can first learn or provide a distributed representation for each input feature (e.g. for each position in the genome where variations are observed in data), and then learn (with another neural network called the parameter prediction network) how to map a feature's distributed representation (based on the feature's identity, not its value) to the vector of parameters specific to that feature in the classifier neural network (the weights which link the value of the feature to each of the hidden units). This approach views the problem of producing the parameters associated with each feature as a multi-task learning problem. We show experimentally on a population stratification task of interest to medical studies that the proposed approach can significantly reduce both the number of parameters and the error rate of the classifier.

@agitter - an attempt at solving some of the wide data issues
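For anyone else trying to parse the architecture, here's a minimal sketch of the parametrization as I read it. To be clear, this is my reconstruction, not the authors' code: the layer sizes, tanh activations, and the assumption that per-SNP embeddings are handed in precomputed are all my placeholders (the paper derives embeddings from the data, e.g. per-class genotype histograms, and also has an optional autoencoder branch that I've left out):

```python
# Minimal sketch of the "diet" parametrization (my reading, not the
# authors' implementation). Assumes per-SNP embeddings are given.
import torch
import torch.nn as nn

class DietNet(nn.Module):
    def __init__(self, emb_dim, n_hidden, n_classes):
        super().__init__()
        # Parameter prediction network: maps each feature's embedding
        # (emb_dim,) to that feature's column of first-layer weights
        # (n_hidden,). Its free-parameter count is independent of the
        # number of SNPs.
        self.param_pred = nn.Sequential(
            nn.Linear(emb_dim, n_hidden),
            nn.Tanh(),
            nn.Linear(n_hidden, n_hidden),
        )
        self.bias = nn.Parameter(torch.zeros(n_hidden))
        self.classifier = nn.Linear(n_hidden, n_classes)

    def forward(self, x, feat_emb):
        # x: (batch, n_snps) ternary genotypes
        # feat_emb: (n_snps, emb_dim) per-feature representations
        W = self.param_pred(feat_emb)      # (n_snps, n_hidden), predicted
        h = torch.tanh(x @ W + self.bias)  # behaves like a dense first layer
        return self.classifier(h)
```

The point being: a naive first layer over hundreds of thousands of SNPs times ~100 hidden units would need tens of millions of free weights, whereas here the learned parameters live in `param_pred` and scale with `emb_dim` rather than with the number of SNPs.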

agitter commented 7 years ago

@brettbj Very interesting, this is definitely something to present. Which section do you envision for this paper? It could go in the Discussion sub-section about the wide matrix problem or be introduced in Categorize because their specific task is ancestry prediction.

I don't see any standard identifiers for this paper so we'll need a custom reference.

traversc commented 7 years ago

This is an interesting paper that reminds me (tangentially) of the Microsoft paper that won the 2015 ImageNet competition using "residual neural networks": https://arxiv.org/abs/1512.03385

I don't quite understand the "diet networks" yet, but the idea seems to be to augment the standard fully connected neural network in order to "ground" the model in something that is already well known.

The authors also compared against a model built from PCA-reduced data. However, I don't think this is a fair comparison, since the data they are using is not normally distributed.

They used a linear model built on 100 principal components and showed that it performed only 3% worse. How about other neural network approaches, or models built with more principal components?
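To make the question concrete, the kind of sweep I'd want to see is something like this (a sketch, not the paper's pipeline; `X` as a samples-by-SNPs genotype matrix with values 0/1/2, `y` as ancestry labels, and the component grid are all my stand-ins):

```python
# Sketch of the PCA + linear model baseline, sweeping the number of
# principal components (X, y, and the grid are hypothetical inputs).
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

def pca_baseline_scores(X, y, component_grid=(100, 200, 500)):
    """Cross-validated accuracy for each choice of n_components."""
    scores = {}
    for k in component_grid:
        model = make_pipeline(
            PCA(n_components=k),
            LogisticRegression(max_iter=1000),
        )
        scores[k] = cross_val_score(model, X, y, cv=5).mean()
    return scores
```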

brettbj commented 7 years ago

@traversc I've read it a few times, but I'm really waiting for the source code to be released; I don't think I could re-implement it from the paper alone.