greenelab / deep-review

A collaboratively written review paper on deep learning, genomics, and precision medicine
https://greenelab.github.io/deep-review/

Chemception: A Deep Neural Network with Minimal Chemistry Knowledge Matches the Performance of Expert-developed QSAR/QSPR Models #555

Closed: agitter closed this issue 6 years ago

agitter commented 7 years ago

https://arxiv.org/abs/1706.06689

In the last few years, we have seen the transformative impact of deep learning in many applications, particularly in speech recognition and computer vision. Inspired by Google's Inception-ResNet deep convolutional neural network (CNN) for image classification, we have developed "Chemception", a deep CNN for the prediction of chemical properties, using just the images of 2D drawings of molecules. We develop Chemception without providing any additional explicit chemistry knowledge, such as basic concepts like periodicity, or advanced features like molecular descriptors and fingerprints. We then show how Chemception can serve as a general-purpose neural network architecture for predicting toxicity, activity, and solvation properties when trained on a modest database of 600 to 40,000 compounds. When compared to multi-layer perceptron (MLP) deep neural networks trained with ECFP fingerprints, Chemception slightly outperforms in activity and solvation prediction and slightly underperforms in toxicity prediction. Having matched the performance of expert-developed QSAR/QSPR deep learning models, our work demonstrates the plausibility of using deep neural networks to assist in computational chemistry research, where the feature engineering process is performed primarily by a deep learning algorithm.
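To make the input representation concrete, here is a minimal sketch of turning a 2D molecule drawing into a CNN-ready array. It assumes RDKit for rendering; the SMILES string and the 80x80 greyscale size are illustrative choices, not necessarily the paper's exact preprocessing.

```python
# Minimal sketch (not the paper's exact pipeline): render a 2D molecule
# drawing as a fixed-size greyscale array of the kind Chemception consumes.
# Assumes RDKit is installed; the SMILES string is an arbitrary example.
import numpy as np
from rdkit import Chem
from rdkit.Chem import Draw

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")  # aspirin, for illustration
img = Draw.MolToImage(mol, size=(80, 80))          # PIL image of the 2D drawing
arr = np.asarray(img.convert("L"), dtype=np.float32) / 255.0
x = arr[np.newaxis, :, :, np.newaxis]              # (batch, height, width, channels)
print(x.shape)  # (1, 80, 80, 1)
```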

Upon a quick read, they don't seem to start with a pre-trained Inception model. That was a little surprising given how few training instances they have for some of the tasks. I'd have to look carefully to see if the evaluation strategies are directly comparable, but #538 may report better performance on Tox21.
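For reference, starting from a pre-trained Inception model (which the paper apparently does not do) might look like the following Keras sketch. The input shape, the frozen base, and the single regression head are assumptions for illustration only.

```python
# Sketch of transfer learning from a pre-trained Inception model in Keras.
# The paper appears to train from scratch instead; everything below
# (input shape, freezing, regression head) is an assumption.
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras import layers, models

base = InceptionV3(weights="imagenet", include_top=False,
                   input_shape=(80, 80, 3), pooling="avg")
base.trainable = False  # freeze ImageNet features for the small-data regime

model = models.Sequential([
    base,
    layers.Dense(1)  # e.g. a hypothetical solvation-energy regression head
])
model.compile(optimizer="adam", loss="mse")
```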

mrwns commented 7 years ago

They use data augmentation (flipped/rotated images). I am a bit skeptical about the "no additional explicit chemistry knowledge" claim, because the engines that render such images encode a lot of domain knowledge (bond lengths, symbols, angles, overlap detection, ...). Nevertheless, it's an interesting (and funny) study, which again highlights the power of conv nets to extract features from visual input.
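A rough sketch of that kind of flip/rotate augmentation, using Keras' ImageDataGenerator; the specific angles and flips below are assumptions, not the paper's reported settings.

```python
# Sketch of rotation/flip image augmentation with Keras.
# The parameter values are placeholders, not the paper's settings.
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(rotation_range=180,  # rotate up to +/-180 degrees
                               horizontal_flip=True,
                               vertical_flip=True,
                               fill_mode="constant", cval=0.0)

images = np.random.rand(16, 80, 80, 1).astype("float32")  # placeholder batch
labels = np.zeros(16, dtype="float32")

batches = augmenter.flow(images, labels, batch_size=8)
x_aug, y_aug = next(batches)
print(x_aug.shape)  # (8, 80, 80, 1)
```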

agitter commented 6 years ago

Closed by #774