Directly related to #52 and #53. Somewhat related to #45, #55, and #56. This preprint examines an autoencoder-based method for representing molecules with continuous values.
Abstract:
We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation. This generative model allows efficient search and optimization through open-ended spaces of chemical compounds. We train deep neural networks on hundreds of thousands of existing chemical structures to construct two coupled functions: an encoder and a decoder. The encoder converts the discrete representation of a molecule into a real-valued continuous vector, and the decoder converts these continuous vectors back to the discrete representation from this latent space. Continuous representations allow us to automatically generate novel chemical structures by performing simple operations in the latent space, such as decoding random vectors, perturbing known chemical structures, or interpolating between molecules. Continuous representations also allow the use of powerful gradient-based optimization to efficiently guide the search for optimized functional compounds. We demonstrate our method in the design of drug-like molecules as well as organic light-emitting diodes.
This paper is not specifically about biomedicine but is relevant to the virtual screening (#45) papers
The main objective is to generate a continuous representation of a chemical structure, which is inherently discrete
Common discrete chemical representations include fingerprints (bit vectors) or the SMILES text encoding, and recent work uses neural network graph convolutions (#52, #53) to generate continuous representations
The approach here applies a variational autoencoder to encode compounds into a continuous space and decode them back into discrete space (specifically, SMILES)
In the continuous space, it is easier to sample new compounds or optimize chemical properties, such as drug-likeness or the fluorescence-related properties relevant for OLEDs
The decoded chemicals are not always valid molecules
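The latent-space operations mentioned above (decoding random vectors, perturbing a known molecule, interpolating between two molecules) can be sketched with plain numpy. The latent dimension and the stand-in vectors below are assumptions for illustration; the paper's trained encoder and decoder are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 56  # hypothetical latent size; the paper's actual dimension may differ

# Stand-ins for the encoder output of two known molecules
z_a = rng.normal(size=latent_dim)
z_b = rng.normal(size=latent_dim)

# 1. Sample a random latent vector (decode it to get a novel structure)
z_random = rng.normal(size=latent_dim)

# 2. Perturb a known molecule by adding small Gaussian noise in latent space
z_perturbed = z_a + 0.1 * rng.normal(size=latent_dim)

# 3. Interpolate between two molecules by walking linearly between their vectors
steps = 5
path = [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, steps)]

assert np.allclose(path[0], z_a) and np.allclose(path[-1], z_b)
# Each resulting vector would be passed through the trained decoder to
# recover a SMILES string, which may or may not be a valid molecule.
```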
Computational aspects
The input is the SMILES representation of a chemical: a text encoding over a 35-character alphabet (including the space character used for padding), with a maximum length of 120 characters
The high-level network architecture is SMILES text -> encoder -> continuous representation -> decoder -> SMILES text
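The input featurization for this architecture can be sketched as a one-hot encoding of the SMILES string into a fixed 120 × 35 matrix. The character set below is a hypothetical stand-in; the paper's exact alphabet is not listed in these notes:

```python
import numpy as np

# Hypothetical 35-character alphabet (the paper's exact set is not reproduced
# here); the trailing space character doubles as padding
CHARSET = list("BCNOFSPclnos()[]=#@+-/\\123456789%H ")
assert len(CHARSET) == 35
MAX_LEN = 120
CHAR_TO_IDX = {c: i for i, c in enumerate(CHARSET)}

def one_hot_smiles(smiles: str) -> np.ndarray:
    """Encode a SMILES string as a (120, 35) one-hot matrix, space-padded."""
    padded = smiles.ljust(MAX_LEN)  # pad with spaces to the fixed length
    mat = np.zeros((MAX_LEN, len(CHARSET)), dtype=np.float32)
    for i, ch in enumerate(padded):
        mat[i, CHAR_TO_IDX[ch]] = 1.0
    return mat

x = one_hot_smiles("c1ccccc1")  # benzene
assert x.shape == (120, 35)
assert x.sum() == 120  # exactly one active character per position
```

The encoder would consume this matrix and emit a continuous latent vector; the decoder would map latent vectors back to per-position character probabilities.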
Independent of the autoencoder, they train sparse Gaussian process models to predict chemical properties in the new continuous feature space
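A minimal Gaussian process regression on latent vectors can be sketched as below. Note this is a dense GP, not the sparse variant the paper uses, and the latent dimension, kernel choice, and synthetic property values are all stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(A, B, length_scale=3.0):
    """Squared-exponential kernel between rows of A and rows of B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * sq_dists / length_scale**2)

# Stand-ins for encoded molecules (latent vectors) and a measured property
Z_train = rng.normal(size=(50, 8))   # 50 molecules in a hypothetical 8-d latent space
y_train = np.sin(Z_train[:, 0])      # synthetic property values
Z_test = rng.normal(size=(5, 8))

# GP posterior mean: K_* (K + sigma^2 I)^{-1} y
noise = 1e-4
K = rbf_kernel(Z_train, Z_train) + noise * np.eye(len(Z_train))
K_star = rbf_kernel(Z_test, Z_train)
alpha = np.linalg.solve(K, y_train)
y_pred = K_star @ alpha
assert y_pred.shape == (5,)
```

Because the predictor is smooth in the latent coordinates, its gradient with respect to a latent vector can guide the property optimization described above.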
They use Keras, but no code is provided
Why include it in the review
This is a good example of using autoencoders for representation learning
Other notes
There is a balanced discussion of the limitations of the current version:
The current version only trains on 100,000 to 250,000 compounds, and they are working on a distributed GPU implementation that can train on 100 million compounds
Training on many more compounds will improve the quality of the latent feature space
They acknowledge that training the autoencoder separately from the supervised model for the chemical properties could result in a suboptimal latent representation
https://arxiv.org/abs/1610.02415v1