choderalab / pinot

Probabilistic Inference for NOvel Therapeutics
MIT License

Sequential mix #88

Closed dnguyen1196 closed 4 years ago

dnguyen1196 commented 4 years ago

Allows for mixing different convolution layer types. The implementation can be found in pinot/representation/sequential.py. However, as a tradeoff for this increased flexibility, specifying the config is a little more verbose and has to follow a specific order. The config has to be specified like this:

["LayerType1", "value_for_param1", "value_for_param2", "LayerType2", "value_for_param1", ... ]

And the order of the parameters has to follow the order specified in pinot/representation/sequential.layer_param_type
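To make the format concrete, here is a minimal sketch of how such a flat config list could be grouped into (keyword, params) pairs. The `LAYER_PARAM_TYPES` table and `parse_config` helper below are illustrative stand-ins, not pinot's actual `layer_param_type` implementation, and the parameter meanings are assumptions based on the example below:

```python
# Hypothetical sketch: group a flat config list into (keyword, params) pairs.
# The type table mirrors the role of pinot/representation/sequential.layer_param_type
# but the entries here are illustrative assumptions, not pinot's actual values.
LAYER_PARAM_TYPES = {
    "GraphConv": [int],          # output dimension
    "GATConv": [int, int],       # output dimension, number of heads (assumed)
    "attention_pool": [str],     # e.g. "concat"
    "activation": [str],         # e.g. "tanh"
    "dropout": [float],          # dropout probability p
}

def parse_config(config):
    """Walk the flat list, consuming each keyword's parameters in order
    and casting the string values to their declared types."""
    parsed = []
    i = 0
    while i < len(config):
        key = config[i]
        types = LAYER_PARAM_TYPES[key]
        raw = config[i + 1 : i + 1 + len(types)]
        params = [t(v) for t, v in zip(types, raw)]
        parsed.append((key, params))
        i += 1 + len(types)
    return parsed

parse_config(["GraphConv", "128", "activation", "tanh", "dropout", "0.1"])
# -> [("GraphConv", [128]), ("activation", ["tanh"]), ("dropout", [0.1])]
```

Because every value arrives as a string, the same config can be passed straight through from a command-line argument list.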

Users can specify the config as a list of strings, which makes it possible to pass the config in from the command line. The code will automatically cast the parameter values to the right type. For example:

n_epochs = 50
config = [
    "GraphConv", "128",     # convolution type, output dimension
    "activation", "tanh",   # activation with type
    "dropout", "0.1",       # dropout with p
    "GATConv", "64", "3", "attention_pool", "concat", "activation", "tanh", "dropout", "0.1",
    "GraphConv", "128", "activation", "tanh", "dropout", "0.1",
]  # all strings, which can come from CLI arguments

# Output regressor
output_regressor_type = "ExactGaussianProcessRegressor"
embedding_dim = 64
generative_hidden_dim = 128

representation = SequentialMix(config)  # SequentialMix builds the mixed-layer representation
output_regressor = getattr(pinot.regressors, output_regressor_type)

# Decoder for VAE
decoder = pinot.generative.decoder.DecoderNetwork

net = SemiSupervisedNet(
    representation=representation,
    decoder=decoder,
    output_regressor=output_regressor,
    embedding_dim=embedding_dim,
    generative_hidden_dim=generative_hidden_dim,
    unsup_scale=0,  # if unsup_scale = 0., reduces to a supervised model
)

optimizer = torch.optim.Adam(net.parameters(), 1e-4, weight_decay=0.01)
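The `unsup_scale=0` remark above can be illustrated with a toy sketch of how a semi-supervised objective is typically blended. This is an assumption about the general pattern, not pinot's actual `SemiSupervisedNet` loss:

```python
# Illustrative sketch (not pinot's actual implementation): a scalar
# unsup_scale weights the unsupervised (generative) term against the
# supervised term in the total training objective.
def semi_supervised_loss(sup_loss, unsup_loss, unsup_scale):
    # With unsup_scale = 0.0 the generative term vanishes and the
    # objective reduces to the purely supervised loss.
    return sup_loss + unsup_scale * unsup_loss

semi_supervised_loss(0.5, 2.0, 0.0)  # -> 0.5
```

Setting the scale to zero is a convenient way to ablate the generative component while keeping the rest of the training loop unchanged.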
lgtm-com[bot] commented 4 years ago

This pull request introduces 1 alert when merging 6f046e6f1b483afa2839dbaeb07f90665401bcf9 into 31fe04671132717b0fbbebabbefe07fdaed81079 - view on LGTM.com

lgtm-com[bot] commented 4 years ago

This pull request introduces 1 alert when merging 39519c79952649a3a7739690209133b88f67c3cc into 53cb5b2c6f6c280f2ec84d0c69065b45a4c72200 - view on LGTM.com
