Got a preliminary SWWAE (stacked what-where autoencoder) going in the most recent release (1.0.1), but I'm facing a few issues.
Paper: http://arxiv.org/abs/1506.02351
The biggest one concerns what to do when the cost function has contributions that are tied to intermediate (and only intermediate) layers/values. In other words, is there support for adding intermediate objectives that are independent of supervised truth data?
To illustrate: when an output is supervised with some set of truth targets (regression or classification), we can pass the truth into the fitting procedure like so:
model.fit({'input': X_train}, {'output_supervised': Y_train})
When we are doing an autoencoding reconstruction of the input, (I think) we can do:
model.fit({'input': X_train}, {'L2_rec': X_train})
And then come the intermediate loss contributions that don't rely on any supervising data beyond the input image... I have hacked something together along the lines of the Siamese example by defining the following:
...
L2_intermediary = Lambda(euclidean_distance, output_shape=eucl_dist_output_shape)([encoder_midwaydown, decoder_midwayup])
...In other words, I am just taking the mean of the L2 norms of the differences between corresponding encoder and decoder intermediate layers. It works, but it's kind of ugly. Is there support for adding intermediate objectives independent of supervised truth data?
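
For concreteness, here is a minimal sketch of how that hack can be wired up end to end. The euclidean_distance / eucl_dist_output_shape helpers are copied from the Siamese example; the toy Dense encoder/decoder, the dummy all-zeros target, and the loss weights are illustrative stand-ins for the real SWWAE stacks (and it assumes the intermediate features are flat vectors):

import numpy as np
from keras import backend as K
from keras.models import Model
from keras.layers import Input, Dense, Lambda

# Helpers as in the Siamese example:
def euclidean_distance(vects):
    x, y = vects
    return K.sqrt(K.sum(K.square(x - y), axis=1, keepdims=True))

def eucl_dist_output_shape(shapes):
    shape1, shape2 = shapes
    return (shape1[0], 1)

# Toy stand-ins for the real encoder/decoder stacks (illustrative only):
inp = Input(shape=(784,), name='input')
encoder_midwaydown = Dense(128, activation='relu')(inp)
code = Dense(32, activation='relu')(encoder_midwaydown)
decoder_midwayup = Dense(128, activation='relu')(code)
L2_rec = Dense(784, name='L2_rec')(decoder_midwayup)

# The intermediate objective, exposed as a named model output so it can
# carry its own loss term:
L2_intermediary = Lambda(euclidean_distance,
                         output_shape=eucl_dist_output_shape,
                         name='L2_intermediary')([encoder_midwaydown, decoder_midwayup])

model = Model(input=inp, output=[L2_rec, L2_intermediary])

# MAE against an all-zeros dummy target reduces to the mean of the
# L2 norms themselves, which is the intermediate term described above.
# The 0.1 weight is an arbitrary illustrative choice:
model.compile(optimizer='adam',
              loss={'L2_rec': 'mse', 'L2_intermediary': 'mae'},
              loss_weights={'L2_rec': 1.0, 'L2_intermediary': 0.1})

X_train = np.random.rand(256, 784)  # placeholder data
model.fit({'input': X_train},
          {'L2_rec': X_train,
           'L2_intermediary': np.zeros((len(X_train), 1))},
          nb_epoch=1)

This works because the zero target makes the MAE on that output equal to the mean distance itself, but it still burns an extra model output and a dummy target array, which is the ugliness in question.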

I uncovered this same issue recently. Have you had any more thoughts about it? I was able to convert the model to TensorFlow without much work. Perhaps we can swap notes.