Closed tik0 closed 5 years ago
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
@tik0 Have you found out anything on this one? Have you been able to use AutoKeras to create an Autoencoder or did you use something different?
I was trying to create an LSTM Autoencoder, but couldn't figure out how to do it. Do you have a clue now?
@daviembrito I ended up using a generic hyperparameter optimizer with a simple, handcrafted, chain-structured search space. It worked reasonably well in my use case, but it also introduces a large human bias, since the search space of possible network architectures is quite limited.
You can have a look at it here:
https://github.com/maechler/a2e
https://github.com/maechler/a2e/blob/master/experiments/automl/deep_easing_feed_forward_dropout.py
https://github.com/maechler/a2e/blob/master/a2e/model/keras/_feed_forward.py#L111
https://github.com/maechler/a2e/blob/master/a2e/model/keras/_lstm.py#L6
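The chain-structured search space mentioned above can be sketched in simplified, pure-Python form. The helper name and the concrete layer-size options below are made up for illustration; the idea is just to enumerate symmetric encoder/decoder layouts whose layer widths strictly shrink towards the bottleneck, so every candidate handed to the optimizer is a valid autoencoder by construction:

```python
from itertools import product

def bottleneck_search_space(input_dim, layer_options, bottleneck_options, depth):
    """Enumerate symmetric autoencoder layouts.

    Encoder widths must strictly decrease towards the bottleneck;
    the decoder mirrors the encoder back up to input_dim.
    """
    for sizes in product(layer_options, repeat=depth):
        for bottleneck in bottleneck_options:
            chain = (input_dim,) + sizes + (bottleneck,)
            # Keep only strictly decreasing chains (enforces the bottleneck).
            if all(a > b for a, b in zip(chain, chain[1:])):
                encoder = list(sizes) + [bottleneck]
                decoder = list(sizes)[::-1] + [input_dim]
                yield encoder + decoder

# All candidate layer-width sequences for a 64-dimensional input.
candidates = list(bottleneck_search_space(64, [32, 16], [8, 4], depth=1))
```

Each candidate (e.g. `[32, 8, 32, 64]`) is then just a list of Dense/LSTM layer widths that a generic hyperparameter optimizer can choose between.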
Feature Description
Training of predefined models with constraints (e.g. bottleneck layers) and additional losses.
Reason
It is unclear how to use AutoKeras on models with particular constraints, like Autoencoders or Bottleneck-Networks. Furthermore, the Variational Autoencoder requires an additional regularizer and a sampling layer.
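To make the VAE constraint concrete, here is a minimal NumPy sketch of the two pieces that are hard to express in a plain architecture search: the sampling (reparameterization) step and the KL regularizer added to the loss. The function names are illustrative, not AutoKeras API:

```python
import numpy as np

def sample_latent(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
    # so gradients can flow through mu and log_var.
    eps = rng.standard_normal(np.shape(mu))
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior,
    # summed over the latent dimensions; added to the reconstruction loss.
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)
```

An AutoML system would have to know that this extra loss term and the sampling layer are fixed parts of the model, not searchable components.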
Solution
Some API that lets the user define model constraints first, before the search starts.
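One way such an API could look is a declarative constraint object passed to the search. Everything below is hypothetical; none of these names exist in AutoKeras today:

```python
from dataclasses import dataclass, field

@dataclass
class ModelConstraints:
    """Hypothetical constraint spec an AutoML search could honor.

    bottleneck_dim: maximum width of the narrowest layer.
    symmetric: whether the decoder must mirror the encoder.
    extra_losses: fixed loss terms (e.g. a KL regularizer for a VAE).
    """
    bottleneck_dim: int
    symmetric: bool = True
    extra_losses: list = field(default_factory=list)

# E.g. constraints for a variational autoencoder search:
constraints = ModelConstraints(bottleneck_dim=8, extra_losses=["kl_divergence"])
```

The search would then only generate architectures satisfying the spec, instead of the user post-filtering candidates.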
Alternative Solutions
-
Additional Context
-