Closed vineetjohn closed 5 years ago
The current implementation is pretty rigid in its requirement that a sentence be fed to the model to transform, as opposed to simply sampling from the latent space and generating.
This change might need a whole lot of re-structuring in the model class. https://github.com/vineetjohn/linguistic-style-transfer/blob/master/linguistic_style_transfer_model/models/adversarial_autoencoder.py
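For reference, generation from latent space would roughly amount to sampling a content vector from the prior and concatenating it with a learned style embedding before decoding, instead of encoding an input sentence. A minimal numpy sketch of that idea (all names, dimensions, and the concatenation scheme here are hypothetical, not taken from the repo):

```python
import numpy as np

rng = np.random.default_rng(0)

CONTENT_DIM = 128  # hypothetical sizes, not the repo's actual config
STYLE_DIM = 8

def sample_content_prior(batch_size, dim=CONTENT_DIM):
    # Sample content vectors from a standard Gaussian prior,
    # rather than encoding a source sentence.
    return rng.standard_normal((batch_size, dim))

def build_generation_inputs(style_embedding, batch_size=4):
    # Condition on one fixed style embedding plus freshly sampled
    # content; in the real model this vector would seed the RNN
    # decoder's initial state.
    content = sample_content_prior(batch_size)
    style = np.tile(style_embedding, (batch_size, 1))
    return np.concatenate([content, style], axis=1)

positive_style = rng.standard_normal(STYLE_DIM)
batch = build_generation_inputs(positive_style)
print(batch.shape)  # (4, 136)
```

The point is that the decoder path no longer depends on an encoder output at all, which is why the model class would need restructuring.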
During the initial stab at the implementation, I noticed that using either negative or positive style embeddings with a sampled content vector yields mostly positive sentences in generation mode.
Turns out this was due to a coding error: I was overriding the conditioning style embedding during generation.
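The class of bug described above, conditioning on a caller-supplied style embedding but then silently overwriting it before decoding, can be illustrated like this (a contrived sketch with hypothetical names, not the repo's actual code):

```python
import numpy as np

rng = np.random.default_rng(1)

negative_style = rng.standard_normal(8)
positive_style = rng.standard_normal(8)

def build_decoder_conditioning(style_embedding, content):
    # BUG: the argument is replaced, so every generated sentence is
    # conditioned on the positive style regardless of what was passed in.
    style_embedding = positive_style
    return np.concatenate([content, style_embedding])

content = rng.standard_normal(128)
neg_cond = build_decoder_conditioning(negative_style, content)
pos_cond = build_decoder_conditioning(positive_style, content)
print(np.allclose(neg_cond, pos_cond))  # True — both conditioned identically
```

This explains the symptom exactly: both "styles" produced mostly positive sentences because the decoder only ever saw one style vector.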
For posterity: verified that the normal inference mode of transforming sentence style labels is working fine.
Tested the style accuracy of generated sentences; it's 0.987.