Closed: gunawanlg closed this issue 4 years ago
Add an attention layer, then a biLSTM layer after the attention.
Train with a nohup configuration on a Google Colab GPU.
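A minimal Keras sketch of this idea, attention followed by a biLSTM, assuming dot-product self-attention; the feature dimension, LSTM width, and output vocabulary size below are illustrative assumptions, not the project's actual configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Shapes and sizes are assumptions for illustration only.
inputs = layers.Input(shape=(None, 128))      # (time steps, acoustic features)
attn = layers.Attention()([inputs, inputs])   # dot-product self-attention over the sequence
x = layers.Bidirectional(                     # biLSTM placed after the attention layer
    layers.LSTM(64, return_sequences=True))(attn)
outputs = layers.TimeDistributed(             # per-frame class scores (e.g. character vocab)
    layers.Dense(29, activation="softmax"))(x)
model = Model(inputs, outputs)
```

The `Attention()([inputs, inputs])` call uses the same tensor as query and value, so its output keeps the input's time and feature dimensions and feeds directly into the biLSTM.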
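The exact nohup invocation isn't recorded here; a typical detached launch in a Colab cell looks like the following (the script name and flags are hypothetical):

```shell
# Run training in the background so it survives cell interruption;
# train.py and --epochs are placeholders for the actual entry point.
nohup python train.py --epochs 50 > train.log 2>&1 &
echo "training PID: $!"
```

Progress can then be followed with `tail -f train.log` from another cell.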
Closing this issue, as transfer learning (#46) already showed decent results. New issue #70 was created to further enhance the results using the state of the art (2019) in ASR.