wxjiao / AGHMN

Implementation of the paper "Real-Time Emotion Recognition via Attention Gated Hierarchical Memory Network" in AAAI-2020.

Using AGHMN model for Korean dataset #4

so-hyeun opened this issue 3 years ago

so-hyeun commented 3 years ago

Hello, I want to use this model to do emotion recognition in conversation on the Korean dataset.

If I use word embeddings pre-trained in Korean, can I apply the Korean dataset to this model?

wxjiao commented 3 years ago

> Hello, I want to use this model to do emotion recognition in conversation on the Korean dataset.
>
> If I use word embeddings pre-trained in Korean, can I apply the Korean dataset to this model?

Hi! Yes, the model can be applied to any dataset. However, you may need to write your own function for assigning the pre-trained embeddings to your vocabulary, since the format of your embeddings may differ from those of Word2Vec and GloVe. In fact, the Word2Vec and GloVe formats themselves differ, which is why we have separate loading functions for them. Just modify parts of those functions and it should work. Thanks!
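As a rough illustration (not code from this repo), the main format difference is that Word2Vec text files start with a header line `<vocab_size> <dim>` while GloVe files do not. A loader for a custom Korean embedding file might look like this sketch; the function names and the random-initialization scheme for out-of-vocabulary words are assumptions, not the repo's actual implementation:

```python
import numpy as np

def load_text_embeddings(lines):
    """Parse embeddings from an iterable of text lines.

    Handles both GloVe-style files (no header) and Word2Vec text
    files, whose first line is "<vocab_size> <dim>".
    """
    vectors = {}
    for i, line in enumerate(lines):
        parts = line.rstrip().split()
        # Skip a Word2Vec-style header line such as "400000 300".
        if i == 0 and len(parts) == 2 and all(p.isdigit() for p in parts):
            continue
        word, vals = parts[0], parts[1:]
        vectors[word] = np.asarray(vals, dtype=np.float32)
    return vectors

def assign_embeddings(vocab, vectors, dim):
    """Build an embedding matrix for `vocab` (word -> row index).

    Words missing from the pre-trained file keep small random vectors
    (one common choice; the repo may initialize differently).
    """
    rng = np.random.default_rng(0)
    matrix = rng.uniform(-0.1, 0.1, size=(len(vocab), dim)).astype(np.float32)
    for word, idx in vocab.items():
        if word in vectors:
            matrix[idx] = vectors[word]
    return matrix
```

If your Korean embeddings use a different layout (e.g. a binary format), only `load_text_embeddings` needs to change; the assignment step stays the same.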

so-hyeun commented 3 years ago

Thanks for the quick reply.

so-hyeun commented 3 years ago

I have one more question. Currently, batch_size is set to 1 by default. Is it possible to train with a larger batch_size? Since batch_size is only defined in 'EmoMain.py', it doesn't seem to be used in other parts of the code.

Thank you.

wxjiao commented 3 years ago

> I have one more question. Currently, batch_size is set to 1 by default. Is it possible to train with a larger batch_size? Since batch_size is only defined in 'EmoMain.py', it doesn't seem to be used in other parts of the code.
>
> Thank you.

Hi. A batch_size of 1 corresponds to one conversation, which already contains multiple utterances and their corresponding labels. You can still use a larger effective batch by accumulating the gradients of "batch_size" conversations before updating the weights.
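To make the accumulation idea concrete, here is a minimal sketch using a toy linear model in plain NumPy rather than the repo's actual training loop; names like `accum_steps` are illustrative. In real PyTorch code the same pattern is `loss.backward()` per conversation, then `optimizer.step()` and `optimizer.zero_grad()` every `accum_steps` conversations.

```python
import numpy as np

def sgd_with_accumulation(w, conversations, lr=0.1, accum_steps=4):
    """Update the weight once per `accum_steps` conversations.

    Each "conversation" here stands in as a single (x, y) pair for a
    toy model y_hat = w * x with squared loss; the per-conversation
    gradient is summed and averaged before one weight update.
    """
    grad_sum, count = 0.0, 0
    for x, y in conversations:
        y_hat = w * x
        grad_sum += 2.0 * (y_hat - y) * x  # d/dw of (w*x - y)^2
        count += 1
        if count == accum_steps:
            w -= lr * grad_sum / accum_steps  # one update for the "batch"
            grad_sum, count = 0.0, 0
    if count:  # flush a final partial batch
        w -= lr * grad_sum / count
    return w
```

Averaging the accumulated gradient (rather than summing) keeps the update magnitude comparable to true mini-batch training, so the same learning rate can be reused.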