I can't understand this graph. In the first input layer, is the sequence "Anh roi EU" (Vietnamese for "UK leaves the EU") processed word by word, with each word passed through the word-embedding layer and then fed to the LSTM? Or is the word "Anh" first transformed into [Anh, pos_tagging, chunk_tagging, regex_tagging], with that feature set passed through the word-embedding layer and into the LSTM, while the words "roi" and "EU" only contribute to updating the model parameters?
Can you answer and explain in more detail?
I think it's quite straightforward if you take a look at create_vector_data in his source code (utils.py).
He extended the word embedding with one-hot POS and chunk data, so each token's input vector to the LSTM is its word embedding concatenated with those one-hot features.
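As a rough sketch of that idea (not the author's actual create_vector_data; the tag vocabularies, embedding size, and function names here are assumptions), concatenating one-hot POS and chunk vectors onto each word embedding could look like:

```python
import numpy as np

# Hypothetical tag vocabularies; the real code would build these from the training data.
pos2idx = {"Np": 0, "V": 1, "Ny": 2}    # POS tag -> index
chunk2idx = {"B-NP": 0, "B-VP": 1}      # chunk tag -> index

def one_hot(idx, size):
    """Return a one-hot vector of length `size` with a 1 at position `idx`."""
    v = np.zeros(size, dtype=np.float32)
    v[idx] = 1.0
    return v

def make_token_vector(word_emb, pos_tag, chunk_tag):
    """Concatenate the word embedding with one-hot POS and chunk features,
    so each LSTM timestep receives one extended vector per token."""
    return np.concatenate([
        word_emb,                                       # e.g. 300-dim pretrained embedding
        one_hot(pos2idx[pos_tag], len(pos2idx)),        # one-hot POS feature
        one_hot(chunk2idx[chunk_tag], len(chunk2idx)),  # one-hot chunk feature
    ])

# Example: the token "Anh" with its POS and chunk tags.
emb = np.random.rand(300).astype(np.float32)  # stand-in for a looked-up embedding
vec = make_token_vector(emb, "Np", "B-NP")
print(vec.shape)  # (305,) = 300 embedding dims + 3 POS tags + 2 chunk tags
```

Under this reading, your second interpretation is closer: every word in "Anh roi EU" gets its own extended vector, and the LSTM consumes the sequence of those vectors one timestep at a time.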