datamix-seminar / nlp-seminar


【Archive】 #1

Open ktrw1011 opened 4 years ago

ktrw1011 commented 4 years ago

Archive of past sessions

ktrw1011 commented 4 years ago

Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation https://docs.google.com/presentation/d/1G1a00E8CRORi0FxDf_KDfaSg6nCOqNUftD6mfhZAkdQ/edit?usp=sharing

riodeja5 commented 4 years ago

seq2seq/Attention Why-What! https://docs.google.com/presentation/d/15OCGUn4g_mc2LfRwVsNEWRJHDQWNcqxuEENJy1v09bU/edit?usp=sharing