ktrw1011 opened this issue 4 years ago
Fixed Encoder Self-Attention Patterns in Transformer-Based Machine Translation https://docs.google.com/presentation/d/1G1a00E8CRORi0FxDf_KDfaSg6nCOqNUftD6mfhZAkdQ/edit?usp=sharing
seq2seq/Attention Why-What! https://docs.google.com/presentation/d/15OCGUn4g_mc2LfRwVsNEWRJHDQWNcqxuEENJy1v09bU/edit?usp=sharing
Archive of past sessions