ExplorerFreda / Structured-Self-Attentive-Sentence-Embedding
An open-source implementation of the paper "A Structured Self-Attentive Sentence Embedding" (Lin et al., ICLR 2017).
GNU General Public License v3.0 · 432 stars · 97 forks
Issues
#12 · This is Weired · YooSungHyun · closed 2 years ago · 0 comments
#11 · Why is attention applied on the outputs instead of hidden states? · prerit2010 · opened 5 years ago · 1 comment
#10 · Model in Figure 1 · vhientran · opened 5 years ago · 0 comments
#9 · pretrain model · FuyuWang · opened 5 years ago · 0 comments
#8 · bool value of Tensor with more than one value is ambiguous · Henry-Jia · closed 5 years ago · 1 comment
#7 · global pooling layer · pinkfloyd06 · opened 6 years ago · 0 comments
#6 · About the dimension of input to bilstm · kenchan0226 · closed 6 years ago · 2 comments
#5 · About GLOVE model · jx00109 · opened 6 years ago · 1 comment
#4 · Penalty Term Frobenius Norm Squared · Shuailong · opened 7 years ago · 6 comments
#3 · word vectors, visualizing attention · andreasvc · opened 7 years ago · 2 comments
#2 · from models import * · cclauss · closed 7 years ago · 1 comment
#1 · from __future__ import print_function for Python 3 · cclauss · closed 7 years ago · 0 comments
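Several of the issues above (#4 on the squared Frobenius-norm penalty, #11 on what the attention is applied to) concern the paper's core mechanism. As a point of reference, here is a minimal NumPy sketch of that mechanism, not the repository's actual code: the annotation matrix A = softmax(W_s2 tanh(W_s1 H^T)) over BiLSTM outputs H, the sentence embedding M = A H, and the penalty ||A A^T − I||_F^2. The shapes and the names W_s1, W_s2, d_a, r follow the paper's notation; the function names are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def structured_self_attention(H, W_s1, W_s2):
    """Sketch of the paper's attention (hypothetical helper, not the repo's API).

    H:    (n, 2u)  BiLSTM outputs for a sentence of n tokens
    W_s1: (d_a, 2u), W_s2: (r, d_a)  learned projections
    Returns M: (r, 2u) sentence embedding and A: (r, n) annotation matrix.
    """
    # Each of the r rows of A is a softmax distribution over the n tokens.
    A = softmax(W_s2 @ np.tanh(W_s1 @ H.T), axis=-1)
    M = A @ H
    return M, A

def frobenius_penalty(A):
    """||A A^T - I||_F^2: pushes the r attention rows toward diversity (issue #4)."""
    r = A.shape[0]
    return float(np.sum((A @ A.T - np.eye(r)) ** 2))
```

For example, with n = 5 tokens, 2u = 8, d_a = 4, and r = 3 hops, each row of A sums to 1 over the tokens, and the penalty is 0 only when the rows of A are orthonormal, i.e. the hops attend to disjoint single tokens.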