Closed wuchlei closed 5 years ago
The problem is not only the sequence length. Even with the sequence length handled, this code does not let the cells at the same depth layer share their weight parameters across time steps.
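To illustrate the weight-sharing point: in PyTorch, parameters are shared across time steps only if the same module instance is reused at every step. A minimal sketch of that pattern (this ConvGRUCell is a hypothetical single-layer cell written for illustration, not the repo's code):

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Hypothetical single ConvGRU cell (illustration only, not the repo's code)."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        p = k // 2
        # One conv produces both update (z) and reset (r) gates.
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=p)
        # Candidate hidden state.
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=p)

    def forward(self, x, h):
        zr = torch.sigmoid(self.gates(torch.cat([x, h], dim=1)))
        z, r = zr.chunk(2, dim=1)
        n = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * n + z * h

cell = ConvGRUCell(in_ch=8, hid_ch=16)
x = torch.randn(1, 3, 8, 64, 64)        # batch, time, channels, H, W
h = torch.zeros(1, 16, 64, 64)
for t in range(x.size(1)):
    h = cell(x[:, t], h)                # the SAME parameters apply at every t
```

Because `cell` is one instance, every time step updates the same `gates` and `cand` weights; creating a new cell per step (or per time index inside the model) would break that sharing.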
I have the same question: Is this implementation a stacked ConvGRU, i.e. is n_layer actually the number of cells stacked on top of each other? Then how are you taking the sequence length into account? I only see one for loop (over n_layer), so I'm curious to know. Cheers
Is this implementation a stacked ConvGRU, i.e. is n_layer actually the number of cells stacked on top of each other?
Yes.
Then how are you taking the sequence length into account? I only see one for loop (over n_layer), so I'm curious to know.
I pass in elements of the sequence in the training loop. Something like:
import torch

model = ConvGRU()
# Batch, Time, Channels, Height, Width
x = torch.FloatTensor(1, 3, 8, 64, 64)

outputs = []
for t in range(x.size(1)):          # iterate over the time dimension
    out = model(x[:, t, :, :, :])   # one time step per forward pass
    outputs.append(out)
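For completeness, here is a self-contained sketch of how a stacked model (the n_layers loop inside forward, over depth) combines with the caller's loop over time. StackedConvGRU and its simplified tanh recurrence are assumptions for illustration, not the repo's actual gating or API:

```python
import torch
import torch.nn as nn

class StackedConvGRU(nn.Module):
    """Hypothetical stacked recurrent model (simplified tanh recurrence,
    not full GRU gating). Depth is handled inside forward(); time is
    handled by the caller, one step per forward() call."""
    def __init__(self, in_ch, hid_ch, n_layers):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv2d((in_ch if i == 0 else hid_ch) + hid_ch,
                      hid_ch, 3, padding=1)
            for i in range(n_layers)
        )
        self.hid_ch = hid_ch
        self.hidden = None  # per-layer hidden states, kept between steps

    def forward(self, x):
        b, _, hgt, wid = x.shape
        if self.hidden is None:
            self.hidden = [torch.zeros(b, self.hid_ch, hgt, wid)
                           for _ in self.convs]
        inp = x
        for i, conv in enumerate(self.convs):  # loop over DEPTH, not time
            self.hidden[i] = torch.tanh(
                conv(torch.cat([inp, self.hidden[i]], dim=1)))
            inp = self.hidden[i]               # feed next layer up
        return inp

model = StackedConvGRU(in_ch=8, hid_ch=16, n_layers=2)
x = torch.randn(1, 3, 8, 64, 64)  # batch, time, channels, H, W
outputs = [model(x[:, t]) for t in range(x.size(1))]
```

Each call to `model(...)` advances all stacked layers by one time step while reusing the same parameters, which is why no time loop appears inside the model itself.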
But the input is actually a sequence of items, so it seems your implementation isn't right?