[Closed] williamFalcon closed this issue 6 years ago
@koz4k
basically something like (pseudocode):
# batch = 7, steps=12, dim=100
x = torch.randn(7, 12, 100)
# Linear way
out_linear = x.view(-1, 100)             # (84, 100)
out_linear = linear(out_linear)          # say fc maps from 100 -> 200 in dim
out_linear = out_linear.view(7, 12, 200)
# Conv way
conv_out = x.permute(0, 2, 1)            # (7, 100, 12): Conv1d wants (batch, channels, steps)
conv_out = conv1d(conv_out)              # maps 100 -> 200 channels
conv_out = conv_out.permute(0, 2, 1)     # back to (7, 12, 200)
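A minimal runnable sketch of the above, assuming `linear` is `nn.Linear(100, 200)` and `conv1d` is `nn.Conv1d(100, 200, kernel_size=1)` (these layer choices are my assumption; the thread doesn't pin down the kernel size). With shared weights, the two paths produce the same output:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(7, 12, 100)  # (batch, steps, dim)

linear = nn.Linear(100, 200)
conv = nn.Conv1d(100, 200, kernel_size=1)

# Share weights so both paths compute the same map:
# Conv1d weight is (200, 100, 1); Linear weight is (200, 100).
with torch.no_grad():
    conv.weight.copy_(linear.weight.unsqueeze(-1))
    conv.bias.copy_(linear.bias)

# Linear way: fold batch and steps together, apply fc, reshape back.
out_linear = linear(x.view(-1, 100)).view(7, 12, 200)

# Conv way: Conv1d expects (batch, channels, steps), so permute in and out.
out_conv = conv(x.permute(0, 2, 1)).permute(0, 2, 1)

print(torch.allclose(out_linear, out_conv, atol=1e-5))  # True
```

So a kernel_size=1 Conv1d is the same pointwise map as a Linear applied per timestep; the permutes only shuffle which axis the layer treats as the feature dimension.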
Yes, it should work this way. The permute is there so that it works with Conv1d.
@koz4k ok awesome... thanks!
In the FrameLevel forward you guys do:
Are the comments I added correct?
I'd like to use a Linear layer instead of the Conv1d first, just for understanding. However, the dimensions don't line up when I do it that way. Any thoughts on how to reframe this in terms of a Linear layer?
I assume the transposes you do are so that the convolutions work out? Is that standard when using Conv1d instead of a Linear layer?