Closed: herleeyandi closed this issue 6 years ago
Hi, length should be a variable containing the length of each sequence in a batch.
Note that the line 262 is equivalent to the following statement:
torch.lt(time, length).float().unsqueeze(1).expand_as(h_next).
The last expand_as seems unnecessary, though, since recent versions of PyTorch support broadcasting.
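To illustrate the point about broadcasting, here is a minimal sketch (the shapes and values are hypothetical, chosen only for the example): the (batch, 1) mask multiplies the (batch, hidden) states directly, with or without the explicit expand.

```python
import torch

# Hypothetical shapes for illustration: batch of 3 sequences, hidden size 4.
h_next = torch.randn(3, 4)        # candidate next hidden states
length = torch.tensor([2, 3, 4])  # per-sequence lengths
time = 3                          # current time step

# Explicit expand, as in the statement quoted above:
mask_expanded = (time < length).float().unsqueeze(1).expand_as(h_next)

# With broadcasting, the (3, 1) mask multiplies the (3, 4) states directly:
mask = (time < length).float().unsqueeze(1)

assert torch.equal(mask * h_next, mask_expanded * h_next)
print(mask.squeeze(1).tolist())  # [0.0, 0.0, 1.0]
```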
Is time a scalar variable? Here is my debug result. It still gives me that error. I am using PyTorch v0.2; do you know how to do it in PyTorch v0.2?
Here is the result when I use torch.lt(time, length).float().unsqueeze(1).expand_as(h_next).
Why is length a tuple of multiple variables? It should be a 1-D integer variable containing the lengths.
In pytorch v0.2, I get the following result:
In [19]: length = Variable(torch.LongTensor([2, 3, 4]))
In [20]: time = 3
In [21]: mask = (time < length).float().unsqueeze(1)
In [22]: mask
Out[22]:
Variable containing:
0
0
1
[torch.FloatTensor of size 3x1]
And the torch.lt statement seems to emit an error; my oversight.
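For what it's worth, a sketch of how the same mask can be written without hitting that error (assuming, as above, that length is a 1-D tensor of per-sequence lengths): torch.lt expects a tensor as its first argument, so one can reverse the comparison with torch.gt, or simply use the operator form.

```python
import torch

length = torch.tensor([2, 3, 4])  # per-sequence lengths (example values)
time = 3                          # current time step

# torch.lt needs a tensor first argument, so pass `length` first via
# torch.gt, or use the operator form, which handles the scalar for us:
mask_a = torch.gt(length, time).float().unsqueeze(1)
mask_b = (time < length).float().unsqueeze(1)

assert torch.equal(mask_a, mask_b)
print(mask_a.squeeze(1).tolist())  # [0.0, 0.0, 1.0]
```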
Oh, my bad, I forgot to comment out one line in my code. Thank you so much. One star for this repository.
I had a similar problem. May I suggest changing the name of the variable "length"? By itself, it is not very clear to me that it is an array with the maximum length of each string. Or we might just add a comment. Anyway, I enjoyed your implementation! :)
What do you mean in line 262?
mask = (time < length).float().unsqueeze(1).expand_as(h_next)
I just got the error AttributeError: 'bool' object has no attribute 'float', since (time < length) returns True.