mlverse / torch

R Interface to Torch
https://torch.mlverse.org

dimension of torch tensor drops to zero when selecting a single element of a torch tensor #1164

Open gavril0 opened 1 month ago

gavril0 commented 1 month ago

Selecting a single element of a tensor yields a tensor of size and dimension 0!

Let's define a torch tensor (size does not matter):

> x <- torch_tensor(1)
> x
torch_tensor
 1
[ CPUFloatType{1} ]

and let's select a single element:

> x[1]
torch_tensor
 1
[ CPUFloatType{} ]

Its size is integer(0) (an empty vector) and its dimension is 0:

> x[1]$size()
integer(0)
> x[1]$dim()
[1] 0

This is not consistent with length():

> length(x[1])
[1] 1

It is also not consistent with the dimension of the tensor obtained when creating a tensor with a single element (see above).

This always happens unless one uses drop=FALSE:

 > x[1,drop=FALSE]
torch_tensor
 1
[ CPUFloatType{1} ]

Should the dimension really drop to zero in this case?

What are the rules for 0d and 1d tensors in R torch?

dfalbel commented 1 month ago

Unlike R, torch supports scalar values (0d tensors). This distinction is required in some specific situations in torch.

Torch scalar tensors don't have dimensions, by definition, thus it's consistent that $size() returns an empty vector and $dim() returns 0.

It's also consistent with length(), which is equivalent to $numel(), the number of elements in the tensor.
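For illustration, a quick check of that equivalence (a sketch, assuming the torch package is installed; the values shown match the outputs reported earlier in this issue):

```r
library(torch)

x <- torch_tensor(1)   # 1d tensor of shape {1}
s <- x[1]              # 0d (scalar) tensor

# length() matches $numel(): both count elements, ignoring dimensionality
length(s)   # 1
s$numel()   # 1
s$dim()     # 0
```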

Since R doesn't really have scalar values, IMO it's consistent that, just as 1 is identical to c(1), torch_tensor(1) is identical to torch_tensor(c(1)), even though I recognize this is debatable.

The default value of drop is TRUE, and torch can still drop one dimension, so we drop it. I'm open to discussing whether this is the best behavior. FWIW, there are many other functions in torch that return scalar values, such as torch_max(), torch_mean(), etc.
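To illustrate the point about reductions, a minimal sketch (assuming the torch package is installed) showing that they return 0d tensors, just like x[1] above:

```r
library(torch)

y <- torch_tensor(c(1, 2, 3))

# Reductions over the whole tensor return 0d (scalar) tensors
torch_mean(y)$dim()   # 0
torch_max(y)$dim()    # 0

# To recover a plain R value from a scalar tensor, use $item()
torch_mean(y)$item()  # 2
```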

Do you have a specific use case where returning a scalar is causing problems for you?

gavril0 commented 1 month ago

The use case is indexing a multi-dimensional tensor without drop=FALSE: the dimension of the resulting tensor depends on whether the index selects one element or several.

I stumbled on this when implementing a seq2seq model with attention in R, inspired by a Python tutorial. In my implementation of the decoder (you can see the code here), I use an index (b_index) that selects the sentences in the batch that have not yet terminated. When computing the loss, input should be a 2d tensor and target a 1d tensor:

  loss <- nnf_nll_loss(input=state$prob$squeeze(2), target=target_padded[b_index,i]$view(length(b_index)), 
      reduction=loss_reduction) # ignore_index = padding

In the current implementation, state$prob is a 3d tensor (b_index_size, 1, output_size) that contains the probability distribution over target tokens for the current decoding step; target_padded is a 2d tensor (batch_size, max_target_len) that contains the tokens of the target sentences; target_padded[b_index,i] selects the tokens of the unterminated sentences for the current decoding step (i goes from 1 to max_target_len). It is a 1d tensor because the second dimension is dropped, as expected. The problem occurs when there is a single element in b_index, in which case the dimension drops to 0. It is not a huge issue because I can always reshape the vector, with view() for example (I cannot use drop=FALSE because that would prevent the 2nd dimension from dropping down to 1d).
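A minimal reproduction of this behaviour (a sketch, assuming the torch package is installed; the shapes are stand-ins, not the real model's):

```r
library(torch)

# Stand-in for target_padded: batch_size = 3, max_target_len = 4
target_padded <- torch_tensor(matrix(1:12, nrow = 3, ncol = 4))

b_index <- c(1, 3)                    # two unterminated sentences
d2 <- target_padded[b_index, 2]$dim() # 1 -> 1d tensor, as expected

b_index <- c(2)                       # a single unterminated sentence
d1 <- target_padded[b_index, 2]$dim() # 0 -> drops all the way to 0d

# Workaround from the loss computation above: force a 1d shape with view()
dv <- target_padded[b_index, 2]$view(length(b_index))$dim() # 1
```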

Actually, I am not sure what the best behaviour is. One would expect dimension(s) to drop when one selects a single element in a multi-dimensional array. I guess the issue is whether to stop dropping dimension(s) at a 1d array, for consistency with R, or to follow Python. What is weird is that x <- torch_tensor(1) is 1d but x[1] is 0d. The R interface to torch's 0d (scalar) tensors is not clear (what is the standard way to create a 0d torch tensor from R?), and the problem is compounded by the fact that scalars and 0d tensors don't exist in R.
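For the record, one way to obtain a 0d tensor explicitly from R (a sketch, assuming the torch package is installed; this is not necessarily the canonical way):

```r
library(torch)

# torch_tensor(1) gives a 1d tensor of shape {1} ...
x <- torch_tensor(1)
x$dim()             # 1

# ... squeezing away the size-1 dimension yields a 0d scalar tensor
s <- x$squeeze()
s$dim()             # 0

# Full reductions also produce 0d tensors directly
torch_sum(x)$dim()  # 0
```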