getkeops / keops

KErnel OPerationS, on CPUs and GPUs, with autodiff and without memory overflows
https://www.kernel-operations.io
MIT License

`LogSumExp` reduction gives different result dimension with and without weight #191

Closed chloesrcb closed 2 years ago

chloesrcb commented 3 years ago

Why does the `logsumexp` reduction in pyKeOps give results of different dimensions with and without a weight? Below is an example with all weights equal to 1: we expect the same result, but this is not the case.

```python
import numpy as np
from pykeops.numpy import LazyTensor as LazyTensor_np

x = np.array([[2.5, 1.5, 3.5], [.5, 2., 4.7]])
y = np.array([[2., 6., 9.], [3., 0.6, 5.]])
w = np.array([[1., 1., 1.], [1., 1., 1.]])

# LazyTensors
x_i = LazyTensor_np(x[:, None, :])
y_j = LazyTensor_np(y[None, :, :])
w_j = LazyTensor_np(w[None, :, :])

V_ij = (x_i - y_j)**2
S_ij = V_ij.sum()

# without weight
print(S_ij.logsumexp(0))
# [[50.75000082]
#  [ 8.30678261]]

# with weights all equal to 1
print(S_ij.logsumexp(0, weight=w_j))
# [[50.75000082 50.75000082 50.75000082]
#  [ 8.30678261  8.30678261  8.30678261]]
```

@AmelieVernay and @chloesrcb

joanglaunes commented 2 years ago

Hello @chloesrcb and @AmelieVernay, this is in fact the expected behaviour: in your example `w_j` is 3-dimensional, whereas `S_ij = ||x_i - y_j||^2` is a scalar. The output of a weighted `logsumexp` inherits the dimension of the weight, so each of the three (identical) components of `w_j` yields its own log-sum-exp, which is why you get three identical columns instead of one.
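To make the broadcasting concrete, here is a plain-NumPy sketch of what the weighted reduction computes in this example (my own reimplementation for illustration, not KeOps internals): the weight's dimension carries over to the output.

```python
import numpy as np

x = np.array([[2.5, 1.5, 3.5], [.5, 2., 4.7]])
y = np.array([[2., 6., 9.], [3., 0.6, 5.]])
w = np.array([[1., 1., 1.], [1., 1., 1.]])

# S[i, j] = ||x_i - y_j||^2, a scalar for each (i, j) pair
S = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)

# Unweighted log-sum-exp over i: one scalar per j -> shape (2, 1)
lse = np.log(np.exp(S).sum(0))[:, None]

# Weighted version: log(sum_i exp(S_ij) * w_j). The weight is
# 3-dimensional, so the result has shape (2, 3): with w = 1,
# every column repeats the unweighted value.
wlse = np.log((np.exp(S)[:, :, None] * w[None, :, :]).sum(0))

print(lse)    # [[50.75...] [8.3067...]]
print(wlse)   # same values, repeated in 3 columns
```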

joanglaunes commented 2 years ago

Here is the updated script with a scalar weight equal to 1, giving the same results:

```python
import numpy as np
from pykeops.numpy import LazyTensor as LazyTensor_np

x = np.array([[2.5, 1.5, 3.5], [.5, 2., 4.7]])
y = np.array([[2., 6., 9.], [3., 0.6, 5.]])
w = np.array([[1.], [1.]])  # scalar weight for each j

# LazyTensors
x_i = LazyTensor_np(x[:, None, :])
y_j = LazyTensor_np(y[None, :, :])
w_j = LazyTensor_np(w[None, :, :])

V_ij = (x_i - y_j)**2
S_ij = V_ij.sum()

# without weight
print(S_ij.logsumexp(0))
# [[50.75000082]
#  [ 8.30678261]]

# with scalar weights all equal to 1
print(S_ij.logsumexp(0, weight=w_j))
# [[50.75000082]
#  [ 8.30678261]]
```