rusty1s / pytorch_sparse

PyTorch Extension Library of Optimized Autograd Sparse Matrix Operations
MIT License
1.01k stars 147 forks

Elementwise multiplication #343

Closed AmosDinh closed 1 month ago

AmosDinh commented 1 year ago

Hello, is there any way to do element-wise matrix multiplication with your library? Thank you very much!

rusty1s commented 1 year ago

Yes, this should work already, e.g., sparse_mat * sparse_mat or sparse_mat * dense_mat.
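
To illustrate the semantics: an elementwise (Hadamard) product of two sparse matrices only keeps coordinates stored in both operands, since a product with an implicit zero is zero. Here is a minimal pure-Python sketch of that behavior using plain dicts (an illustration only, not the torch_sparse API):

```python
# Hedged sketch: elementwise (Hadamard) product of two COO sparse matrices,
# represented as dicts mapping (row, col) -> value. Only coordinates present
# in BOTH inputs survive, because x * 0 == 0.

def coo_elementwise_mul(a, b):
    """a, b: dicts mapping (row, col) -> value."""
    return {ij: a[ij] * b[ij] for ij in a.keys() & b.keys()}

A = {(0, 0): 1.0, (0, 2): 2.0, (1, 1): 4.0, (2, 0): 1.0, (2, 1): 3.0}
B = {(0, 1): 2.0, (0, 2): 3.0, (1, 2): 1.0, (2, 1): 2.0, (2, 2): 4.0}

C = coo_elementwise_mul(A, B)
print(sorted(C.items()))  # [((0, 2), 6.0), ((2, 1), 6.0)]
```

Note that the result is at most as dense as the sparser operand.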

AmosDinh commented 1 year ago

This is great, thanks so much.

AmosDinh commented 1 year ago

What am I missing here? Does it only support 1xN matrices? Here is the error:

TypeError: SparseTensor.size() missing 1 required positional argument: 'dim'

The library code seems to be:

def mul(src: SparseTensor, other: torch.Tensor) -> SparseTensor:
    rowptr, col, value = src.csr()
    if other.size(0) == src.size(0) and other.size(1) == 1:  # Row-wise...
        other = gather_csr(other.squeeze(1), rowptr)
    elif other.size(0) == 1 and other.size(1) == src.size(1):  # Col-wise...
        other = other.squeeze(0)[col]
    else:
        raise ValueError(
            f'Size mismatch: Expected size ({src.size(0)}, 1, ...) or '
            f'(1, {src.size(1)}, ...), but got size {other.size()}.')
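
The excerpt above only handles two broadcast shapes for other: an (N, 1) column vector, expanded to one factor per stored nonzero via gather_csr, or a (1, M) row vector indexed by col. A SparseTensor passed as other falls outside both branches, and its size() requires a dim argument, which is consistent with the TypeError shown. A pure-Python sketch of the row-wise branch, with a stand-in for gather_csr (an illustration, not the torch_scatter implementation):

```python
# Hedged sketch of row-wise broadcasting over a CSR matrix, mimicking what
# the mul() excerpt does with gather_csr. This is plain Python, not the
# actual torch_scatter kernel.

def gather_csr(row_vals, rowptr):
    # Expand one value per row into one value per stored nonzero.
    out = []
    for r in range(len(rowptr) - 1):
        out.extend([row_vals[r]] * (rowptr[r + 1] - rowptr[r]))
    return out

# CSR of a 3x3 matrix with nonzeros at (0,0), (0,2), (1,1), (2,0), (2,1)
rowptr = [0, 2, 3, 5]
col = [0, 2, 1, 0, 1]
value = [1.0, 2.0, 4.0, 1.0, 3.0]

row_scale = [10.0, 20.0, 30.0]           # shape (3, 1): one factor per row
per_nnz = gather_csr(row_scale, rowptr)  # [10.0, 10.0, 20.0, 30.0, 30.0]
scaled = [v * s for v, s in zip(value, per_nnz)]
print(scaled)  # [10.0, 20.0, 80.0, 30.0, 90.0]
```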

My code:

import torch
device='cuda'
dtype = torch.float64
from torch_sparse import SparseTensor
rowA = torch.tensor([0, 0, 1, 2, 2], device=device)
colA = torch.tensor([0, 2, 1, 0, 1], device=device)
valueA = torch.tensor([1, 2, 4, 1, 3], dtype=dtype, device=device)
A = SparseTensor(row=rowA, col=colA, value=valueA)

rowB = torch.tensor([0, 0, 1, 2, 2], device=device)
colB = torch.tensor([1, 2, 2, 1, 2], device=device)
valueB = torch.tensor([2, 3, 1, 2, 4],  dtype=dtype, device=device)
B = SparseTensor(row=rowB, col=colB, value=valueB)

C = A * B

Thanks for your help!

Jamy-L commented 1 year ago

Hi, I am encountering the exact same issue. I tried to work around it by concatenating rows, cols, and values, then coalescing with the "mul" op, but sadly this operation is not implemented in torch_scatter for CSR.

It looks like other is mistakenly treated as a vanilla PyTorch Tensor on line 23, even though it's a SparseTensor.
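
The coalesce-with-"mul" workaround described above can be sketched in plain Python (an illustration of the semantics, not torch_sparse code). The subtlety is that coordinates occurring only once, i.e. present in just one of the two matrices, must be dropped, because the missing entry is an implicit zero and the true product there is zero. This sketch assumes each input matrix is already coalesced, so a coordinate can appear at most twice:

```python
# Hedged sketch: concatenate the COO triples of two (already coalesced)
# matrices, reduce duplicate coordinates by multiplication, and drop
# coordinates that were stored in only one of the inputs.

def coalesce_mul(rows, cols, vals):
    prod, count = {}, {}
    for r, c, v in zip(rows, cols, vals):
        prod[(r, c)] = prod.get((r, c), 1.0) * v
        count[(r, c)] = count.get((r, c), 0) + 1
    # keep only coordinates stored in both inputs
    return {ij: v for ij, v in prod.items() if count[ij] == 2}

rows = [0, 0, 1] + [0, 1, 1]   # A's coords followed by B's coords
cols = [0, 1, 0] + [1, 0, 1]
vals = [2.0, 3.0, 4.0] + [5.0, 6.0, 7.0]
print(coalesce_mul(rows, cols, vals))  # {(0, 1): 15.0, (1, 0): 24.0}
```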

rusty1s commented 1 year ago

This op was implemented in https://github.com/rusty1s/pytorch_sparse/pull/323, but it has not been released yet. Let me create a new release ASAP.

Xparx commented 7 months ago

Hi,

I'm running into a problem trying to do sparse * dense elementwise multiplication. I think it may be related to how I create the sparse tensor. I sample from a larger sparse tensor and then concatenate the rows: a = cat(b, 0), where b is a list of single-row sparse matrices.

Now if I try to do a * a.to_dense(), I get the following error:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[81], line 1
----> 1 a * a.to_dense()

File ~/anaconda3/envs/env_240206/lib/python3.12/site-packages/torch_sparse/mul.py:128, in <lambda>(self, other)
    124         value = other
    125     return src.set_value_(value, layout=layout)
--> 128 SparseTensor.mul = lambda self, other: mul(self, other)
    129 SparseTensor.mul_ = lambda self, other: mul_(self, other)
    130 SparseTensor.mul_nnz = lambda self, other, layout=None: mul_nnz(
    131     self, other, layout)

File ~/anaconda3/envs/env_240206/lib/python3.12/site-packages/torch_sparse/mul.py:32, in mul(src, other)
     30     other = other.squeeze(0)[col]
     31 else:
---> 32     raise ValueError(
     33         f'Size mismatch: Expected size ({src.size(0)}, 1, ...) or '
     34         f'(1, {src.size(1)}, ...), but got size {other.size()}.')
     36 if value is not None:
     37     value = other.to(value.dtype).mul_(value)

ValueError: Size mismatch: Expected size (12, 1, ...) or (1, 74203, ...), but got size torch.Size([12, 74203]).

Am I doing something weird here?

Xparx commented 7 months ago

Looking at the source code, it seems this only works for vectors and sparse matrices (I just noticed the comment above)?

What is the best way to do sparse * dense elementwise multiplication between matrices?

Is this a safe, efficient alternative?

c = a.to_dense()
a.mul_nnz(c[a.coo()[0], a.coo()[1]], layout='coo')
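
The idea behind that line, gathering the dense entries at the stored coordinates and multiplying them into the nonzero values, can be sketched in plain Python (an illustration, not the torch_sparse mul_nnz implementation). The result keeps exactly the sparsity pattern of the sparse input, so nothing is densified:

```python
# Hedged sketch: elementwise sparse * dense that keeps the sparse layout,
# by gathering dense entries at the sparse matrix's stored coordinates.

def sparse_mul_dense(rows, cols, vals, dense):
    # The result has the same sparsity pattern as the sparse input.
    return [v * dense[r][c] for r, c, v in zip(rows, cols, vals)]

rows = [0, 0, 2]
cols = [1, 2, 0]
vals = [2.0, 3.0, 4.0]
dense = [[1.0, 10.0, 100.0],
         [0.0, 0.0, 0.0],
         [5.0, 6.0, 7.0]]

print(sparse_mul_dense(rows, cols, vals, dense))  # [20.0, 300.0, 20.0]
```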

rusty1s commented 7 months ago

If you wanna do sparse + dense, then why not convert the sparse matrix to a dense matrix? The result will be dense anyway. Would this work for you?

Xparx commented 7 months ago

I solved it that way for now, thank you for the response and suggestion. I had thought it would be more efficient not to densify. In my case the result of sparse.mul(dense) would itself be sparse, since elements without data are assumed to be zero, so the result would have the same density as the sparse matrix.

rusty1s commented 7 months ago

If you don't want to convert to dense, you can also just do


# scatter-add the sparse entries of a into a dense copy of b
row, col, value = a.coo()
out = b.clone()
out[row, col] += value

github-actions[bot] commented 1 month ago

This issue had no activity for 6 months. It will be closed in 2 weeks unless there is some new activity. Is this issue already resolved?