uwsampl / SparseTIR

SparseTIR: Sparse Tensor Compiler for Deep Learning
https://sampl.cs.washington.edu/SparseTIR/
Apache License 2.0

[Bug] Can SparseTIR run GAT end-to-end directly? #100

Open Ed-gong opened 1 month ago

Ed-gong commented 1 month ago

The paper and project look very interesting to me, but there are a few points that confuse me; I have listed my questions below.

Questions

  1. In the example/spmm folder, the Python code evaluates the kernel for unweighted SpMM, which is used in GCN (the corresponding DGL kernel is `dgl.ops.copy_u_sum(g, x)`). Is there any code to test the weighted SpMM used in the GAT case? For example, DGL provides a weighted SpMM via `update_all(fn.u_mul_e('ft', 'a', 'm'), fn.sum('m', 'o'))`. Does SparseTIR provide a similar kernel, and how can we compare their kernel performance? (See the sketch after this list for what I mean by weighted vs. unweighted.)

  2. Is there any code in this repo that can run GAT end-to-end directly?

  3. For GCN, the paper says it was integrated into a framework for end-to-end training. Could you provide more information about this integration, such as which framework is used, DGL or PyG?

  4. The paper says that format decomposition is applied to SpMM only. Could we apply it to SDDMM as well and evaluate the resulting kernel running time?
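
For context, here is a rough sketch (not code from this repo) of what I mean by unweighted vs. weighted SpMM in DGL; the random graph, feature sizes, and the fused `u_mul_e_sum` op are just illustrative assumptions:

```python
import dgl
import dgl.function as fn
import torch

g = dgl.rand_graph(1000, 20000)      # random graph, just for illustration
x = torch.randn(g.num_nodes(), 64)   # node features
w = torch.randn(g.num_edges(), 1)    # per-edge weights (e.g. attention scores)

# Unweighted SpMM (GCN-style aggregation), as in the example/spmm benchmark:
y_unweighted = dgl.ops.copy_u_sum(g, x)

# Weighted SpMM (GAT-style aggregation) via message passing:
with g.local_scope():
    g.ndata["ft"] = x
    g.edata["a"] = w
    g.update_all(fn.u_mul_e("ft", "a", "m"), fn.sum("m", "o"))
    y_weighted = g.ndata["o"]

# Fused equivalent, if your DGL version provides it:
# y_weighted = dgl.ops.u_mul_e_sum(g, x, w)
```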

Looking forward to your response. Thank you.

yzh119 commented 1 month ago

For Q3: the end-to-end evaluations are available at https://github.com/uwsampl/sparsetir-artifact. Regarding Q1 and Q2: yes, the same technique also applies to weighted SpMM, and SparseTIR can be used for GAT if you use the weighted SpMM and SDDMM kernels generated by SparseTIR. However, I don't have the bandwidth to do this at the moment.

Regarding Q4: yes, composable formats should also apply to SDDMM.
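
To make the decomposition concrete, here is a minimal sketch in plain PyTorch (not SparseTIR code) of GAT aggregation as an SDDMM that computes per-edge attention logits, followed by a weighted SpMM that aggregates neighbor features. The scatter-based implementation and the function/argument names are assumptions for illustration; the two sparse steps are what SparseTIR-generated SDDMM and weighted-SpMM kernels would replace.

```python
import torch
import torch.nn.functional as F

def gat_aggregate(row, col, el, er, h, num_nodes):
    """GAT aggregation over a COO graph with edges row[k] -> col[k] (sketch)."""
    # SDDMM step: attention logit per edge, e_ij = LeakyReLU(el[i] + er[j]),
    # computed only on the nonzeros of the adjacency matrix.
    logits = F.leaky_relu(el[row] + er[col], negative_slope=0.2)

    # Edge softmax over the incoming edges of each destination node
    # (DGL exposes this as dgl.ops.edge_softmax).
    num = torch.exp(logits - logits.max())
    den = torch.zeros(num_nodes).index_add_(0, col, num)
    alpha = num / den[col]

    # Weighted SpMM step: out[j] = sum over edges i -> j of alpha_ij * h[i].
    out = torch.zeros(num_nodes, h.shape[1]).index_add_(0, col, alpha.unsqueeze(1) * h[row])
    return out
```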