-
Hi Tao,
Just noticed that you created a FedGraph repo a few years ago.
We are developing the FedGraph library, https://github.com/FedGraph/fedgraph.
You're welcome to join us if you are still intere…
-
# On the Global Self-attention Mechanism for Graph Convolutional Networks [[Wang+, 20](https://arxiv.org/abs/2010.10711)]
## Abstract
- Applies global self-attention (GSA) to GCNs
- GSA allows GCNs…
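The abstract is cut off here, but the core idea is clear enough for a rough sketch: let a GCN's node embeddings attend to all other nodes globally, rather than only to graph neighbours. Below is a minimal illustration of that combination, assuming one dense-adjacency GCN layer followed by residual multi-head self-attention; this is my reading of the idea, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GCNWithGSA(nn.Module):
    """One GCN propagation step followed by global self-attention (sketch)."""
    def __init__(self, d, heads=4):
        super().__init__()
        self.lin = nn.Linear(d, d)                              # GCN feature transform
        self.gsa = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, x, adj_norm):
        # x: (N, d) node features; adj_norm: (N, N) normalized adjacency
        h = torch.relu(adj_norm @ self.lin(x))                  # local GCN step
        g, _ = self.gsa(h.unsqueeze(0), h.unsqueeze(0), h.unsqueeze(0))
        return h + g.squeeze(0)                                 # residual global attention
```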
-
From https://github.com/pytorch/pytorch/pull/133065#issuecomment-2288701447. Basically, there was a noticeable performance drop on the inference side after bumping up the HF pin, [dashboard](https://…
-
### Description
This spike aims to draft the initial architecture of Kyma Companion agents, focusing on high-priority capabilities, their communication, their interaction with external systems, and ensurin…
-
### Please describe the purpose of the feature. Is it related to a problem?
I am inquiring about the possibility of integrating JAX-based Graph Neural Networks (GNNs) into MAVA for use in multi-agent reinforcement learning (MARL). Many MARL algor…
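For context, here is a minimal sketch of the kind of JAX building block such a request implies: a permutation-equivariant message-passing layer over per-agent features (libraries like `jraph` provide full-featured versions). All names, shapes, and the mean aggregation are illustrative assumptions, not MAVA's API.

```python
import jax
import jax.numpy as jnp

def gnn_layer(params, node_feats, adj):
    # node_feats: (num_agents, d) per-agent features
    # adj: (num_agents, num_agents) 0/1 communication mask
    msgs = node_feats @ params["w_msg"]                              # per-agent messages
    agg = adj @ msgs / jnp.maximum(adj.sum(-1, keepdims=True), 1.0)  # mean over neighbours
    return jax.nn.relu(node_feats @ params["w_self"] + agg)

key = jax.random.PRNGKey(0)
params = {"w_msg": 0.1 * jax.random.normal(key, (8, 8)),
          "w_self": 0.1 * jax.random.normal(key, (8, 8))}
out = gnn_layer(params, jnp.ones((4, 8)), jnp.ones((4, 4)))          # 4 agents, 8-dim features
```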
-
Thanks for reporting the bug. Please ensure you've gone through the following checklist before opening an issue:
- Make sure you can reproduce this issue using the latest released version of `Microso…
-
### 🐛 Describe the bug
I'm trying to run attention on a zero-sized batch, because my program uses a static graph and I rely on zero-batching (`index_select` a zero-batch of inputs, `index_add` a zero-batch of ou…
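For reference, the zero-batching pattern described above looks roughly like this (shapes are illustrative; whether the attention call succeeds on the empty batch is exactly what this bug report is about):

```python
import torch
import torch.nn.functional as F

x = torch.randn(8, 4, 16)                      # (nodes, seq, dim), illustrative shapes
idx = torch.empty(0, dtype=torch.long)         # empty index -> zero-sized batch

q = k = v = x.index_select(0, idx)             # shape (0, 4, 16)
out = F.scaled_dot_product_attention(q, k, v)  # attention on the zero batch

res = torch.zeros_like(x)
res.index_add_(0, idx, out)                    # scatter the zero-batch outputs back
```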
-
You use a for-loop over the attention heads (time × number of heads),
and another for-loop over the graphs (time × number of graphs).
This will be very slow.
Is there any other way to…
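One common alternative, sketched below under assumed shapes and a GAT-style formulation (all names are illustrative), is to fold the heads into a tensor dimension and use scatter ops over the edge list, so neither the heads nor the edges need a Python loop.

```python
import torch
import torch.nn.functional as F

N, E, H, F_in, F_out = 100, 400, 4, 16, 8            # illustrative sizes
x = torch.randn(N, F_in)
edge_index = torch.randint(0, N, (2, E))
src, dst = edge_index

W = torch.randn(H, F_in, F_out)                      # all heads' weights in one tensor
a_src, a_dst = torch.randn(H, F_out), torch.randn(H, F_out)

h = torch.einsum("nf,hfo->nho", x, W)                # (N, H, F_out): every head in one matmul
logits = (h[src] * a_src).sum(-1) + (h[dst] * a_dst).sum(-1)   # (E, H) attention logits
logits = F.leaky_relu(logits, 0.2)

# softmax over the incoming edges of each destination node (segment softmax)
logits = logits - logits.max()                       # global shift; leaves each softmax unchanged
num = logits.exp()
denom = torch.zeros(N, H).index_add_(0, dst, num)
alpha = num / (denom[dst] + 1e-16)                   # (E, H) normalized attention

out = torch.zeros(N, H, F_out).index_add_(0, dst, h[src] * alpha.unsqueeze(-1))
```

For many graphs at once, the usual trick is to batch them as one disjoint union (concatenate node features and offset each graph's edge indices), which removes the per-graph loop as well.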
-
# 🐛 Bug
I am trying to use `memory_efficient_attention` with `torch.compile()`, but it seems that `memory_efficient_attention` leads to graph breaks.
`xformers.ops.unbind` also causes graph breaks…
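A minimal repro sketch of the setup described, assuming a CUDA fp16 environment; `fullgraph=True` makes Dynamo raise at the first graph break instead of silently splitting the graph:

```python
import torch
import xformers.ops as xops

def attn(q, k, v):
    return xops.memory_efficient_attention(q, k, v)

compiled = torch.compile(attn, fullgraph=True)       # error out at the first graph break

# xformers expects (batch, seqlen, heads, head_dim)
q = k = v = torch.randn(2, 64, 8, 32, device="cuda", dtype=torch.float16)
out = compiled(q, k, v)                              # expected to break per this report
```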
-
When using this node, the following error occurs:
Error occurred when executing STATIC_TRT_MODEL_CONVERSION:
Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 17 is not support…
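A workaround sketch that can help with this kind of export failure: patch `scaled_dot_product_attention` with an explicit composition of ONNX-exportable ops before running the export. This is an assumption-laden sketch (dropout and `is_causal` are not handled), not the node's official fix.

```python
import math
import torch
import torch.nn.functional as F

def sdpa_manual(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=False, scale=None):
    # Explicit attention from matmul/softmax, which export cleanly to ONNX.
    # Note: dropout and is_causal are ignored in this sketch.
    scale = 1.0 / math.sqrt(q.size(-1)) if scale is None else scale
    attn = q @ k.transpose(-2, -1) * scale
    if attn_mask is not None:
        attn = attn + attn_mask
    return torch.softmax(attn, dim=-1) @ v

F.scaled_dot_product_attention = sdpa_manual         # monkey-patch before torch.onnx.export
```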