-
Hello @yzh119,
Currently, we are using two independent API calls for prefill and decode in a mixed-batch setting. This makes defining a CUDA graph layout considerably harder. Ideally, if we could d…
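For context, here is a minimal sketch of the two-call pattern in question, assuming FlashInfer's `BatchPrefillWithPagedKVCacheWrapper` and `BatchDecodeWithPagedKVCacheWrapper`; the workspace sizes and the commented `plan`/`run` arguments are illustrative, not exact:
```python
# Illustrative sketch: a mixed batch currently needs two wrappers, one for
# prefill requests and one for decode requests, planned and run separately.
import torch
import flashinfer

prefill_ws = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
decode_ws = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device="cuda")
prefill = flashinfer.BatchPrefillWithPagedKVCacheWrapper(prefill_ws, "NHD")
decode = flashinfer.BatchDecodeWithPagedKVCacheWrapper(decode_ws, "NHD")

# Each wrapper is planned with its own paged-KV metadata, then run on its
# slice of the batch:
#   prefill.plan(...); out_p = prefill.run(q_prefill, kv_cache)
#   decode.plan(...);  out_d = decode.run(q_decode, kv_cache)
# Two independent calls mean two capture regions when building a CUDA graph.
```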
-
### 🐛 Describe the bug
When exporting a PyTorch model to ExecuTorch, the following conversion script hangs on the line `edge_program = edge_program.to_backend(XnnpackPartitioner())`:
```python
import torch
…
```
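Since the script is truncated above, here is an independent minimal sketch of the standard export-and-lower flow around that call, with a toy model standing in for the real one; only `torch.export.export`, `to_edge`, and `XnnpackPartitioner` are taken from the actual APIs:
```python
# Hypothetical minimal repro of the lowering flow (the model is a stand-in).
import torch
from executorch.exir import to_edge
from executorch.backends.xnnpack.partition.xnnpack_partitioner import XnnpackPartitioner

model = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU()).eval()
example_inputs = (torch.randn(1, 8),)

exported = torch.export.export(model, example_inputs)
edge_program = to_edge(exported)
# The reported hang occurs on this call:
edge_program = edge_program.to_backend(XnnpackPartitioner())
executorch_program = edge_program.to_executorch()
```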
-
Hi,
Thank you for your example.
I'm trying to use this attention example in my LSTM model. However, in `def attention_layer`, the line
`h = Lambda(lambda X: K.zeros(shape=(K.shape(X)[0], n_h)))(X…
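For anyone reproducing this, here is a self-contained sketch of a Lambda layer that emits a batch-sized all-zeros state, with `n_h` assumed to be the hidden size; it mirrors the quoted line but uses `tf.zeros`, which handles the dynamic batch dimension directly, and is not the original example:
```python
# Standalone sketch (not the original example): a Lambda layer producing an
# all-zeros tensor of shape (batch_size, n_h), where the batch size is read
# dynamically from the incoming tensor.
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

n_h = 32  # assumed hidden-state size

X = Input(shape=(10, 8))  # (timesteps, features), placeholder values
h = Lambda(lambda x: tf.zeros((tf.shape(x)[0], n_h)))(X)

model = Model(inputs=X, outputs=h)
print(model(tf.random.normal((4, 10, 8))).shape)  # (4, 32)
```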
-
# Provide the information required to triage your issue
## Your Environment
* Platform [PC desktop, Mac, iOS, Office on the web]: Office on the web (on premises)
* Host [Excel, Word, PowerPo…
-
Hello! I have some questions about how the attention scores are computed when updating the user representations with the social network and the interest network.
First, from the code above it appears that gamma^(k+1)_(a1) = 1/2 * self.consumed_items_attention and gamma^(k+1)_(a2) = 1/2 * self.social_neighbors_attention. gamma^(k+1)_(a1) and gamma^(k+1)_(a2)…
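To make the setup concrete, here is a toy NumPy sketch of the fixed-1/2 weighting described above; all tensors, shapes, and names besides the two attention variables are fabricated rather than taken from the repository:
```python
# Toy illustration (fabricated values): each channel's softmax-normalized
# attention is scaled by a fixed 1/2, so the interest and social channels
# contribute with equal overall weight to the user update.
import numpy as np

rng = np.random.default_rng(0)
n_items, n_neighbors, d = 5, 3, 8

item_emb = rng.normal(size=(n_items, d))       # consumed-item embeddings
neigh_emb = rng.normal(size=(n_neighbors, d))  # social-neighbor embeddings

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

consumed_items_attention = softmax(rng.normal(size=n_items))
social_neighbors_attention = softmax(rng.normal(size=n_neighbors))

gamma_a1 = 0.5 * consumed_items_attention    # interest-network weights
gamma_a2 = 0.5 * social_neighbors_attention  # social-network weights

user_update = gamma_a1 @ item_emb + gamma_a2 @ neigh_emb
print(user_update.shape)  # (8,)
```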
-
Dear developers,
I hope this message finds you well. Firstly, I would like to express my appreciation for your excellent work on the Soot-FlowDroid module. It has been instrumental in my recent ana…
-
## Description
I am trying to figure out whether TensorRT and the `pytorch_quantization` module support post-training quantization for vision transformers.
The following piece of code follows the `pyt…
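Since the snippet above is cut off, here is an independent sketch of the usual `pytorch_quantization` post-training calibration flow; the tiny model and calibration data are placeholders for a vision transformer and its loader, not the reporter's code:
```python
# Hedged sketch of pytorch_quantization PTQ calibration (not the original
# script); a tiny MLP stands in for the vision transformer.
import torch
from pytorch_quantization import nn as quant_nn
from pytorch_quantization import quant_modules

quant_modules.initialize()  # swap torch layers for quantized counterparts

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.ReLU()).eval()
data = [torch.randn(4, 16) for _ in range(8)]  # stand-in calibration set

# 1) Collect activation statistics with fake quantization disabled.
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        module.disable_quant()
        module.enable_calib()
with torch.no_grad():
    for batch in data:
        model(batch)

# 2) Load amax from the calibrators and re-enable fake quantization.
for module in model.modules():
    if isinstance(module, quant_nn.TensorQuantizer):
        module.load_calib_amax()
        module.enable_quant()
        module.disable_calib()
```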
-
Hello,
I am currently using the nir_to_lava.py script to deploy and test my snnTorch network on Loihi hardware using NIR graphs. So far, I have run the test with Loihi2SimCfg as it was done in lava…
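For readers unfamiliar with the Lava side, here is a minimal self-contained sketch of how a run configuration is chosen; the single LIF process is only a placeholder for the network that nir_to_lava.py produces, and the parameters are arbitrary:
```python
# Placeholder run: one LIF process stands in for the converted network;
# Loihi2SimCfg simulates Loihi 2 behavior on CPU.
from lava.proc.lif.process import LIF
from lava.magma.core.run_configs import Loihi2SimCfg
from lava.magma.core.run_conditions import RunSteps

lif = LIF(shape=(10,), du=410, dv=273, vth=100)  # arbitrary parameters
lif.run(condition=RunSteps(num_steps=50),
        run_cfg=Loihi2SimCfg(select_tag="fixed_pt"))
lif.stop()
```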
-
[Line no 226, graph_attention_learning.py, Watch Your Step] `return tf.transpose(d_sum) * GetNumNodes() * 80, feed_dict`
Why is an arbitrary scaling by the number of nodes (`GetNumNodes()`) done? I am not sure if it…
-
Hi, are we able to use it together with IPAdapter and ControlNet from XLabs?