-
I am interested in performing multimodal cross-attention. I don't see any issues with performing self-attention in the encoder, since I can use the `BertAttention` plugin. However, cross-attention would have `qu…
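For reference, cross-attention differs from self-attention only in where the projections read from: queries come from one modality while keys and values come from the other. Below is a minimal single-head sketch in plain PyTorch; it is a hypothetical illustration, not the TensorRT `BertAttention` plugin itself, and the class and variable names are assumptions.

```python
import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Single-head cross-attention sketch: queries from modality A,
    keys/values from modality B (names are hypothetical)."""
    def __init__(self, dim):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x_a, x_b):
        # x_a: (B, La, D) query modality; x_b: (B, Lb, D) key/value modality
        q = self.q_proj(x_a)
        k = self.k_proj(x_b)
        v = self.v_proj(x_b)
        # scaled dot-product attention over the other modality's tokens
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        return attn @ v  # (B, La, D): output aligned with the query modality

x_text = torch.randn(2, 5, 64)
x_img = torch.randn(2, 7, 64)
out = CrossModalAttention(64)(x_text, x_img)
print(out.shape)  # torch.Size([2, 5, 64])
```

Note the output sequence length follows the query modality, while the attended content comes from the other modality.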
-
I would like to know the details of cross-modal training. Is there any relevant training code responsible for performing this part?
-
Hi! Thanks for open-sourcing APE, it is fantastic! 👍
I am new to the field of open-vocabulary vision foundation models, and I have some questions on the "gated cross-modality interaction" when goi…
-
Hi guys,
The reference human PBMC data returns a "403 ERROR" when I try to download it while running the Seurat V5 vignette "Dictionary Learning for cross-modality integration." I can neither downlo…
-
Hi,
I wonder if GLUE can generate cross-modality data. For example, given a pre-trained model, can you generate scATAC-seq data from scRNA-seq data?
-
Hi, thanks for your great work.
```
# Latent Fusion
def fusion(self, audio_tokens, visual_tokens):
    # shapes
    BS = audio_tokens.shape[0]
    # concat all the tokens
    …
```
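The snippet above is cut off, but a concatenation-based latent fusion of this shape would typically join the two token sequences along the sequence axis. The following is a guess at that pattern as a standalone function; the token dimensions and the use of `torch.cat` are assumptions, not the repository's actual code.

```python
import torch

def fusion(audio_tokens, visual_tokens):
    # audio_tokens: (BS, Na, D); visual_tokens: (BS, Nv, D)
    # Hypothetical sketch of concatenation-based latent fusion.
    BS = audio_tokens.shape[0]
    # concat all the tokens along the sequence dimension
    fused = torch.cat([audio_tokens, visual_tokens], dim=1)  # (BS, Na + Nv, D)
    return fused

a = torch.randn(2, 4, 32)
v = torch.randn(2, 6, 32)
fused = fusion(a, v)
print(fused.shape)  # torch.Size([2, 10, 32])
```

A downstream transformer can then run self-attention over the fused sequence so tokens of both modalities interact.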
-
### Describe your problem in detail.
Currently, task events are presented as modality-specific.
### Describe what you expected.
That task events are cross-modality.
### BIDS specification section
…
-
## In a nutshell
A study applying Transformers to image+language tasks such as VQA. Images use self-attention over object-region position vectors (learning relations between objects), language uses standard self-attention, and finally cross-attention (language-to-image and image-to-language) is computed, followed by another self-attention layer before the output. Achieves SOTA through pre-training.
![…
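The cross step described above (language attending to image features and vice versa) can be sketched with two `nn.MultiheadAttention` modules; this is a simplified illustration under assumed dimensions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class BiCrossAttention(nn.Module):
    """Bidirectional cross-attention sketch: language queries image
    features, and image queries language features (names hypothetical)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.lang_to_img = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.img_to_lang = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, lang, img):
        # lang: (B, Ll, D); img: (B, Li, D)
        lang_out, _ = self.lang_to_img(lang, img, img)    # language-to-image
        img_out, _ = self.img_to_lang(img, lang, lang)    # image-to-language
        return lang_out, img_out

lang = torch.randn(2, 8, 64)
img = torch.randn(2, 36, 64)
l_out, i_out = BiCrossAttention(64)(lang, img)
print(l_out.shape, i_out.shape)  # torch.Size([2, 8, 64]) torch.Size([2, 36, 64])
```

In the architecture summarized above, a self-attention layer over each stream would follow this cross step before producing the output.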
-
Hi!
Great work! Congratulations! Thanks for releasing the code!
However, I am not able to reproduce the results for taskrunners using any of the `allenai/uio2-large`, `allenai/uio2-xl` or `allen…
-
## User Story
As an OCTO PO, I want to know trends in imposter components within the Collab Cycle so that I have data and metrics to report to my leadership.
Assignee: @it-harrison @allison0034
Peer…