-
Thanks for releasing the code and pretrained models for your amazing work "Audio-Visual Instance Discrimination with Cross-Modal Agreement". I noticed that you used different architectures for R(2+1)…
-
Thank you very much for your excellent work.
One thing I am confused about is the definition of the `crossmodal loss function` and the `coseparation loss function`. In train.py, why random numbers …
-
-
[The format of the issue]
Paper name/title:
Paper link:
Code link:
-
Pose a question about one of the following articles:
“[Online images amplify gender bias](https://www.nature.com/articles/s41586-024-07068-x),” 2024. Guilbeault, Douglas, Solène Delecourt, Tasker …
-
We'd like to develop general functions for predicting one data layer from another for each dataset (e.g. predict protein abundances from RNA, predict RNA from ATAC-seq).
Eventually, t…
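For what it's worth, a minimal sketch of what such a cross-layer predictor could look like, using a plain least-squares linear map as a stand-in baseline (the function names, shapes, and toy data below are illustrative assumptions, not from this repo):

```python
import numpy as np

def fit_linear_predictor(X_train, Y_train):
    """Fit a least-squares linear map predicting one data layer (Y)
    from another (X), e.g. protein abundances from RNA counts.
    Returns a weight matrix W (including an intercept row) such that
    Y is approximated by [X, 1] @ W."""
    X_aug = np.hstack([X_train, np.ones((X_train.shape[0], 1))])  # intercept column
    W, *_ = np.linalg.lstsq(X_aug, Y_train, rcond=None)
    return W

def predict_layer(X, W):
    """Apply the fitted map to new samples of the source layer."""
    X_aug = np.hstack([X, np.ones((X.shape[0], 1))])
    return X_aug @ W

# Toy data (hypothetical shapes): 100 cells, 20 "genes" -> 5 "proteins"
rng = np.random.default_rng(0)
rna = rng.normal(size=(100, 20))
true_W = rng.normal(size=(20, 5))
protein = rna @ true_W + 0.01 * rng.normal(size=(100, 5))

W = fit_linear_predictor(rna, protein)
pred = predict_layer(rna, W)
```

A real implementation would presumably swap in whatever per-modality model fits best (regularized regression, a small neural net, etc.), but a generic `fit`/`predict` pair like this keeps the interface uniform across dataset and layer combinations.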
-
Hi, thank you very much for sharing your work and giving me the opportunity to learn more about your article.
I think your proposed method opens up a new direction for solving discrete constrai…
-
Similar to https://demo.allennlp.org, it would be great to have online demo applications for various models available in gluonnlp.
List of demos from [AllenNLP](https://demo.allennlp.org/reading-c…
-
First of all, thank you for the code! Just two slight remarks:
- The implementation of the [MVAE model provided](https://github.com/masa-su/pixyz/blob/master/examples/mvae_poe.ipynb) considers KL div…
-
### 🚀 The feature, motivation and pitch
🤗 Hello! Thank you for your work!
I see that this repo has model configurations that work with certain modalities, which is great.
I have a question thoug…