-
I am working with a large single-nucleus ATAC-seq dataset (150k nuclei, 360k peaks), but I have gotten stuck at the "Caching data to disk" section of the "Atlas-level integration" [tutorial](https://mira-mu…
-
Hi,
Thanks for developing this great tool. I have two questions regarding CITE-seq-Count.
Question 1:
I have a multimodal scRNA-seq dataset in which both CITE-seq and Hashtag antibodies wer…
-
In the file `llava/model/llava_arch.py`, under the class `LlavaMetaForCausalLM`, there is a function `prepare_inputs_labels_for_multimodal` that is called by both the `generate` and `forward` functi…
-
For simplicity, we will always assume that the data is 3-dimensional, with the dimensions being:
(subject_id, event, measurement)
Optionally, there can be a fourth dimension with tokenized text, a sequen…
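A minimal sketch of this layout with NumPy; all sizes and names below are illustrative placeholders, not taken from the actual dataset:

```python
import numpy as np

# Illustrative sizes only -- the real dataset's dimensions are not known here.
n_subjects, n_events, n_measurements = 4, 10, 6

# Core 3-D layout: (subject_id, event, measurement).
data = np.zeros((n_subjects, n_events, n_measurements))

# Optional fourth dimension: a fixed-length token sequence attached
# to each (subject, event) pair.
max_seq_len = 32
tokens = np.zeros((n_subjects, n_events, max_seq_len), dtype=np.int64)

# All measurements recorded for subject 0 at event 3:
vec = data[0, 3]
assert vec.shape == (n_measurements,)
```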
-
Hello,
Thank you very much for the repository and the provided pretrained models. I have seen that you added scripts for nnUNet v2 to run fine-tuning, as well as the STUNetTrainer.
I have a da…
-
## Motivation
There is significant interest in vLLM supporting encoder/decoder models. Issues #187 and #180, for example, request encoder/decoder model support. As a result, encoder/decoder supp…
-
### Question
I tried the `generate_multimodal_pages` method from the official documentation example and attempted to apply it. I wanted to export `content_md` with `ImageRefMode.EMBEDDED`, bu…
-
For multimodal modeling, we should support the following methods:
- [ ] No Fusion approach: Contrastive workflows between modalities where a single embedding is generated for each modality (and/or te…
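The no-fusion pattern above can be sketched as follows; the "encoders" here are fixed random projections standing in for trained networks, and all dimensions are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoders: each modality gets its own projection into a shared
# 64-d embedding space (real encoders would be trained neural networks).
W_image = rng.normal(size=(128, 64))  # hypothetical image-feature dim 128
W_text = rng.normal(size=(300, 64))   # hypothetical text-feature dim 300

def embed(x, W):
    z = x @ W
    return z / np.linalg.norm(z)  # unit norm, so a dot product is cosine similarity

image_emb = embed(rng.normal(size=128), W_image)
text_emb = embed(rng.normal(size=300), W_text)

# No fusion: each modality keeps its own single embedding; cross-modal
# alignment is scored contrastively via cosine similarity between them.
similarity = float(image_emb @ text_emb)
assert -1.0 <= similarity <= 1.0
```

In a trained contrastive setup the two projections would be optimized so that matching image/text pairs score higher than mismatched ones, while each modality's embedding can still be used on its own.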
-
Thank you very much for publishing the EANN model. I read your paper and would like to obtain your EANN model code for the Twitter multimodal fake-news dataset to help me complete my reproduction, because the…
-