-
Thank you for your outstanding work. Regarding the data presented in Table 2 of your paper, could you please provide more training and inference details? For example, how did you train the models to achieve the …
-
Hello, I really appreciate your great work.
[https://github.com/OpenGVLab/InternVideo/blob/main/InternVideo2/multi_modality/MODEL_ZOO.md](https://github.com/OpenGVLab/InternVideo/blob/main/InternVi…
-
**sub-balgrist01**
![image](https://user-images.githubusercontent.com/2482071/91921240-036f5200-ec99-11ea-85ef-024491fe8eeb.png)
**sub-beijingGE04**
![image](https://user-images.githubusercon…
-
**Metric / Plot Impacted**
- z amount threshold
- This is currently found in the mouse-seeks reports, in the Stability Report: Z-drift section.
The threshold is what should be re-evaluated and adjusted. …
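For illustration, the kind of check that sits behind such a threshold can be sketched as follows. The function name and the threshold value are hypothetical placeholders, not the actual report code:

```python
import numpy as np

# Hypothetical sketch of a Z-drift stability check: flag a session when the
# z-position drifts beyond a configurable threshold. The value below is a
# placeholder, which is exactly the quantity to be re-evaluated.
Z_DRIFT_THRESHOLD_UM = 10.0

def exceeds_z_threshold(z_positions_um, threshold_um=Z_DRIFT_THRESHOLD_UM):
    # drift is measured relative to the first recorded z position
    drift = np.abs(np.asarray(z_positions_um, dtype=float) - z_positions_um[0])
    return bool(drift.max() > threshold_um)
```

Adjusting the threshold then only means changing one configured constant rather than the plotting code.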
-
I have a brain tumor dataset with multiple modalities.
* T1
* T1c
* T2
* T2-FLAIR
Usually, T1c has the highest spatial resolution. Which modality promises the best results?
Can I supply mult…
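Whichever modality serves as the reference, supplying several modalities typically means resampling them all to one grid (often the T1c grid, since it has the highest resolution) and stacking them as input channels. A minimal numpy sketch, where the function name and toy shapes are illustrative and not tied to any specific framework:

```python
import numpy as np

# Stack co-registered MRI modalities as input channels.
# Assumes all four volumes were already resampled to the same voxel grid.
def stack_modalities(t1, t1c, t2, flair):
    vols = [t1, t1c, t2, flair]
    shapes = {v.shape for v in vols}
    if len(shapes) != 1:
        raise ValueError(f"modalities must share one grid, got {shapes}")
    # channel-first layout: (C, D, H, W)
    return np.stack(vols, axis=0)

# toy volumes standing in for real image data
vol = np.zeros((8, 8, 8), dtype=np.float32)
x = stack_modalities(vol, vol, vol, vol)
```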
-
I just read your excellent paper published at AAAI 2021, titled "Learning Modality-Specific Representations with Self-Supervised Multi-Task Learning for Multimodal Sentiment Analysis". I would like to ask: the paper does not specify which feature extractors were used for the video and audio modalities. Could you share those details? The video module uses …
-
This Milestone needs further clarification.
What are the objectives? What are the deliverables?
-
Basically, I would like to run video retrieval using this distilled model: https://huggingface.co/OpenGVLab/InternVideo2_distillation_models/blob/main/stage1/L14/L14_dist_1B_stage2/pytorch_model.bin
…
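Independent of which checkpoint is loaded, the retrieval step itself reduces to ranking candidate videos by the similarity of their embeddings to the text embedding. A minimal numpy sketch of that ranking step, with illustrative names and dimensions (not the model's actual API):

```python
import numpy as np

# Rank videos for one text query by cosine similarity of their embeddings.
def rank_videos(text_emb, video_embs):
    # L2-normalize so the dot product equals cosine similarity
    text_emb = text_emb / np.linalg.norm(text_emb)
    video_embs = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    sims = video_embs @ text_emb
    return np.argsort(-sims)  # indices of best matches first

text = np.array([1.0, 0.0])
videos = np.array([[0.0, 1.0], [1.0, 0.0]])
order = rank_videos(text, videos)
```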
-
The way SOMA stores modalities is very similar to how mudata stores multi-modality single-cell data. At the moment, there is no out-of-the-box converter for h5mu, only for h5ad. So for storing …
-
### 🚀 The feature
**TL;DR:** We want to lean into **modular Multi-Threading/Multi-Processing** instead of the current monolithic Multi-Processing, and steer users away from the monolithic Datase…
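As an illustration of the thread-based direction (a sketch of the idea, not the library's actual API), a single per-stage thread pool can be composed from the standard library, with one pool per operation rather than one process pool for the whole pipeline:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder for I/O-bound per-sample work (e.g. read + decode);
# the real stage would wrap a user-supplied function.
def decode(sample):
    return sample * 2

# A modular, thread-backed loading stage: each stage owns its own pool,
# so threading is configured per operation instead of monolithically.
def threaded_load(samples, num_threads=4):
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        # pool.map preserves input order across worker threads
        yield from pool.map(decode, samples)

out = list(threaded_load(range(5)))
```

Threads avoid the serialization and startup cost of worker processes, which is the trade-off motivating the modular design.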