-
Hi,
I am trying to work on the new paper "Multimodal End-to-End Autonomous Driving". Is there a dataset with depth or semantic segmentation available?
Thanks in advance.
-
### Feature request
Update: see https://github.com/OpenAdaptAI/OpenAdapt/issues/760#issuecomment-2347337901 for the latest requirements.
We want to be able to give the model the ability to:
1. …
-
## Description
I wish to extend the multimodal library and the code in the ConvLoRA example.
To fine-tune SAM through ConvLoRA, I hope to use SAM's own prompter to input points and labels for…
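For concreteness, here is a minimal sketch of what I mean by driving SAM with its own point prompter, using the official `segment_anything` package; the checkpoint path, image, and click coordinates are placeholders, and a ConvLoRA-adapted model would need to be loaded in place of the stock weights:

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Stock SAM weights; a ConvLoRA-adapted model would be swapped in here.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# Dummy RGB image standing in for a real sample (HxWx3, uint8).
image = np.zeros((512, 512, 3), dtype=np.uint8)
predictor.set_image(image)

# One foreground click (label 1) at pixel (x, y); label 0 would mark background.
masks, scores, logits = predictor.predict(
    point_coords=np.array([[256, 256]]),
    point_labels=np.array([1]),
    multimask_output=True,  # SAM returns up to three candidate masks
)
```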
-
Thank you for making the code available. I'd like to ask a question about "Unified generative adversarial networks for multimodal segmentation from unpaired 3D medical images".
The paper extends the…
-
Here's the relevant example: https://forum.spinalcordmri.org/t/registration-between-two-segmentations-multimodal/818/4?u=jcohenadad
We could add it as a "demo" either in our tutorial and/or in our…
-
### Describe the issue linked to the documentation
Could you please provide an example or improve the documentation on how to run multi-class segmentation with `MultiModalPredictor`? From the current…
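For reference, this is the kind of minimal sketch I was hoping the docs would show; the `semantic_segmentation` problem type is what I found in the release notes, and the dataframe column names are my own guess rather than confirmed API:

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor

# Image paths paired with multi-class mask paths; column names are a guess.
train_data = pd.DataFrame({
    "image": ["images/img_0001.png", "images/img_0002.png"],
    "label": ["masks/img_0001.png", "masks/img_0002.png"],
})

predictor = MultiModalPredictor(
    problem_type="semantic_segmentation",  # assuming this problem type is supported
    label="label",
)
predictor.fit(train_data=train_data)

# Predict masks for unseen images.
pred = predictor.predict(pd.DataFrame({"image": ["images/img_0003.png"]}))
```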
-
Hi, I have read your paper "TransBTS: Multimodal Brain Tumor Segmentation Using Transformer", which applies a Transformer module to the 3D medical image segmentation task. This task is fascinating, and I h…
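To check my understanding of the overall design (3D conv features flattened into tokens for a Transformer bottleneck, then decoded back to voxels), here is my own toy reconstruction; it is not the authors' code, and the layer sizes are arbitrary:

```python
import torch
import torch.nn as nn

class Toy3DTransformerSeg(nn.Module):
    """Toy sketch: 3D conv encoder -> token Transformer -> 3D conv decoder."""
    def __init__(self, in_ch=4, dim=128, num_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv3d(in_ch, dim, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv3d(dim, dim, kernel_size=3, stride=2, padding=1),
        )
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(dim, dim, kernel_size=2, stride=2),
            nn.ReLU(),
            nn.ConvTranspose3d(dim, num_classes, kernel_size=2, stride=2),
        )

    def forward(self, x):
        f = self.encoder(x)                    # (B, C, D', H', W')
        b, c, d, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, D'*H'*W', C) voxel tokens
        tokens = self.transformer(tokens)
        f = tokens.transpose(1, 2).reshape(b, c, d, h, w)
        return self.decoder(f)                 # per-voxel class logits

# Four input modalities (e.g. MRI sequences), 32^3 toy volume.
logits = Toy3DTransformerSeg()(torch.randn(1, 4, 32, 32, 32))
print(logits.shape)  # torch.Size([1, 4, 32, 32, 32])
```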
-
Hi, I am going to submit my paper about semantic segmentation. I am wondering which subject area I should choose. Could you please share your choice of SUBJECT AREAS with me?
Subject Areas:
Deep …
-
Hello,
I am currently working on multimodal pixel-wise segmentation; my dataset is composed of multiple inputs with only one ground truth (times N_images).
In that situation, the ideal case would be t…
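To make the setup concrete, here is a minimal PyTorch `Dataset` sketch of what I mean: several modality files per sample, all sharing a single ground-truth mask. The file layout and the `np.load` reader are only placeholders:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class MultiInputSegDataset(Dataset):
    """N modalities per sample, one shared segmentation mask (hypothetical layout)."""
    def __init__(self, samples):
        # samples: list of ([modality_path, ...], mask_path) pairs
        self.samples = samples

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        modality_paths, mask_path = self.samples[idx]
        # Stub loader: replace np.load with the real image reader.
        modalities = [np.load(p).astype(np.float32) for p in modality_paths]
        x = torch.from_numpy(np.stack(modalities, axis=0))            # (N_mod, H, W)
        y = torch.from_numpy(np.load(mask_path).astype(np.int64))     # (H, W) class ids
        return x, y
```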
-
Thank you for your work!
Now I would like to directly input the image to GPT-4V along with a prompt like "This is an image, now I need to do the visual grounding task where you generate the coordinates [x,y,…
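For reference, this is roughly how I am sending the image and the prompt, using the OpenAI chat completions API with a base64-encoded image; the model name is an assumption, and since my prompt above is truncated, the code keeps it verbatim as a placeholder:

```python
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("example.jpg", "rb") as f:
    b64_image = base64.b64encode(f.read()).decode("utf-8")

# Verbatim (truncated) grounding prompt quoted above.
prompt = ("This is an image, now I need to do the visual grounding task "
          "where you generate the coordinates [x,y,…")

response = client.chat.completions.create(
    model="gpt-4o",  # any GPT-4V-capable model; exact name is an assumption
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64_image}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```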