Open3DA / LL3DA

[CVPR 2024] "LL3DA: Visual Interactive Instruction Tuning for Omni-3D Understanding, Reasoning, and Planning"; an interactive Large Language 3D Assistant.
https://ll3da.github.io/
MIT License

Code release for models #2

Open KairosXu opened 11 months ago

KairosXu commented 11 months ago

Thanks for your nice work! When I tried to run the training code for LL3DA, I found that the "models" module was missing. Is it indeed not released yet? If so, could you tell me when your team will release the full code and the training/evaluation scripts? Hoping for your reply soon!

ch3cook-fdu commented 11 months ago

Thanks for your interest in our work! We will gradually upload the code, weights, and training/evaluation scripts starting in late December. Please stay tuned.

KairosXu commented 11 months ago

Sorry to bother you again. Given the excellent performance LL3DA achieves, we would like to conduct further research based on your work. Could you please release the model checkpoints and training/evaluation code as soon as possible? Thanks, and hoping for your reply soon!

ch3cook-fdu commented 11 months ago

Thank you for your recognition of our work, and sorry for the delay. We are currently validating the reproducibility of our code and its extensibility to different large language model backends, which may take a few days. After this verification, we will release everything as soon as possible!

KairosXu commented 10 months ago

Sorry to bother you again. Here are some questions about the Interact3D module.

  1. Since the original Q-Former architecture in BLIP-2 requires an input feature dimension of 1408, does the scene feature produced by the scene encoder keep the same dimension?
  2. I noticed that you add an extra visual prompt compared to 3D-LLM. How is the architecture of Interact3D organized, and how does self-attention handle this additional input?
  3. Does your pipeline also need text instructions at inference time, or only the 3D features and visual prompts, as in BLIP-2? If the former, does the text instruction act as a condition, and how does that work? Hoping for your reply soon!

ch3cook-fdu commented 10 months ago

Thanks for your interest!

  1. In practice, you can customize the encoder_hidden_size within InstructBlipQFormerConfig for our multi-modal transformer. We also adopt an FFN to project the scene features (a sketch combining all three answers follows this list).

    from transformers import InstructBlipQFormerConfig

    # encoder_hidden_size matches the (projected) scene feature dimension
    InstructBlipQFormerConfig(
        num_hidden_layers=6,
        encoder_hidden_size=self.encoder_hidden_size,
    )
  2. We pad the visual prompts with 0s and set the attention_mask accordingly for self-attention. See https://huggingface.co/docs/transformers/model_doc/instructblip#transformers.InstructBlipQFormerModel for implementation details.

  3. Yes, text instructions are required at inference; the visual prompts are optional. Text instructions play two roles in our architecture: 1) conditional feature aggregation in the multi-modal transformer, and 2) conditional text generation in the LLM.
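
Putting the three answers together, here is a minimal, self-contained sketch of how these pieces could be wired up with the Hugging Face transformers classes mentioned above. It is an illustration under assumptions, not the released LL3DA code: the hidden sizes, number of query tokens, number of prompt slots, FFN design, and all tensor shapes are made up for the example.

    import torch
    import torch.nn as nn
    from transformers import InstructBlipQFormerConfig, InstructBlipQFormerModel

    # All sizes below are illustrative assumptions.
    hidden_size = 768            # Q-Former hidden size (config default)
    encoder_hidden_size = 256    # scene-encoder output dim
    num_queries = 32             # learnable query tokens
    max_prompts = 8              # fixed number of visual-prompt slots

    config = InstructBlipQFormerConfig(
        num_hidden_layers=6,
        encoder_hidden_size=encoder_hidden_size,
    )
    qformer = InstructBlipQFormerModel(config)

    # (1) FFN that projects scene features before they are cross-attended.
    scene_proj = nn.Sequential(
        nn.Linear(encoder_hidden_size, encoder_hidden_size),
        nn.ReLU(),
        nn.Linear(encoder_hidden_size, encoder_hidden_size),
    )
    scene_feat = scene_proj(torch.randn(1, 1024, encoder_hidden_size))  # (B, N, C)

    # (2) Zero-pad the visual prompts to a fixed number of slots and mask the padding.
    prompt_embeds = torch.randn(1, 2, hidden_size)  # two "real" prompts
    prompt_embeds = torch.cat(
        [prompt_embeds, torch.zeros(1, max_prompts - 2, hidden_size)], dim=1
    )
    prompt_mask = torch.cat(
        [torch.ones(1, 2), torch.zeros(1, max_prompts - 2)], dim=1
    )

    # Learnable queries that aggregate scene features.
    query_embeds = torch.randn(1, num_queries, hidden_size)

    # (3) Text instruction tokens (random ids here) share self-attention with the
    #     queries and prompts, so the aggregation is conditioned on the instruction.
    input_ids = torch.randint(0, config.vocab_size, (1, 16))
    text_mask = torch.ones(1, 16)

    outputs = qformer(
        input_ids=input_ids,
        query_embeds=torch.cat([query_embeds, prompt_embeds], dim=1),
        attention_mask=torch.cat(
            [torch.ones(1, num_queries), prompt_mask, text_mask], dim=1
        ),
        encoder_hidden_states=scene_feat,  # cross-attended by the query tokens
    )
    # The query-side outputs (outputs.last_hidden_state[:, :num_queries]) could then
    # be projected into the LLM's input space for conditional text generation.

The important mechanics are the padded prompt slots being excluded via the attention mask and the instruction tokens sharing self-attention with the queries, which is what makes the feature aggregation instruction-conditioned.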

gujiaqivadin commented 10 months ago

Hello @ch3cook-fdu, thanks for your paper and code! Any news on the release of the main training/testing code?

ch3cook-fdu commented 9 months ago

Thrilled to announce that our paper has been accepted to CVPR 2024! The code is now released!

Please stay tuned for our further updates!