-
Hi authors,
Congrats on the nice and inspiring survey!
Could you include the **EVE** paper under *Multimodal Instruction Tuning*? Thanks in advance.
Title: Unveiling Encoder-Free Vision-Language M…
-
I admire your work and would like to follow up on it. Will you make the pre-training code and training dataset public?
-
# Interesting papers
- Yan 2024 - An Object is Worth 64x64 Pixels: Generating 3D Object via Image Diffusion [link](https://omages.github.io/)
- Uses diffusion to generate 64 x 64 'part images' (object images)… (a rough sketch of the idea follows below)
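For context on the "object image" idea, here is a rough Python sketch of how a 64 x 64 multi-channel image can be read back as 3D geometry. The channel layout (xyz positions in channels 0-2, an occupancy mask in channel 3) and the function name are assumptions for illustration, not the paper's exact specification.

```python
import numpy as np

def omage_to_points(omage: np.ndarray) -> np.ndarray:
    """Decode a (64, 64, C) 'object image' into a 3D point set.

    Assumes channels 0-2 store xyz surface positions and channel 3 an
    occupancy mask, in the spirit of geometry images; this layout is an
    assumption, not the paper's exact specification.
    """
    assert omage.shape[0] == 64 and omage.shape[1] == 64 and omage.shape[2] >= 4
    xyz = omage[..., :3].reshape(-1, 3)          # one candidate 3D point per pixel
    occupied = omage[..., 3].reshape(-1) > 0.5   # keep only occupied pixels
    return xyz[occupied]

# A random dummy omage decodes to at most 64*64 = 4096 points.
points = omage_to_points(np.random.rand(64, 64, 4))
print(points.shape)
```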
-
Paper: [https://arxiv.org/pdf/2406.16860](https://arxiv.org/pdf/2406.16860)
Website: [https://cambrian-mllm.github.io](https://cambrian-mllm.github.io)
Code: [https://github.com/cambrian-mllm/cam…
-
### Question
Great work! I saw that both the pre-training and instruction-150K datasets have the `<image>` token inserted in the same format. I was wondering why during the pre-training stage of feature alignme…
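For reference, a minimal sketch of the record format the question refers to, assuming the LLaVA-style convention in which the `<image>` placeholder is inserted into a human turn; the concrete values are invented for illustration.

```python
# One LLaVA-style training record (values invented for illustration).
# Both the feature-alignment pre-training data and the instruction-150K
# data place the <image> placeholder inside a human turn in this way.
record = {
    "id": "000000033471",
    "image": "coco/train2017/000000033471.jpg",
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is unusual about this image?"},
        {"from": "gpt", "value": "The man is ironing clothes on the back of a taxi."},
    ],
}
```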
-
Paper: Mitigating Hallucination in Large Multi-Modal Models via Robust Instruction Tuning
Link: https://arxiv.org/pdf/2306.14565.pdf
Name: LRV-Instruction
Focus: Multimodal
Notes: A benchmark to e…
-
Do you plan to release the training code?
-
Hi!
Thank you for the wonderful work.
I wonder if you could provide detailed information on training the SEED Tokenizer.
I cannot find the hyperparameters for training the SEED Tokenizer in your paper.
A…
-
Thank you for the amazing project.
Can you provide the training code?
-
Hi,
I am wondering whether the training code for SEED-LLaMA will be made available at some point.