NVlabs / VILA

VILA - a multi-image visual language model with training, inference and evaluation recipe, deployable from cloud to edge (Jetson Orin and laptops)
Apache License 2.0

How to run longvila large context, sequence parallel inference? #130

Open zadeismael opened 2 weeks ago

zadeismael commented 2 weeks ago

There are multiple mentions of a multi-modal sequence parallel system for inference that can be seamlessly integrated with HF Transformers. However, I am not able to trace this through the codebase or find it demonstrated in any of the scripts/examples.

Can the team please:

  1. Point me to the code that enables long-context, sequence parallel inference for generation?
  2. Provide an example script to run this inference (preferably the same script used for the eval metrics mentioned in the paper)?

Mentions of inference in the LongVILA paper: Section 1: For inference, the memory usage of the KV cache will also be a bottleneck when the sequence length is very long; we thus implement the inference mode of our MM-SP to support long-context multi-modal language deployment.

Section 3.3: Thus, we implement sequence parallelism for VLM distributed inference. Compared to the training mode, the system needs to additionally maintain tensors (e.g. input tokens and position encodings) that are progressively changing during the decoding phase (Yu et al., 2022). In addition, the system needs to detect signals from the machine that holds the last token and accordingly terminate the distributed process.

Section 5(.1)
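For readers trying to follow what the quoted passages describe, the control flow can be sketched as a plain-Python toy simulation. This is an illustrative sketch only, not VILA's actual MM-SP implementation: `shard`, `decode`, and `next_token_fn` are hypothetical names, and a real system would use `torch.distributed` collectives instead of in-process lists.

```python
# Toy simulation of sequence-parallel decoding control flow (NOT VILA's MM-SP code).
# Each "rank" holds a contiguous shard of the prompt's KV cache; the rank that owns
# the newest token proposes the next token, and all ranks stop when it signals EOS.

EOS = -1  # hypothetical end-of-sequence token id


def shard(tokens, world_size):
    """Split a long token sequence into roughly equal contiguous shards."""
    base, rem = divmod(len(tokens), world_size)
    shards, start = [], 0
    for rank in range(world_size):
        size = base + (1 if rank < rem else 0)
        shards.append(tokens[start:start + size])
        start += size
    return shards


def decode(prompt, world_size, next_token_fn, max_new_tokens=8):
    """Decode step-by-step; the last rank's shard grows as tokens are generated."""
    shards = shard(prompt, world_size)   # stand-in for per-rank KV-cache shards
    generated = []
    for _ in range(max_new_tokens):
        tok = next_token_fn(shards)      # stand-in for the distributed model forward
        if tok == EOS:                   # the rank holding the last token detects
            break                        # termination, and every rank stops
        shards[-1].append(tok)           # the KV cache keeps changing during decoding
        generated.append(tok)
    return generated
```

For example, a dummy `next_token_fn` that emits increasing token ids until a cutoff exercises both the growing shard and the termination path:

```python
out = decode(list(range(10)), world_size=4,
             next_token_fn=lambda s: s[-1][-1] + 1 if s[-1][-1] < 12 else EOS)
# out is [10, 11, 12]
```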

Lyken17 commented 2 weeks ago

@DachengLi1 @yukang2017

DachengLi1 commented 2 weeks ago

Hi @zadeismael Thank you for the notice! This is an active PR that will be merged very soon (within days).

hb-jw commented 2 weeks ago

Hello, I am also very interested in sequence parallel inference. May I ask when you plan to open-source the code for sequence parallel inference?

DachengLi1 commented 1 week ago

@hb-jw Thank you! We are undergoing the final merging check in our internal codebase for this PR, and will be ready very soon (If everything goes well, it should be mid this week).

hb-jw commented 1 week ago

Hello, today is Friday. I want to ask if everything went well?

DachengLi1 commented 1 week ago

@hb-jw Hi there, sorry for the delay. We have worked out the version update. We are working on integrating with the vision needle-in-a-haystack before open-sourcing this PR.

zade-twelvelabs commented 1 week ago

@DachengLi1 Thanks for the update - can you let us know a new expected date?

DachengLi1 commented 6 days ago

@zade-twelvelabs I will allocate more bandwidth to the task, and hopefully finish it by this Thursday. Thanks for your patience, and apologies for the delay!

hb-jw commented 5 days ago

> @zade-twelvelabs I will allocate more bandwidth to the task, and hopefully finish it by this Thursday. Thanks for your patience, and apologies for the delay!

OK! Thank you for your effort and for open-sourcing this. I am very interested in the sequence parallel part of the project and check every day whether it is open-sourced. Please reply to me when it is released! Thank you again!

zade-twelvelabs commented 4 days ago

@DachengLi1 Echoing @hb-jw 's comment - thanks for the prioritization :)

hb-jw commented 3 days ago

Thank you for your amazing work! It's already Thursday, and I've been looking forward to it for a long time. Could you please tell me when the sequence parallel code will be open-sourced?

DachengLi1 commented 3 days ago

Hi @hb-jw Sorry, we have an internal regression that leads to a small accuracy mismatch. If you are looking for a quick solution, we have an implementation here: https://github.com/NVlabs/VILA/tree/main/llava/eval/vision_niah_vila.

zade-twelvelabs commented 3 days ago

This is a non-generative example though, right? Can it be used for next-token generation?