triton-inference-server / fastertransformer_backend

BSD 3-Clause "New" or "Revised" License

Why is it needed to set max_batch_size to 1 under interactive mode? #143

Open zhypku opened 1 year ago

zhypku commented 1 year ago

Hi there,

I'm new to the FasterTransformer backend, and I'm curious about why we need to set max_batch_size to 1 when the interactive mode is enabled.

The documentation says that this is to guarantee that requests belonging to the same session are directed to the same model instance exclusively. I understand that the requests must be directed to the same model instance, but why exclusively? If we use the Direct mode of the sequence batcher, the requests would be directed to a unique batch slot. Is this sufficient to guarantee the correctness of the inference?
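For context, here is a minimal sketch of the kind of model configuration I'm asking about. This is not the exact FasterTransformer config (the model name and idle timeout below are placeholders); it just shows the two pieces mentioned above in standard Triton `config.pbtxt` syntax: `max_batch_size: 1` together with the sequence batcher's Direct scheduling policy, which pins each sequence (session) to a dedicated batch slot on one model instance.

```
name: "fastertransformer"          # placeholder model name
backend: "fastertransformer"
max_batch_size: 1                  # the setting in question

sequence_batching {
  # Direct mode: every request of a sequence goes to the same
  # batch slot of the same model instance.
  direct { }

  # Placeholder: how long a slot may sit idle before the sequence
  # is considered finished (value here is illustrative).
  max_sequence_idle_microseconds: 5000000
}
```

If Direct mode already pins a session to one slot, it's not obvious to me why the instance must also be exclusive (i.e., why `max_batch_size` can't be larger than 1).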

I'd appreciate it if someone could give me a clue :)