OpenNMT / CTranslate2

Fast inference engine for Transformer models
https://opennmt.net/CTranslate2
MIT License

target_prefix latency #1689


SimonBenhamou commented 5 months ago

Hello,

I noticed that when supplying a target_prefix to the translate_batch or generate_tokens method, the latency for generating the supplied tokens is the same as when they are not provided, whereas I would expect it to be negligible since those tokens don't require any generation steps. I would expect the first generation step to produce the token that follows the prefix tokens.

Am I missing something, or is this due to an inefficiency in ctranslate2's generation logic?
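
For reference, here is roughly how I am measuring it (a minimal sketch; the model directory and tokens below are placeholders, not my actual setup):

```python
import time

import ctranslate2

# Placeholder model directory and pre-tokenized SentencePiece inputs.
translator = ctranslate2.Translator("ende_ctranslate2", device="cpu")
source = [["▁Hello", "▁world", "!"]]
prefix = [["▁Hallo", "▁Welt"]]

start = time.perf_counter()
translator.translate_batch(source)
print("no prefix:   %.3fs" % (time.perf_counter() - start))

start = time.perf_counter()
translator.translate_batch(source, target_prefix=prefix)
print("with prefix: %.3fs" % (time.perf_counter() - start))
```

With the prefix supplied, I would expect the second call to skip the generation steps for those tokens.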

Thanks, Simon

minhthuc2502 commented 5 months ago

If you specify target_prefix, the prefix tokens are decoded in a single step, and the remaining tokens are then generated one by one in the following steps. Without target_prefix, every token is generated one by one. In theory, it should run faster with target_prefix. Could you test with a long prefix?
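
For example, something like the following rough sketch (the model directory and tokens are placeholders), where the prefix covers most of the decoding length:

```python
import time

import ctranslate2

# Placeholder model directory and dummy tokens.
translator = ctranslate2.Translator("ende_ctranslate2", device="cpu")
source = [["▁Hello", "▁world", "!"]]
long_prefix = [["▁token"] * 256]  # long prefix: should be decoded in a single step

start = time.perf_counter()
# Force a long decode without a prefix, for a fair comparison.
translator.translate_batch(source, min_decoding_length=300, max_decoding_length=300)
print("no prefix:   %.3fs" % (time.perf_counter() - start))

start = time.perf_counter()
translator.translate_batch(source, target_prefix=long_prefix, max_decoding_length=300)
print("long prefix: %.3fs" % (time.perf_counter() - start))
```

If the prefix is really decoded in one step, the second call should be much faster than decoding 300 tokens one by one.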

SimonBenhamou commented 5 months ago

I did, and could reproduce the fact that