Summary

A series of improvements:

- Split up the `prep_for_generation` operator so that we are no longer setting up the tokenizer and generating tokens in the same operator, outside of the generation loop.
- With this split, properly handle edge cases such as `max_length=1`, `max_new_tokens=0`, and `max_new_tokens=1`. Also added a condition requiring `max_length` to be greater than 0; otherwise a `ValueError` is raised.
- With this split, we no longer have to import the `process_output` operator in the `prep_for_generation` operator and can remove the operator's custom streaming code. The non-kv cache and kv cache pipelines now also share the same `output_schema`, simplifying generation logic.
- Added a condition to check kv cache capacity during prompt inference and exit if the capacity is reached before generation.
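The validation and capacity guard described above can be sketched roughly as follows. This is a minimal, hypothetical illustration — the function names, signatures, and error messages are placeholders, not the actual operators in the codebase:

```python
def prepare_generation(max_length, max_new_tokens=None):
    """Hypothetical setup step, split out of the token-generation loop.

    Enforces the rule described above: max_length must be > 0.
    """
    if max_length is not None and max_length <= 0:
        raise ValueError(
            f"max_length must be greater than 0, received {max_length}"
        )
    # Edge cases like max_length=1 or max_new_tokens=0/1 remain valid:
    # the generation loop simply runs for at most one step (or not at all)
    # after the prompt is processed.
    tokens_to_generate = (
        max_new_tokens if max_new_tokens is not None else max_length - 1
    )
    return max(tokens_to_generate, 0)


def run_prefill(prompt_tokens, kv_cache_capacity):
    """Hypothetical prompt-inference step with the capacity guard:
    exit before generation if the kv cache is already full."""
    if len(prompt_tokens) >= kv_cache_capacity:
        raise RuntimeError(
            "kv cache capacity reached during prompt inference; "
            "cannot continue to generation"
        )
    return prompt_tokens  # stand-in for the real prefill output
```

Splitting the validation out this way keeps the error paths outside the generation loop, so both the kv cache and non-kv cache pipelines can share the same downstream logic.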
Testing

- Added new tests to validate the edge cases.
- Added a test to check that an error is raised when the cache capacity is filled during prefill.
- Updated the non-kv cache unit test to also compare logits.
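The edge-case and capacity tests above might be shaped like the following sketch. The `generate` function here is a toy stand-in for the real pipeline, used only to show the structure of the assertions; the real tests exercise the actual operators:

```python
def generate(prompt_tokens, kv_cache_capacity, max_length):
    """Toy stand-in for the generation pipeline (hypothetical)."""
    if max_length <= 0:
        raise ValueError("max_length must be greater than 0")
    if len(prompt_tokens) >= kv_cache_capacity:
        raise RuntimeError("kv cache capacity reached during prefill")
    return prompt_tokens[:max_length]


def test_max_length_one_returns_single_token():
    # Edge case: max_length=1 means no generation beyond the first token.
    assert generate([7], kv_cache_capacity=16, max_length=1) == [7]


def test_max_length_must_be_positive():
    try:
        generate([7], kv_cache_capacity=16, max_length=0)
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for max_length=0")


def test_capacity_filled_during_prefill_raises():
    try:
        generate(list(range(16)), kv_cache_capacity=16, max_length=4)
    except RuntimeError:
        pass
    else:
        raise AssertionError("expected RuntimeError when cache is full")
```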