[Open] VibhuJawa opened this issue 4 years ago
Is your feature request related to a problem? Please describe.
Currently, if a tokenized string is shorter than `max_length`, the output is padded with 0s. So if `max(tokenized string lengths) < max_length`, we pay a performance penalty, as the compute time for Transformer models is often proportional to the sequence length of the input. HuggingFace's tokenizer defaults to padding to the max input sequence length if `max_length` and `pad_to_max_length` are not provided. We should try to follow that; this is especially beneficial for the streaming cases that feature https://github.com/rapidsai/cudf/issues/5868 will help. See the example below:
Padding to max sequence length (Proposed Default Behavior)

Padding to `max_length` (Current Default Behavior)
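Since the original example output did not survive here, below is a minimal numpy sketch of the two behaviors (the token values and `max_length` are made up for illustration; cuDF's actual tokenizer returns cupy arrays):

```python
import numpy as np

# Token ids for three tokenized strings of different lengths (hypothetical values).
sequences = [[101, 2023, 102], [101, 2003, 1037, 3231, 102], [101, 102]]
max_length = 64  # user-supplied max_length

# Current default behavior: every row is padded with 0s out to max_length.
current = np.zeros((len(sequences), max_length), dtype=np.int64)
for i, seq in enumerate(sequences):
    current[i, : len(seq)] = seq
print(current.shape)  # (3, 64)

# Proposed default behavior: pad only to the longest sequence actually present,
# i.e. min(max_length, max(tokenized string lengths)), which here is 5.
batch_len = min(max_length, max(len(s) for s in sequences))
proposed = np.zeros((len(sequences), batch_len), dtype=np.int64)
for i, seq in enumerate(sequences):
    proposed[i, : len(seq)] = seq
print(proposed.shape)  # (3, 5)
```

The proposed default shrinks the tensor the downstream model has to process from `(3, 64)` to `(3, 5)` in this toy case.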
Related implications:

a. We might have to switch from returning one-dimensional cupy arrays to two-dimensional arrays for token-ids and attention masks. We already do this reshape in most workflow cases, so it should not carry a performance penalty.
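The 1-D to 2-D switch is just a view change, as this sketch shows (numpy used in place of cupy; the shapes and values are illustrative only):

```python
import numpy as np

nrows, max_length = 4, 8

# Today the tokenizer hands back flat 1-D arrays (cupy in cudf; numpy here).
flat_token_ids = np.arange(nrows * max_length, dtype=np.int64)
flat_attention_mask = (flat_token_ids % max_length < 3).astype(np.int64)

# Returning 2-D arrays of shape (nrows, max_length) is a zero-copy reshape,
# which is why it should not introduce a performance penalty.
token_ids = flat_token_ids.reshape(nrows, max_length)
attention_mask = flat_attention_mask.reshape(nrows, max_length)
print(token_ids.shape, attention_mask.shape)  # (4, 8) (4, 8)
```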
Describe alternatives you've considered
Currently, a user can do the tokenization twice before the `to_dlpack` call (I do the above for gpu-bdb q27 HF). As most of the time is spent in `to_dlpack`, this workaround should not have big performance implications.

CC: @raykallen, @randerzander, @davidwendt