The README mentions that CodeT5 is used for code autocompletion in VSCode. I wonder how to use CodeT5 without fine-tuning, as a language model, to complete code given some previous context?
Yes, CodeT5 can support completing a code span. Without fine-tuning, it is generally best suited to code span autocompletion given both the preceding and following contexts, since that setting aligns with the span-denoising objective used in pre-training.
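For reference, here is a minimal sketch of that zero-shot infilling setup, assuming the public Hugging Face checkpoint `Salesforce/codet5-base`; the input snippet and sentinel placement are illustrative:

```python
# Minimal sketch of zero-shot span infilling with CodeT5 via Hugging Face
# Transformers. The checkpoint name is the public pre-trained model; the
# example input is illustrative.
from transformers import RobertaTokenizer, T5ForConditionalGeneration

tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

# Mark the span to complete with a sentinel token; the code before and
# after the sentinel supplies the bidirectional context.
text = "def greet(user): print(f'hello <extra_id_0>!')"
input_ids = tokenizer(text, return_tensors="pt").input_ids

# Pre-training spans are short (about 3 tokens on average), so a small
# max_length is usually enough.
generated_ids = model.generate(input_ids, max_length=8)
print(tokenizer.decode(generated_ids[0], skip_special_tokens=True))
```

The pre-trained model is trained to emit the tokens that fill each sentinel, so decoding the output recovers the predicted span for `<extra_id_0>`.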
Thanks for the quick response. I wonder how long (say, in number of tokens) the completed code span usually is?
We have included these details in the paper: the average span length is 3 tokens (before subword tokenization).