Closed: ocramz closed this issue 2 years ago
This can happen if you didn't change `max_length_generation`: by default it is 2048, but your model's context size is 512 and codeparrot's is 1024, hence the index-out-of-range error.
If that doesn't fix it, can you share your execution command? It works for me.
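To see the mismatch quickly, here is a rough sketch using plain `transformers` (not harness code; the repo ids are just what I assume you are passing as the model):

```python
# Rough check of each model's context window vs. the harness default of 2048.
# Repo ids here are assumptions; adjust to whatever you pass as the model.
from transformers import AutoConfig

max_length_generation = 2048  # current default

for name in ["hf-internal-testing/tiny-random-gpt2", "codeparrot/codeparrot"]:
    ctx = AutoConfig.from_pretrained(name).max_position_embeddings
    status = "too large" if max_length_generation > ctx else "ok"
    print(f"{name}: context size {ctx}, max_length_generation={max_length_generation} is {status}")
```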
Thank you @loubnabnl, that is indeed the case.
I wonder if we could remove this particular footgun by making `max_length_generation` not a free parameter but rather a function of the specific model used?
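Roughly what I have in mind, as a sketch (the helper name and where it would hook in are hypothetical, not the harness's actual API):

```python
# Hypothetical safeguard: derive the effective generation length from the
# model's own config instead of trusting the CLI value blindly.
import warnings


def effective_max_length(model, requested: int) -> int:
    """Clamp the requested max_length to the model's context window."""
    ctx = getattr(model.config, "max_position_embeddings", None)
    if ctx is not None and requested > ctx:
        warnings.warn(
            f"max_length_generation={requested} exceeds the model's context "
            f"size {ctx}; clamping to {ctx}"
        )
        return ctx
    return requested
```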
Actually, you sometimes don't need to use the model's whole context size for this parameter, which speeds up generation: for the HumanEval and MBPP benchmarks the prompts and their solutions are usually short, so `max_length_generation` doesn't need to be more than 512.
But we can reduce the default value to 1024 or 512.
The call to `.generate()` in `complete_code()` in `utils.py` seems to be mis-configured, since it produces the stack trace below. Here I use `model='hf-internal-testing/tiny-random-gpt2'` (but codeparrot fails in the same way), and `allow-code-execution=True`.
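For completeness, a standalone sketch that reproduces the same failure outside the harness, assuming the error comes from the position-embedding lookup once generation runs past the model's 512-token context:

```python
# Standalone repro sketch: tiny-random-gpt2 has a 512-token context, so
# forcing generation out to 2048 tokens indexes past the position embeddings.
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "hf-internal-testing/tiny-random-gpt2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("def hello_world():", return_tensors="pt")
try:
    # min_length keeps the model from stopping early at EOS
    model.generate(**inputs, max_length=2048, min_length=2048, do_sample=False)
except IndexError as err:
    print("IndexError, as in the stack trace:", err)
```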