Open enzoampil opened 4 years ago
Apparently, there's no need to create a `transformers.pipeline` — all you need is `model.generate()` to generate text from a language generation model. Amazing!

https://github.com/huggingface/transformers/issues/3728#issuecomment-611797988
However, there is still added value, since `transformers.pipeline` provides the seamless model-download experience and gives access to different models through the same API.
Note: make sure to use `GPT2LMHeadModel` instead of `GPT2Model`. It's the former that has the `generate` method.
Had a "no padding_id" warning from doing `model.generate()`, but learned from the issue response below that this is expected behaviour for GPT2:

https://github.com/huggingface/transformers/issues/2630
The current module still has a lot of unnecessary code carried over from the CLI implementation in `transformers`. These are better written as standalone functions that take default arguments from a separate config file (potentially `config.yaml`).

Potential output:
Create a new `GenerationPipeline` under HuggingFace's `transformers.pipeline` module.