Thanks for the cool model and repo.
I'm new to Python and PyTorch. I'm using Ubuntu 18.04 and Python 3.6.8.
I have inference working fine with the 512 and 256 models, entering a prompt on a local 8 GB GPU.
I would appreciate it if you could suggest a code change that would allow pytorch_generation.py to read an input text file line by line, instead of requiring each prompt to be entered manually.
The format of each line in the text file would be the same as a prompt.
For example:
Books This is the first line.
Books This is the second line.
Books This is the third line.
etc.
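In case it helps illustrate what I mean, here is a minimal sketch of the kind of loop I'm after. The `generate` function here is just a placeholder for whatever pytorch_generation.py actually calls per prompt, and the file path is hypothetical:

```python
def generate(prompt):
    # Placeholder standing in for the real model call in pytorch_generation.py;
    # replace with the script's actual generation entry point.
    return "generated text for: " + prompt

def run_prompts(path):
    """Read a prompt file line by line and generate output for each prompt."""
    results = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            prompt = line.strip()
            if not prompt:  # skip blank lines
                continue
            results.append(generate(prompt))
    return results
```

Something along these lines, adapted to how the script builds its prompt and runs the model, is what I'm hoping for.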
Cheers.