I am currently using the `generate.py` script to paraphrase the sentences in a given input file. However, I was wondering: is there a way to use the code (fairseq_simplifier) in an API fashion, i.e. sentence by sentence, with the best model kept in RAM at all times so that output is produced faster?
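For reference, here is a minimal sketch of the pattern being asked about, assuming fairseq_simplifier exposes the same hub interface as upstream fairseq (`TransformerModel.from_pretrained`, which returns an object with a `translate()` method). The checkpoint and data paths below are placeholders, not paths from this repo:

```python
from functools import lru_cache

@lru_cache(maxsize=1)
def get_model():
    """Load the checkpoint once; subsequent calls reuse the in-RAM model."""
    # Deferred import so the model is only loaded on first use.
    # Assumes the upstream fairseq hub API; adjust for fairseq_simplifier.
    from fairseq.models.transformer import TransformerModel

    model = TransformerModel.from_pretrained(
        "checkpoints/",                     # placeholder: trained model directory
        checkpoint_file="checkpoint_best.pt",
        data_name_or_path="data-bin/",      # placeholder: binarized data dir with dicts
    )
    model.eval()
    return model

def paraphrase(sentence: str) -> str:
    # Runs preprocessing, beam search, and detokenization for one sentence,
    # without reloading the model between calls.
    return get_model().translate(sentence)
```

Note that upstream fairseq also ships a `fairseq-interactive` command that keeps the model loaded and reads sentences from stdin, which may already cover the sentence-by-sentence use case without any code changes.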