cedar33 opened this issue 5 years ago
Hi @cedar33,
Yes, there is an efficient way to predict. Have a look at the serve.py
script. It provides an example of how to use a predictor on a batch of size 1. You can create bigger batches as long as you pad the inputs. It will be efficient.
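For illustration, padding a batch could look something like the sketch below. It is not taken from serve.py; the helper name predict_batch and the b'&lt;pad&gt;' token are placeholders, and it assumes a predictor that takes 'words' and 'nwords' features as in the batch-of-1 example.

    def predict_batch(predict_fn, lines, pad=b'<pad>'):
        # Encode each line into a list of byte-string tokens
        batch = [[w.encode() for w in line.split()] for line in lines]
        nwords = [len(words) for words in batch]
        # Pad every sentence to the length of the longest one in the batch
        max_len = max(nwords)
        padded = [words + [pad] * (max_len - len(words)) for words in batch]
        # One predict_fn call for the whole batch instead of one call per sentence
        return predict_fn({'words': padded, 'nwords': nwords})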
If the batch size is 50, we read 50 lines from the file, so the serve.py code would look like this?
    from itertools import islice
    from pathlib import Path

    from tensorflow.contrib import predictor  # TF 1.x, as in serve.py


    def file_read_from_head(fname, nlines):
        # Read the first `nlines` lines and return them as a list
        with open(fname) as f:
            return list(islice(f, nlines))


    fifty_LINE = file_read_from_head('example1.txt', 50)  # read 50 lines from the file

    if __name__ == '__main__':
        export_dir = 'saved_model'
        subdirs = [x for x in Path(export_dir).iterdir()
                   if x.is_dir() and 'temp' not in str(x)]
        latest = str(sorted(subdirs)[-1])
        predict_fn = predictor.from_saved_model(latest)
        for LINE in fifty_LINE:
            words = [w.encode() for w in LINE.split()]
            nwords = len(words)
            predictions = predict_fn({'words': [words], 'nwords': [nwords]})
            print(predictions)

    # Loop over each line??? Or is
    # words = [w.encode() for w in fifty_LINE.split()] enough?
Thanks a lot, this is exactly what I need. I have learned so much from your code. Thank you again!
Much appreciated, brother.
Kind regards, Ahmad
I have over 60,000,000 sequences to analyze. When I predict this way, it takes more than 1 second per sequence. I want to speed it up; is that possible?
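One way to cut the per-sequence overhead, following the padded-batch suggestion above, is to load the model once and feed the file in padded chunks rather than one sentence at a time. This is an untested sketch: it assumes the same 'words'/'nwords' predictor interface, and chunk_size and the b'&lt;pad&gt;' token are arbitrary illustrative choices.

    from itertools import islice

    def predict_file(predict_fn, fname, chunk_size=256, pad=b'<pad>'):
        # Stream the file in chunks so the full 60M-line file never sits in memory
        with open(fname) as f:
            while True:
                lines = list(islice(f, chunk_size))
                if not lines:
                    break
                batch = [[w.encode() for w in line.split()] for line in lines]
                nwords = [len(words) for words in batch]
                max_len = max(nwords)
                padded = [words + [pad] * (max_len - len(words)) for words in batch]
                # One predict_fn call per chunk amortises the per-call overhead
                yield predict_fn({'words': padded, 'nwords': nwords})

Whether this reaches the throughput you need would have to be measured; sorting sentences by length before batching also tends to reduce wasted padding.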