nitishgupta / nmn-drop

Neural Module Network for Reasoning over Text, ICLR 2020

Accelerate predictors #7

Closed jzbjyb closed 4 years ago

jzbjyb commented 4 years ago

Thanks for this interesting work and the released code! I am currently using drop_demo_predictor to predict answers on my dataset of <passage, question> pairs, but it seems to be quite slow (even with GPUs). I suspect this is because predict_json processes only one example at a time (not in batches), and because of the preprocessing that converts each raw passage/question pair into your internal format. Any ideas for improving the speed?

nitishgupta commented 4 years ago

You can pass the --batch-size argument to the allennlp predict command to process multiple examples at a time. The preprocessing itself cannot really be sped up.
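
If you are calling the predictor from Python rather than the CLI, here is a minimal sketch of batched prediction through AllenNLP's generic Predictor API. The archive path and the example inputs are placeholders, and predict_batch_json is the standard AllenNLP batch entry point, not anything specific to this repo:

```python
# A minimal sketch of batched prediction with AllenNLP's Predictor API.
# "model.tar.gz" is a placeholder for the trained model archive; the
# predictor name is the one discussed in this thread.
from allennlp.predictors.predictor import Predictor

predictor = Predictor.from_path(
    "model.tar.gz", predictor_name="drop_demo_predictor"
)

# Raw <passage, question> pairs; contents here are dummy placeholders.
examples = [
    {"passage": "The river flows north ...",
     "question": "Which way does the river flow?"},
]

# predict_batch_json builds the Instances and runs one forward pass per
# batch, instead of one predict_json call (one forward pass) per example.
batch_size = 16
for i in range(0, len(examples), batch_size):
    outputs = predictor.predict_batch_json(examples[i : i + batch_size])
```

The command-line equivalent would be something like allennlp predict model.tar.gz input.jsonl --predictor drop_demo_predictor --batch-size 16 (flag names from standard AllenNLP; check your installed version).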

How fast/slow is it running for you?

jzbjyb commented 4 years ago

I am using the predict_batch_instance function of the drop_demo_predictor predictor with a batch size of 16, and it takes about 0.2s per example to preprocess (_json_to_instance) and 0.06s per example to run the model (predict_batch_instance). I guess the best practice would be to convert my dataset into your format (tokenize.py) and run evaluate.sh instead of predict.
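
For reference, a rough sketch of how this per-example breakdown can be measured, assuming a predictor loaded as in the snippet above (_json_to_instance is the predictor's preprocessing hook and predict_batch_instance its batched forward pass, both from the AllenNLP Predictor base class):

```python
import time

from allennlp.predictors.predictor import Predictor

# Placeholder archive path; predictor name taken from this thread.
predictor = Predictor.from_path(
    "model.tar.gz", predictor_name="drop_demo_predictor"
)

# One batch of 16 dummy <passage, question> pairs.
examples = [
    {"passage": "The river flows north ...",
     "question": "Which way does the river flow?"}
] * 16

# Time the preprocessing (JSON -> Instance) separately, per example.
instances, t_pre = [], 0.0
for ex in examples:
    start = time.perf_counter()
    instances.append(predictor._json_to_instance(ex))
    t_pre += time.perf_counter() - start

# Time one batched forward pass over all the instances.
start = time.perf_counter()
outputs = predictor.predict_batch_instance(instances)
t_model = time.perf_counter() - start

print(f"preprocess: {t_pre / len(instances):.3f}s/example, "
      f"model: {t_model / len(instances):.3f}s/example")
```

If the preprocessing term dominates, as in the numbers above, doing that step once offline (tokenize.py) and reusing the result is where the savings are.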

nitishgupta commented 4 years ago

Yes, I would guess so. That should save a lot of preprocessing time.

nitishgupta commented 4 years ago

BTW, thank you for those timing numbers. I'm working on improving the code and will look into how I can speed it up.