swabhs / open-sesame

A frame-semantic parsing system based on a softmax-margin SegRNN.
Apache License 2.0

Using Open-Sesame To Parse Multiple Inputs? #24


EddieGao98 commented 5 years ago

I'm currently on a project that uses open-sesame to parse many text inputs at once. Calling open-sesame from the command line to predict on these sentences one by one has proved prohibitively slow, but most of the per-sentence time appears to come from model loading rather than from the actual prediction. Is there any way to keep open-sesame loaded and running continuously, so it can be queried for predictions without having to reload the models every time? I believe this was done for another SRL package, SEMAFOR: its website (currently down) appeared to run SEMAFOR on a separate server where the models stayed loaded, and to query that server for parses, avoiding the startup delay that calling open-sesame from the command line incurs. Is it possible to replicate that here?

silentrob commented 5 years ago

Yes, I have noticed this as well. Most of the startup time is spent loading the models into memory; once that is done, prediction itself should be quick. My plan was to wrap the models in a simple TCP/HTTP server and query against that.
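A minimal sketch of that load-once server idea, using only the Python standard library. Note that `load_models` and `predict` below are hypothetical stand-ins: open-sesame's real entry points (the scripts under `sesame/`) are structured differently, so you would need to refactor its loading and prediction code into callable functions to plug in here.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical stand-ins for open-sesame's model loading and prediction.
# The real code would load the trained SegRNN weights here instead.
def load_models():
    # Expensive step: done exactly once, at server startup.
    return {"loaded": True}

def predict(models, sentence):
    # Placeholder returning a dummy parse for the input sentence.
    return {"sentence": sentence, "frames": []}

# Loaded once when the module starts, then reused for every request.
MODELS = load_models()

class ParseHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the raw sentence from the request body.
        length = int(self.headers.get("Content-Length", 0))
        sentence = self.rfile.read(length).decode("utf-8")
        body = json.dumps(predict(MODELS, sentence)).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Clients POST a sentence and get a JSON parse back, with no
    # per-request model loading.
    HTTPServer(("127.0.0.1", 8000), ParseHandler).serve_forever()
```

With this running, each `POST` pays only the prediction cost, which is the behavior the SEMAFOR demo server appeared to have.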