kentonl / e2e-coref

End-to-end Neural Coreference Resolution
Apache License 2.0

Faster inference #33

Closed nitishgupta closed 6 years ago

nitishgupta commented 6 years ago

Is there a way to run considerably faster inference, with only a small tradeoff in accuracy, without retraining the model? For example, are there any parameters I can change in experiments.conf?

kentonl commented 6 years ago

Yes, you can lower max_top_antecedents and top_span_ratio at test time without retraining, since these are essentially just beam sizes.
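These overrides go in experiments.conf (a HOCON file). A minimal sketch of the kind of change meant here; the exact default values below are illustrative and may differ from the released config:

```
# in the experiment section you run at test time
max_top_antecedents = 25   # smaller beam over candidate antecedents
top_span_ratio = 0.2       # keep fewer top spans per word of the document
```

Lowering either value shrinks the pruning beams, trading a little recall for speed.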

You can also reduce the max_span_width without retraining, but that will require some hacking here: https://github.com/kentonl/e2e-coref/blob/master/coref_model.py#L365, since it will try to restore a variable from the checkpoint with the original max_span_width (you can simply hard code the variable shape here).
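One way to picture the hack described above (all names and shapes here are hypothetical, not the actual variables in coref_model.py): keep the span-width embedding at its checkpointed shape so the restore succeeds, then slice it down to the smaller width for inference.

```python
import numpy as np

# hypothetical stand-in for the checkpointed span-width embedding,
# created with the ORIGINAL max_span_width used at training time
ORIGINAL_MAX_SPAN_WIDTH = 30
NEW_MAX_SPAN_WIDTH = 10
feature_size = 20
checkpoint_embedding = np.random.randn(ORIGINAL_MAX_SPAN_WIDTH, feature_size)

# hard-code the variable at the checkpointed shape (so restoring works),
# then slice to the reduced width actually used at test time
span_width_embedding = checkpoint_embedding[:NEW_MAX_SPAN_WIDTH]

assert span_width_embedding.shape == (NEW_MAX_SPAN_WIDTH, feature_size)
```

In the real TensorFlow model the same idea applies: declare the variable with the original width so the checkpoint restores cleanly, and only index into the first `max_span_width` rows.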

kentonl commented 6 years ago

You can also try setting coref_depth to 1. The effect would be similar to taking a CRF model, dropping the higher-order factors, and just using the unary potentials to make predictions.
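In experiments.conf this is a one-line override (the value shown for the released config is my recollection, not verified):

```
coref_depth = 1   # released config uses a higher depth; 1 disables higher-order refinement
```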

nitishgupta commented 6 years ago

Thanks, that was useful. Do you think setting coarse_to_fine = false is a good idea?

kentonl commented 6 years ago

Probably not; the coarse-to-fine inference makes the model both faster and more accurate.