Unfortunately, no. The long documents already provide natural parallelism, and memory is quite constrained as it is. The AllenNLP implementation of the EMNLP 2017 version does support batching, but it lacks some of the bells and whistles needed for the best accuracy. If trading off accuracy for speed is what you need, I recommend checking it out: https://github.com/allenai/allennlp/blob/master/allennlp/models/coreference_resolution/coref.py