A plugin for the GATE language technology framework for training and using machine learning models. Currently supports Mallet (MaxEnt, NaiveBayes, CRF, and others), LibSVM, Scikit-Learn, Weka, and DNNs through PyTorch and Keras.
The temporary annotation set LF_SEQ_TMP is not cleared before it is used in the LF_ApplyChuking PR, so when multiple application PRs are run, the annotations in the set accumulate. The set should be cleared before use and possibly also after use (though leaving it in place after use could help with debugging).
The set now always gets cleared before the model is applied, and it also gets cleared when the apply-chunking PR finishes, except when the "debug" parameter is true.
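A minimal sketch of the clearing logic described above. This is not the plugin's actual code: the real implementation operates on a GATE AnnotationSet, while here a plain `Set<String>` stands in for it, and the method and field names are hypothetical.

```java
import java.util.HashSet;
import java.util.Set;

public class TmpSetClearing {

    // Stand-in for the LF_SEQ_TMP annotation set (hypothetical model,
    // not the real GATE AnnotationSet).
    static final Set<String> lfSeqTmp = new HashSet<>();

    // Sketch of one model application run.
    static void applyModel(boolean debug) {
        // Always clear before use, so annotations left over from a
        // previous application PR cannot accumulate.
        lfSeqTmp.clear();

        // Stand-in for annotations written while applying the model.
        lfSeqTmp.add("sequence-annotation");

        // Clear after use as well, unless the "debug" parameter is true,
        // in which case the annotations are kept for inspection.
        if (!debug) {
            lfSeqTmp.clear();
        }
    }

    public static void main(String[] args) {
        applyModel(false);
        System.out.println(lfSeqTmp.size()); // 0: cleared after the run
        applyModel(true);
        System.out.println(lfSeqTmp.size()); // 1: kept for debugging
    }
}
```

With `debug` false the temporary set is empty after each run; with `debug` true its contents survive the run, matching the behavior described above.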