I have trained Dictionary Segmentation, Dictionary Body Segmentation, and Lexical Entry Segmentation with my own data (see this post). When running the Lexical Entry Segmentation service, I get an error when parsing a PDF that contains entries longer than two pages; the same error does not occur when parsing a PDF without such long entries:
Error encountered while requesting the server.
[GENERAL] Model file does not exists or a directory: /grobid/grobid-dictionaries/../grobid-home/models/form/model.wapiti
Full error log attached: errorlog.txt
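As a quick sanity check, something like the sketch below could list which trained model.wapiti files are actually present under grobid-home/models (the models path is taken from the error message above; the script itself is only illustrative and not part of grobid or grobid-dictionaries, so the directory may need adjusting):

import os

# Path derived from the error message; adjust to the actual grobid-home location.
models_dir = "/grobid/grobid-home/models"

# Report, for each model subdirectory, whether a trained model.wapiti file exists.
for name in sorted(os.listdir(models_dir)):
    model_file = os.path.join(models_dir, name, "model.wapiti")
    if os.path.isdir(os.path.join(models_dir, name)):
        status = "present" if os.path.isfile(model_file) else "MISSING"
        print(f"{name}: {status}")

If the form model reported in the error is listed as missing, that would at least confirm the failure is about an untrained/absent model rather than a path misconfiguration.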