Open colingoldberg opened 5 years ago
Sorry, the demo models are not currently available for download. We'll look into it, but there might be some compatibility issues with the current version.
However, most of the models can easily be retrained with the Morpho Challenge data sets. For example, the unsupervised English model should be essentially the same as the output of these commands:
wget http://morpho.aalto.fi/events/morphochallenge2009/data/wordlist.eng.gz
morfessor-train -s unsup_model.bin --traindata-list wordlist.eng.gz
And the English semi-supervised model (based on the parameters shown on the demo page):
wget http://morpho.aalto.fi/events/morphochallenge2010/data/goldstd_trainset.segmentation.eng
morfessor-train -s semisup_model.bin --traindata-list wordlist.eng.gz -A goldstd_trainset.segmentation.eng -w 0.83 -W 361.32
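Once trained, the model can be loaded and used from Python roughly like this (a minimal sketch using the Morfessor 2.0 Python API; note that viterbi_segment returns a (morphs, log-probability) tuple):
import morfessor
# Load the binary model written by morfessor-train -s
io = morfessor.MorfessorIO()
model = io.read_binary_model_file("unsup_model.bin")
# Segment a word; it does not need to appear in the training data
morphs, logp = model.viterbi_segment("uncertainty")
print(morphs)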
Could you make a developer-friendly interface and pre-trained models available, built from an open data source such as Wikipedia dumps? There's a clear use case for off-the-shelf decompounding and morphological splitting tools, but Morfessor doesn't ship trained models, so it's not convenient enough for developers to try. Right now, even if you know how to use Morfessor, there's rarely time to train and tune the models for a project where it could be useful.
Ideally, splitting with Morfessor would be as easy as this:
import morfessor
morfessor_model = morfessor.read_model("finnish_model.pkl")
morfessor_model.split("Lentokonesuihkuturbiinimoottoriapumekaanikkoaliupseerioppilas")
Better yet, follow the scikit-learn API, so that the model is accessed through .fit() and .transform() methods. That would make it accessible to a much wider community.
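A rough sketch of what such a wrapper could look like, assuming Morfessor 2.0's current Python API (BaselineModel, MorfessorIO.read_corpus_list_file, train_batch, viterbi_segment); the MorfessorSegmenter class itself is hypothetical:
import morfessor

class MorfessorSegmenter:
    # Hypothetical scikit-learn-style wrapper around a Morfessor baseline model.
    def __init__(self):
        self._model = morfessor.BaselineModel()

    def fit(self, wordlist_path):
        # Train from a word-list file ("count word" or "word" per line),
        # like the wordlist.eng.gz used with morfessor-train above.
        io = morfessor.MorfessorIO()
        self._model.load_data(io.read_corpus_list_file(wordlist_path))
        self._model.train_batch()
        return self

    def transform(self, words):
        # Return the Viterbi segmentation (list of morphs) for each word.
        return [self._model.viterbi_segment(w)[0] for w in words]

# Example usage: train on the word list from above and segment two words
segmenter = MorfessorSegmenter().fit("wordlist.eng.gz")
print(segmenter.transform(["uncertainty", "airports"]))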
I would suggest treating model files like compiled executables. Store the open-source-licensed source data for an individual model in a single GitHub repository (possibly using git-lfs to reduce disk usage across updates), add a Makefile or similar for training the model automatically, and attach the model binaries to each source-data release. If multiple models share source data, you could keep all of the source data in one GitHub repository.
Hi,
I was wondering if the English trained model behind your demo is available for others to use. I hope this is the case.
Colin Goldberg