amirj opened 8 years ago
The way the pretrained BookCorpus model was created differs from what the training code produces, at least using the code in training/tools.py. The pretrained model contains both 'utable' and 'btable', but when you train a model yourself it contains only 'table'. I'm not sure why it was coded that way, and it is frustrating, because all their other methods depend on both tables; it takes some acrobatics to make things work. I haven't tested any of their experiments, but I did write interface code that mostly works with the functionality available given the embedding space. The code is here: https://github.com/danielricks/penseur
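One workaround (a minimal sketch, not the actual penseur code; the dict-like layout of the model is an assumption) is to alias the self-trained model's single 'table' under both keys that code written against the pretrained model expects:

```python
def alias_tables(model):
    """Expose a self-trained skip-thoughts model's single word-embedding
    lookup ('table') under the 'utable' and 'btable' keys that code
    written against the pretrained BookCorpus model expects.
    Mutates and returns the same dict; all three keys share storage,
    so the uni-skip and bi-skip code paths see identical embeddings."""
    if 'table' in model and 'utable' not in model:
        model['utable'] = model['table']
    if 'table' in model and 'btable' not in model:
        model['btable'] = model['table']
    return model

# Toy example with a fake two-word vocabulary:
fake_model = {'table': {'hello': [0.0] * 620, 'world': [1.0] * 620}}
alias_tables(fake_model)
```

Note this only papers over the interface: a self-trained model has no separate uni-skip and bi-skip encoders, so results will differ from the pretrained combined model.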
I have trained a model according to the instructions here. I can load the model using the following commands:
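For reference, loading a self-trained model typically looks something like the sketch below, using the trainer-side helpers in training/tools.py (the paths to the saved model and dictionary are hard-coded at the top of tools.py, and the embed_map step assumes you downloaded the GoogleNews word2vec vectors for vocabulary expansion, as in the training README — adjust to your setup):

```python
def load_trained_model():
    """Sketch of loading a self-trained skip-thoughts model via
    training/tools.py. Edit the model/dictionary paths at the top of
    tools.py first, and run this from the training/ directory."""
    import tools  # training/tools.py
    embed_map = tools.load_googlenews_vectors()  # word2vec vectors for vocab expansion
    model = tools.load_model(embed_map)
    return model
```

After loading, `tools.encode(model, sentences)` produces sentence vectors with the trainer-side encoder.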
After that, I want to run an experiment (for example, Semantic-Relatedness). Here is the output:
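A likely cause (an assumption on my part, since the output is not shown) is that the evaluation scripts, such as the semantic-relatedness script eval_sick.py, encode sentences through the `skipthoughts` module, which expects the pretrained model's 'utable'/'btable' layout rather than a trainer-side model. One workaround is a small adapter that routes the evaluation code's encode calls to the trainer-side encoder; a minimal sketch, where the monkeypatching target and the assumption that the script only uses skipthoughts.encode are mine:

```python
from types import SimpleNamespace

def make_skipthoughts_shim(encode_fn):
    """Return an object that can stand in for the `skipthoughts` module
    in evaluation code that only calls skipthoughts.encode(model, X).
    encode_fn should have the signature encode(model, sentences), like
    the encode helper in training/tools.py."""
    return SimpleNamespace(encode=encode_fn)

# Hypothetical usage (untested; assumes eval_sick.py imports
# skipthoughts and only calls skipthoughts.encode):
#   import tools, eval_sick   # tools is training/tools.py
#   eval_sick.skipthoughts = make_skipthoughts_shim(tools.encode)
#   eval_sick.evaluate(model, evaltest=True)
```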
Would you please help me to run experiments on the trained model?