slavpetrov / berkeleyparser

Automatically exported from code.google.com/p/berkeleyparser
GNU General Public License v2.0

"THE BERKELEY PARSER" release 1.1 migrated from Google Code to GitHub July 2015

This package contains the Berkeley Parser as described in

"Learning Accurate, Compact, and Interpretable Tree Annotation" Slav Petrov, Leon Barrett, Romain Thibaux and Dan Klein in COLING-ACL 2006

and

"Improved Inference for Unlexicalized Parsing" Slav Petrov and Dan Klein in HLT-NAACL 2007

If you use this code in your research and would like to acknowledge it, please cite one of the publications above. Note that the jar archive also contains all source files. For questions, please contact Slav Petrov (petrov@cs.berkeley.edu).

java -jar berkeleyParser.jar -gr <grammar>

The parser reads sentences from STDIN and writes the most likely parse tree of each to STDOUT. It can also produce k-best lists and parse in parallel using multiple threads. Several additional options are available (returning binarized and/or annotated trees, producing an image of the parse tree, tokenizing the input, running in fast/accurate mode, printing tree likelihoods, etc.). Starting the parser without supplying a grammar file will print a list of all options.
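For example, assuming an English grammar file named eng_sm6.gr (the file name and the exact flag spellings are illustrative; verify them against the option list the parser prints when started without a grammar), a 10-best parse of a pre-tokenized sentence might look like:

echo "The dog barks ." | java -jar berkeleyParser.jar -gr eng_sm6.gr -kbest 10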

java -cp berkeleyParser.jar edu.berkeley.nlp.PCFGLA.TreeLabeler -gr <grammar>

This tool reads parse trees from STDIN, annotates them as specified by the options, and prints them to STDOUT. You can use

java -cp berkeleyParser.jar edu.berkeley.nlp.PCFGLA.TreeScorer -gr <grammar>

to compute the (log-)likelihood of a parse tree.
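As a sketch, scoring a single gold tree could look like this; the eng_sm6.gr file name and the one-tree-per-line, PTB-style bracketing are assumptions:

echo "( (S (NP (DT The) (NN dog)) (VP (VBZ barks)) (. .)) )" | java -cp berkeleyParser.jar edu.berkeley.nlp.PCFGLA.TreeScorer -gr eng_sm6.gr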

To train a grammar, use the GrammarTrainer:

java -cp berkeleyParser.jar edu.berkeley.nlp.PCFGLA.GrammarTrainer -path <path to corpus> -out <output file for grammar>

To learn a grammar from trees that are contained in a single file, use the -treebank SINGLEFILE option, e.g.:

java -cp berkeleyParser.jar edu.berkeley.nlp.PCFGLA.GrammarTrainer -path <treebank file> -out <output file for grammar> -treebank SINGLEFILE

By default, the trainer reads in the WSJ training set and does six iterations of split, merge, and smooth. Intermediate grammar files are written to disk periodically, and you can expect the final grammar to be written to the output file after 15-20 hours.

The GrammarTrainer accepts a variety of options, which have been set to reasonable default values. Most of them should be self-explanatory, and you are encouraged to experiment with them. Note that since EM is a local search method, each run will produce slightly different results. Furthermore, the default settings prune away rules whose probability falls below a certain threshold, which greatly speeds up training but increases the variance. To train grammars on other training sets (e.g. for other languages), consult edu.berkeley.nlp.PCFGLA.Corpus.java and supply the correct -treebank option to the trainer.
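For instance, a hypothetical run on the Chinese treebank could look like the following; the path is illustrative, and the CHINESE treebank type should be verified against the values defined in Corpus.java:

java -cp berkeleyParser.jar edu.berkeley.nlp.PCFGLA.GrammarTrainer -path /path/to/ctb -out chn_sm6.gr -treebank CHINESE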

To test the performance of a grammar, you can use

java -cp berkeleyParser.jar edu.berkeley.nlp.PCFGLA.GrammarTester -path <path to corpus> -in <grammar file>

which parses a held-out section of the corpus with the given grammar and reports parsing accuracy.
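For example, with the illustrative file names used above:

java -cp berkeleyParser.jar edu.berkeley.nlp.PCFGLA.GrammarTester -path /path/to/wsj -in eng_sm6.gr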

The grammars are saved in a serialized binary format. To export a grammar to text files, use

java -cp berkeleyParser.jar edu.berkeley.nlp.PCFGLA.WriteGrammarToTextFile <grammar file> <outname>

This will create three text files: outname.grammar and outname.lexicon contain the respective rule scores, and outname.words should be used with the included Perl script to map words to their signatures.
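For example, exporting a grammar file eng_sm6.gr under the output prefix eng_sm6 (both names illustrative) would produce eng_sm6.grammar, eng_sm6.lexicon and eng_sm6.words:

java -cp berkeleyParser.jar edu.berkeley.nlp.PCFGLA.WriteGrammarToTextFile eng_sm6.gr eng_sm6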