smilli / berkeleylm

Automatically exported from code.google.com/p/berkeleylm

Creating and reading ARPA files is (1) locale-dependent, (2) seems to have problems with multiple tabs in the text, and (3) seems to have some problem with the lack of newlines #9

Closed by GoogleCodeExporter 9 years ago

GoogleCodeExporter commented 9 years ago
I'm working with text files extracted from the "Reuters-21578, Distribution 1.0" 
dataset and I have had trouble creating and then reading an ARPA file from it.
1. The code seems to depend on "." being used as the decimal separator, so 
using a German locale results in this error:
Exception in thread "main" java.lang.NumberFormatException: For input string: 
"-2,624282"
    at sun.misc.FloatingDecimal.readJavaFormatString(Unknown Source)
    at java.lang.Float.parseFloat(Unknown Source)
    at edu.berkeley.nlp.lm.io.ArpaLmReader.parseLine(ArpaLmReader.java:176)
    at edu.berkeley.nlp.lm.io.ArpaLmReader.parseNGrams(ArpaLmReader.java:148)
    at edu.berkeley.nlp.lm.io.ArpaLmReader.parse(ArpaLmReader.java:78)
    at edu.berkeley.nlp.lm.io.ArpaLmReader.parse(ArpaLmReader.java:18)
    at edu.berkeley.nlp.lm.io.LmReaders.firstPassCommon(LmReaders.java:549)
    at edu.berkeley.nlp.lm.io.LmReaders.firstPassArpa(LmReaders.java:526)
    at edu.berkeley.nlp.lm.io.LmReaders.readContextEncodedLmFromArpa(LmReaders.java:136)
    at edu.berkeley.nlp.lm.io.LmReaders.readContextEncodedLmFromArpa(LmReaders.java:131)
    at edu.berkeley.nlp.lm.io.LmReaders.readContextEncodedLmFromArpa(LmReaders.java:112)
    at edu.berkeley.nlp.lm.io.LmReaders.readContextEncodedLmFromArpa(LmReaders.java:108)
    at [...]
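For reference, `Float.parseFloat` itself always expects "." as the decimal separator regardless of locale, so a comma such as "-2,624282" most likely entered the file when the model was *written* with a locale-sensitive formatter. A minimal sketch of a locale-safe round trip (illustrative only, not BerkeleyLM's actual code):

```java
import java.util.Locale;

public class LocaleSafeFloats {
    // Write with an explicit locale so the decimal separator is always '.',
    // no matter what the JVM's default locale is.
    static String format(float logProb) {
        return String.format(Locale.ROOT, "%f", logProb);
    }

    // Float.parseFloat is locale-independent and always expects '.'.
    static float parse(String s) {
        return Float.parseFloat(s);
    }

    public static void main(String[] args) {
        Locale.setDefault(Locale.GERMANY);   // simulate a German JVM
        String s = format(-2.624282f);
        System.out.println(s);               // "-2.624282", not "-2,624282"
        System.out.println(parse(s));
    }
}
```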

2. Using text files with multiple tabs results in this exception:
Exception in thread "main" java.lang.StringIndexOutOfBoundsException: String 
index out of range: -4
    at java.lang.String.substring(Unknown Source)
    at edu.berkeley.nlp.lm.io.ArpaLmReader.parseNGram(ArpaLmReader.java:200)
    at edu.berkeley.nlp.lm.io.ArpaLmReader.parseLine(ArpaLmReader.java:172)
    at edu.berkeley.nlp.lm.io.ArpaLmReader.parseNGrams(ArpaLmReader.java:148)
    at edu.berkeley.nlp.lm.io.ArpaLmReader.parse(ArpaLmReader.java:78)
    at edu.berkeley.nlp.lm.io.ArpaLmReader.parse(ArpaLmReader.java:18)
    at edu.berkeley.nlp.lm.io.LmReaders.firstPassCommon(LmReaders.java:549)
    at edu.berkeley.nlp.lm.io.LmReaders.firstPassArpa(LmReaders.java:526)
    at edu.berkeley.nlp.lm.io.LmReaders.readContextEncodedLmFromArpa(LmReaders.java:136)
    at edu.berkeley.nlp.lm.io.LmReaders.readContextEncodedLmFromArpa(LmReaders.java:131)
    at edu.berkeley.nlp.lm.io.LmReaders.readContextEncodedLmFromArpa(LmReaders.java:112)
    at edu.berkeley.nlp.lm.io.LmReaders.readContextEncodedLmFromArpa(LmReaders.java:108)
    at [...]
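The substring error suggests the parser assumes exactly one tab between the log-probability, the n-gram, and the optional backoff weight. A hypothetical, more tolerant parse (the `parseNGramLine` helper and its return layout are illustrative assumptions, not BerkeleyLM's API) would split on any run of whitespace:

```java
public class ArpaLineParser {
    // Tolerant parse of one ARPA n-gram line of the form
    // "<logprob> <w1> ... <wn> [<backoff>]", allowing repeated tabs/spaces.
    // Returns {Float logProb, String[] ngram, Float backoff}.
    static Object[] parseNGramLine(String line, int order) {
        String[] tokens = line.trim().split("\\s+");   // any whitespace run
        float logProb = Float.parseFloat(tokens[0]);
        String[] ngram = new String[order];
        System.arraycopy(tokens, 1, ngram, 0, order);
        // The backoff weight is optional; default it to 0.0 when absent.
        float backoff = tokens.length > order + 1
                ? Float.parseFloat(tokens[order + 1]) : 0.0f;
        return new Object[] { logProb, ngram, backoff };
    }

    public static void main(String[] args) {
        // Double tab and double space between tokens: still parses.
        Object[] parsed = parseNGramLine("-2.624282\t\tthe  quick\t-0.30103", 2);
        System.out.println(parsed[0] + " " + ((String[]) parsed[1])[1] + " " + parsed[2]);
    }
}
```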

3. Stripping all duplicate whitespace characters and replacing them with a 
single space resulted in another error:
Exception in thread "main" java.lang.RuntimeException: Hash map is full with 
100 keys. Should never happen.
    at edu.berkeley.nlp.lm.map.ExplicitWordHashMap.put(ExplicitWordHashMap.java:56)
    at edu.berkeley.nlp.lm.map.HashNgramMap.putHelpWithSuffixIndex(HashNgramMap.java:283)
    at edu.berkeley.nlp.lm.map.HashNgramMap.putWithOffsetAndSuffix(HashNgramMap.java:247)
    at edu.berkeley.nlp.lm.io.KneserNeyLmReaderCallback.addNgram(KneserNeyLmReaderCallback.java:171)
    at edu.berkeley.nlp.lm.io.KneserNeyLmReaderCallback.call(KneserNeyLmReaderCallback.java:148)
    at edu.berkeley.nlp.lm.io.KneserNeyLmReaderCallback.call(KneserNeyLmReaderCallback.java:37)
    at edu.berkeley.nlp.lm.io.TextReader.countNgrams(TextReader.java:80)
    at edu.berkeley.nlp.lm.io.TextReader.readFromFiles(TextReader.java:53)
    at edu.berkeley.nlp.lm.io.TextReader.parse(TextReader.java:47)
    at edu.berkeley.nlp.lm.io.LmReaders.createKneserNeyLmFromTextFiles(LmReaders.java:301)
    at [...]
I could work around this issue by adding a newline character at the start of 
each text file. 
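
The normalization described above (collapsing runs of whitespace to a single space, plus making sure each sentence ends with a newline) could be sketched as follows. This is only an assumed preprocessing step; it is not guaranteed to avoid the hash-map error, whose root cause is unclear:

```java
public class NormalizeCorpus {
    // Collapse each run of whitespace to one space, drop empty lines, and
    // terminate every remaining line with '\n'.
    static String normalize(String text) {
        StringBuilder out = new StringBuilder();
        for (String line : text.split("\\r?\\n")) {
            String cleaned = line.trim().replaceAll("\\s+", " ");
            if (!cleaned.isEmpty()) out.append(cleaned).append('\n');
        }
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.print(normalize("a\t\tb\n\n  c  d "));
    }
}
```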

I'm creating and reading the model with the following code:

    import java.io.File;
    import java.io.IOException;
    import java.util.LinkedList;
    import java.util.List;
    import java.util.Locale;

    import edu.berkeley.nlp.lm.ConfigOptions;
    import edu.berkeley.nlp.lm.ContextEncodedNgramLanguageModel;
    import edu.berkeley.nlp.lm.StringWordIndexer;
    import edu.berkeley.nlp.lm.io.ArpaLmReader;
    import edu.berkeley.nlp.lm.io.LmReaders;

    static void createModel(File dir, File arpa) {
        // Collect every file in the corpus directory.
        List<String> files = new LinkedList<>();
        for (File file : dir.listFiles())
            files.add(file.getAbsolutePath());
        final StringWordIndexer wordIndexer = new StringWordIndexer();
        wordIndexer.setStartSymbol(ArpaLmReader.START_SYMBOL);
        wordIndexer.setEndSymbol(ArpaLmReader.END_SYMBOL);
        wordIndexer.setUnkSymbol(ArpaLmReader.UNK_SYMBOL);
        // Train a trigram Kneser-Ney model and write it out in ARPA format.
        LmReaders.createKneserNeyLmFromTextFiles(files, wordIndexer, 3, arpa, new ConfigOptions());
    }

    public static void main(String[] args) throws IOException {
        Locale.setDefault(Locale.US);
        File arpa = new File([...]);
        File directory = new File([...]);
        createModel(directory, arpa);
        ContextEncodedNgramLanguageModel<String> lm = LmReaders.readContextEncodedLmFromArpa(arpa.getAbsolutePath());
    }

Original issue reported on code.google.com by yannicks...@googlemail.com on 16 Oct 2012 at 8:52

GoogleCodeExporter commented 9 years ago
Oh, sorry, you can look at 
http://code.google.com/p/berkeleylm/source/detail?r=519 to see what the changes 
are. 

Original comment by adpa...@gmail.com on 17 Oct 2012 at 3:31

GoogleCodeExporter commented 9 years ago
Sorry, don't know what happened, but I'm traveling right now and 
debugging/editing is a challenge. I'm not sure what's causing #3, but #1 and #2 
are easy fixes. I submitted a change (revision 519) -- you can paste the 
changes in manually since only 3 lines are touched. 

Thanks for filing the report!

Original comment by adpa...@gmail.com on 17 Oct 2012 at 3:35

GoogleCodeExporter commented 9 years ago
Marking as fixed.

Original comment by adpa...@gmail.com on 9 Feb 2013 at 5:05