Closed: nestyme closed this issue 5 years ago.
@nestyme — can you paste the first few lines of your tokens file and lexicon file?
Thanks for the quick response!
Sure! I'm running the model on a Russian dataset, and I didn't manage to quickly figure out how to add UTF-8 support, so I transliterated everything in the Russian dataset, lexicon.txt, the LM, and tokens.txt into lowercase and uppercase English letters. So the first lines look like this:

1) tokens.txt (here `e` and `E` stand for different Russian letters):

```
a
b
v
g
d
e
E
Z
```

2) lexicon.txt:

```
a a
aalto a a l t o
aaron a a r o n
aatakni a a t a k n i
abbas a b b a s
abbatstvo a b b a t s t v o
```
Thanks!
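For anyone hitting the same symptom, a minimal sketch of a sanity check for files in this shape (a hypothetical helper script, not part of wav2letter; it assumes the whitespace-separated `word spelling-tokens...` lexicon layout shown above, so double-check your wav2letter version's expected format):

```python
# Sanity-check a tokens file and a lexicon file in the layout shown above:
#   tokens.txt  - one token per line
#   lexicon.txt - "word spelling tokens..." per line, whitespace-separated

def load_tokens(path):
    """Read the token set, one token per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def check_lexicon(lexicon_path, tokens):
    """Count lexicon entries and flag spellings using unknown tokens."""
    n_words, bad = 0, []
    with open(lexicon_path, encoding="utf-8") as f:
        for ln, line in enumerate(f, 1):
            parts = line.split()
            if not parts:
                continue  # skip blank lines
            n_words += 1
            word, spelling = parts[0], parts[1:]
            missing = [t for t in spelling if t not in tokens]
            if missing:
                bad.append((ln, word, missing))
    return n_words, bad
```

If `n_words` comes back far smaller than the file's line count, or `bad` is non-empty, the decoder's word count (like the `1 tokens loaded` below) is likely a file-format problem rather than a model problem.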
@nestyme Seems like you are using /workspace/code/Small_/wav2letter/small_lexicon.txt. Is that the file you really want to use? The only word loaded is the default unknown word `<UNK>`.

Here is the lexicon loading code: https://github.com/facebookresearch/wav2letter/blob/master/src/common/Utils.cpp#L306. Sorry, we don't validate the lexicon path right now. Will send out a fix later.
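Since the path isn't validated, a fail-fast pre-check before launching the decoder can save a run. A minimal sketch (a hypothetical wrapper, not the promised fix; the `min_words=2` threshold is an assumption based on this thread's "only `<UNK>` loaded" symptom):

```python
import os
import sys

def validate_lexicon_path(path, min_words=2):
    """Exit early if the lexicon path is wrong or nearly empty,
    instead of letting the decoder silently fall back to <UNK> only."""
    if not os.path.isfile(path):
        sys.exit(f"lexicon not found: {path}")
    with open(path, encoding="utf-8") as f:
        n = sum(1 for line in f if line.strip())
    if n < min_words:
        sys.exit(f"lexicon at {path} has only {n} entries")
    return n
```

Run it against the exact value you pass to `--lexicon` before starting Decode, so a typo in the path fails loudly rather than producing a one-word vocabulary.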
Yes, this is the file I want to use; I renamed it on purpose and added the path to the config. OK, I'll read the lexicon loading code carefully and try to fix it. Thank you!
Hi, I am trying to run decoding on a custom dataset converted to the wav2letter format, but during the process I get `[Words] 1 tokens loaded`. tokens.txt is a file containing encoded Roman-alphabet symbols (one per line); lexicon.txt contains words from my dataset with their spellings (one per line). Training runs successfully.
Here's the output:

```
I0205 08:27:30.232817 18981 Decode.cpp:111] Gflags after parsing --flagfile=; --fromenv=; --tryfromenv=; --undefok=; --tab_completion_columns=80; --tab_completionword=; --help=false; --helpfull=false; --helpmatch=; --helpon=; --helppackage=false; --helpshort=false; --helpxml=false; --version=false; --adambeta1=0.90000000000000002; --adambeta2=0.999; --am=/workspace/code/Small/wav2letter/runtime/_conv_prelu/001_modellast.bin; --arch=network.arch; --archdir=/workspace/code/Small/wav2letter/; --attention=content; --attnWindow=no; --batchsize=1; --beamscore=25; --beamsize=1000; --channels=1; --criterion=asg; --critoptim=sgd; --datadir=/workspace/code/Small_/wav2letter/; --dataorder=input; --devwin=0; --emission_dir=; --enabledistributed=false; --encoderdim=1; --eostoken=false; --everstoredb=false; --fftcachesize=1; --filterbanks=40; --flagsfile=/workspace/code/Small/wav2letter/decode.cfg; --forceendsil=false; --gamma=1; --garbage=false; --input=flac; --inputbinsize=100; --inputfeeding=false; --iter=1000000; --itersave=false; --labelsmooth=0; --leftWindowSize=50; --lexicon=/workspace/code/Small_/wav2letter/small_lexicon.txt; --linlr=-1; --linlrcrit=-1; --linseg=1; --lm=/workspace/code/lm/; --lmtype=kenlm; --lmweight=2.5; --localnrmlleftctx=0; --localnrmlrightctx=0; --logadd=false; --lr=0.10000000000000001; --lrcrit=0.10000000000000001; --maxdecoderoutputlen=200; --maxgradnorm=1; --maxisz=9223372036854775807; --maxload=-1; --maxrate=10; --maxsil=50; --maxtsz=9223372036854775807; --maxword=-1; --melfloor=1; --memstepsize=10485760; --mfcc=false; --mfcccoeffs=13; --mfsc=true; --minisz=0; --minrate=3; --minsil=0; --mintsz=0; --momentum=0.5; --netoptim=sgd; --noresample=false; --nthread=1; --nthread_decoder=8; --onorm=target; --optimepsilon=1e-08; --optimrho=0.90000000000000002; --outputbinsize=5; --pctteacherforcing=100; --pcttraineval=100; --pow=false; --replabel=0; --reportiters=0; --rightWindowSize=50; --rndvfilepath=; --rundir=/workspace/code/Small/wav2letter/runtime/; --runname=marysya_conv_prelu; --samplerate=16000; --samplingstrategy=rand; --sclite=; --seed=0; --show=true; --showletters=true; --silweight=-0.5; --skipoov=false; --smearing=max; --softwoffset=10; --softwrate=5; --softwstd=5; --sqnorm=true; --stepsize=1000000; --surround=|; --tag=; --target=tkn; --targettype=video; --test=small_test; --tokens=tokens.txt; --tokensdir=/workspace/code/; --train=train; --trainWithWindow=false; --transdiag=0; --unkweight=-inf; --valid=validate; --weightdecay=0; --wordscore=1; --world_rank=0; --world_size=1; --alsologtoemail=; --alsologtostderr=false; --colorlogtostderr=false; --drop_log_memory=true; --log_backtrace_at=; --log_dir=; --log_link=; --log_prefix=true; --logbuflevel=0; --logbufsecs=30; --logemaillevel=999; --logmailer=/bin/mail; --logtostderr=true; --max_log_size=1800; --minloglevel=0; --stderrthreshold=2; --stop_logging_if_full_disk=false; --symbolize_stacktrace=true; --v=0; --vmodule=;
I0205 08:27:30.232967 18981 Decode.cpp:117] Number of classes (network): 33
I0205 08:27:30.232990 18981 Utils.cpp:337] [Words] 1 tokens loaded.
I0205 08:27:30.233005 18981 Decode.cpp:121] Number of words: 1
```
Could you help me please? Thanks!