swabhs / open-sesame

A frame-semantic parsing system based on a softmax-margin SegRNN.
Apache License 2.0

Normal for prediction on unannotated data to take a long time? #23

Closed skylerilenstine closed 5 years ago

skylerilenstine commented 5 years ago

When I run

python -m sesame.targetid --mode predict \
                        --model_name fn1.7-pretrained-targetid \
                        --raw_input myinput.txt

it just keeps running (the longest I let it run was about 3.5 hours before stopping it). The output looks like

[lr=0.01 clips=94 updates=100] epoch = 2.700 loss = 6.098552 train f1 = 0.7560
[lr=0.01 clips=94 updates=100] epoch = 2.800 loss = 6.038111 train f1 = 0.7584
[lr=0.01 clips=93 updates=100] epoch = 2.900 loss = 6.046757 train f1 = 0.7628
[dev epoch=2] loss = 3.448724 p = 0.7872 (1779.0/2260.0) r = 0.7513 (1779.0/2368.0) f1 = 0.7688 -- saving to logs/predict/best-targetid-1.7-model
[lr=0.01 clips=96 updates=100] epoch = 2.1000 loss = 6.093396 train f1 = 0.7125
[lr=0.01 clips=94 updates=100] epoch = 2.1100 loss = 6.180117 train f1 = 0.7512
[lr=0.01 clips=93 updates=100] epoch = 2.1200 loss = 6.178361 train f1 = 0.7239
[dev epoch=2] loss = 3.113013 p = 0.7686 (1857.0/2416.0) r = 0.7842 (1857.0/2368.0) f1 = 0.7763 -- saving to logs/predict/best-targetid-1.7-model

and the log keeps scrolling like this.

Is this normal for a first run? I'm using the pretrained models, so I don't know why it runs forever like this (I assumed that only training the models takes this long).

swabhs commented 5 years ago

It looks like you accidentally started training a model from scratch instead of predicting with a pretrained model. The lines in your output (epoch counters, train f1, and "saving to logs/...") are training logs, not prediction output.
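One way to catch this before launching a long run is to check that a saved checkpoint actually exists for the requested `model_name`. The sketch below is a hypothetical helper, not part of open-sesame itself; the `logs/<model_name>/best-*` layout is an assumption based on the "saving to logs/predict/best-targetid-1.7-model" lines in the output above, so check the project README for the authoritative paths.

```python
import os

def pretrained_model_exists(model_name, logs_dir="logs"):
    """Return True if a saved checkpoint exists under <logs_dir>/<model_name>/.

    ASSUMPTION: checkpoints are files named "best-..." inside a directory
    matching the --model_name flag, as suggested by the log lines in this
    issue. If no such file is found, predict mode may fall back to (or be
    mistaken for) training from scratch.
    """
    model_dir = os.path.join(logs_dir, model_name)
    if not os.path.isdir(model_dir):
        return False
    # Any "best-*" file counts as a usable saved model.
    return any(f.startswith("best-") for f in os.listdir(model_dir))
```

For example, running `pretrained_model_exists("fn1.7-pretrained-targetid")` before invoking `python -m sesame.targetid --mode predict ...` would flag a missing or misplaced pretrained model immediately instead of hours into a run.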