This is the output when I run python run.py preprocess experiments/spider-configs/gap-run.jsonnet.
As you can see, it takes about 25 minutes to preprocess the train set.
Make sure that your Stanford NLP service is responsive.
If your preprocessing time is much longer than this, one possible issue is your file system. If you store your databases on a filesystem that caches/commits every action you make (for recovery), preprocessing will be very slow.
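To sanity-check that the Stanford NLP service is actually responsive, you can ping its HTTP endpoint before starting preprocessing. This is a minimal sketch; it assumes the server is a Stanford CoreNLP server listening at its default address `http://localhost:9000` (adjust the URL if yours runs elsewhere):

```python
import urllib.request
import urllib.error

def corenlp_is_responsive(url="http://localhost:9000", timeout=5):
    """Return True if an HTTP server answers at `url` within `timeout` seconds."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    if corenlp_is_responsive():
        print("CoreNLP server is responsive")
    else:
        print("CoreNLP server is not reachable -- start it before preprocessing")
```

If this reports the server as unreachable, preprocessing will either fail or hang while retrying, so fix the service first before suspecting the file system.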
Let me know if you have other questions.
Peng