Closed: EmbraceLife closed this issue 7 years ago
I would like to add some documentation for this example, but it is ready to try!
You should be able to run

```shell
bash steps.sh
```
This will process the data, show you the data, train, and show you the outputs of the model.
If it doesn't work, please post tracebacks. You can execute the steps one by one to see what's going on and debug.
It is working, thanks! I'm looking forward to your docs too.
I ran

```shell
bash steps.sh
```

and though it works, it seems very slow: each epoch is estimated to take over 30 minutes. In fact, I trained for 30 minutes and only finished half of an epoch.
To make it run faster, I want to sample a small subset of the data, so I added a `provider` section to the `train` and `validate` sections, as below:
```yaml
train:
  data:
    - jsonl: ../data/train.jsonl
  provider:
    num_batches: 2
  epochs: 1
  weights:
    initial: inital.w.kur
    best: best.w.kur
    last: last.w.kur
  log: log

validate:
  data:
    - jsonl: ../data/validate.jsonl
  provider:
    num_batches: 1
  weights:
    initial: inital.w.kur
    best: best.w.kur
    last: last.w.kur
```
However, loading the log still takes a long time. Is that normal, and why? Is there a way for me to train on a small subset of the data so experiments run fast?
Thanks
The `log` file is probably just large-ish. It shouldn't take long to load, though. Try deleting the log. If you don't want a log, just remove `log: log` entirely, or try logging less data with:
```yaml
log:
  path: log
  keep_batch: no
```
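To see why this shrinks the log, here is a rough back-of-the-envelope model (my own illustration, assuming `keep_batch` controls whether per-batch statistics are written to the log rather than only per-epoch summaries; the function name is hypothetical, not part of Kur):

```python
def log_record_count(epochs, batches_per_epoch, keep_batch):
    """Rough model of log size: one record per batch when per-batch
    logging is kept, otherwise one summary record per epoch."""
    if keep_batch:
        return epochs * batches_per_epoch
    return epochs

# With many batches per epoch, dropping batch-level records
# shrinks the log by a factor of batches_per_epoch:
print(log_record_count(10, 1000, keep_batch=True))   # 10000 records
print(log_record_count(10, 1000, keep_batch=False))  # 10 records
```

Under this assumption, the log with `keep_batch: no` stays small no matter how many batches each epoch contains.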
Hi @noajshu, when I train with the default kurfile on my Mac, it took 30 minutes to finish only 50% of the first epoch. Does that mean it is expected to take about 5 hours to finish training on this example?
If so, must I use AWS or a Mac with a GPU to try this example?
Is there a way to run this example on a Mac CPU within a reasonable time?
Thanks a lot!
Yep, it's going to take a long time on CPU. If you go into `make_data.py` and change `dev = True`, then recreate the data and train the model, it will go much faster. This reduces the amount of data you train on by 10x. Your performance may be lower, but you should still get OK results (and sensible text if you use this in generative mode).
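Conceptually, the 10x reduction can be sketched like this (a minimal illustration only, assuming the data lives in JSONL files as in the kurfile above; `subsample_jsonl` and the keep-every-tenth-record strategy are my own, not necessarily what `make_data.py` actually does):

```python
def subsample_jsonl(src_path, dst_path, factor=10):
    """Copy every `factor`-th record from one JSONL file to another.

    Mimics a dev mode that shrinks the training set by `factor`,
    so each epoch finishes much faster on CPU.
    """
    kept = 0
    with open(src_path) as src, open(dst_path, "w") as dst:
        for i, line in enumerate(src):
            if i % factor == 0:
                dst.write(line)
                kept += 1
    return kept
```

Pointing the kurfile's `jsonl:` entries at the subsampled files would then have a similar effect to the `dev = True` flag.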
@noajshu Thank you very much! This example is great, I really want to see it in kur.
`make_data.py` now only takes a few seconds, and `kur -v train kurfile.yaml` takes less than 4 minutes, compared to the default setting's estimated 5 hours of training. Thanks!
Yes, a reduced data set takes less time to generate and train on.
It seems the data is not yet connected or processed for the model. Simply running

```shell
kur -v train kurfile.yml
```

in the language model, I got an error message. When and how can I try this example? Thanks a lot!