deepgram / kur

Descriptive Deep Learning
Apache License 2.0

Is the character-level RNN example not ready to use yet? #28

Closed. EmbraceLife closed this issue 7 years ago.

EmbraceLife commented 7 years ago

It seems the data has not been prepared or connected to the model yet.

Simply running kur -v train kurfile.yml in the language model example, I got the following error message:

Traceback (most recent call last):
  File "/Users/Natsume/miniconda2/envs/dlnd-tf-lab/bin/kur", line 11, in <module>
    load_entry_point('kur', 'console_scripts', 'kur')()
  File "/Users/Natsume/Downloads/kur_road/kur/kur/__main__.py", line 382, in main
    sys.exit(args.func(args) or 0)
  File "/Users/Natsume/Downloads/kur_road/kur/kur/__main__.py", line 61, in train
    func = spec.get_training_function()
  File "/Users/Natsume/Downloads/kur_road/kur/kur/kurfile.py", line 259, in get_training_function
    provider = self.get_provider('train')
  File "/Users/Natsume/Downloads/kur_road/kur/kur/kurfile.py", line 240, in get_provider
    sources=Supplier.merge_suppliers(suppliers),
  File "/Users/Natsume/Downloads/kur_road/kur/kur/supplier/supplier.py", line 130, in merge_suppliers
    sources = supplier.get_sources()
  File "/Users/Natsume/Downloads/kur_road/kur/kur/supplier/jsonl_supplier.py", line 80, in get_sources
    self._load()
  File "/Users/Natsume/Downloads/kur_road/kur/kur/supplier/jsonl_supplier.py", line 59, in _load
    with open(self.source, 'r') as infile:
FileNotFoundError: [Errno 2] No such file or directory: '../data/train.jsonl'

When and how can I try this example? Thanks a lot!

noajshu commented 7 years ago

I would like to add some documentation for this example, but it is ready to try! You should be able to run bash steps.sh. This will process the data, show you the data, train the model, and show you its outputs. If it doesn't work, please post the tracebacks. You can also execute the steps one by one to see what's going on and debug.
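
Roughly, the steps look like this (from memory; check the script itself for the exact commands):

python make_data.py            # process the corpus into ../data/*.jsonl
head -n 2 ../data/train.jsonl  # peek at a couple of processed examples
kur -v train kurfile.yml       # train the model
kur evaluate kurfile.yml       # look at the model's outputs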

EmbraceLife commented 7 years ago

It is working, thanks! I am looking forward to your docs too. I ran bash steps.sh, and though it works, it seems very slow: each epoch is estimated to take over 30 minutes.

EmbraceLife commented 7 years ago

In fact, I trained for 30 minutes and only finished half an epoch. To make it run faster, I want to sample a small subset of the data, so I added a provider section to the train and validate sections, as below:

train:
  data:
    - jsonl: ../data/train.jsonl

  provider:
    num_batches: 2
  epochs: 1
  weights:
    initial: initial.w.kur
    best: best.w.kur
    last: last.w.kur

  log: log

validate:
  data:
    - jsonl: ../data/validate.jsonl

  provider: 
    num_batches: 1
  weights:
    initial: initial.w.kur
    best: best.w.kur
    last: last.w.kur

However, loading the log still takes a long time. Is that normal, and why? Is there a way for me to train on a small subset of the data so experiments run fast?

Thanks

ajsyp commented 7 years ago

The log file is probably just large-ish. It shouldn't take long to load, though. Try deleting the log. If you don't want a log, just remove log: log entirely, or try logging less data with:

log:
  path: log
  keep_batch: no
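
Here, keep_batch: no should drop the per-batch statistics and keep only the per-epoch summaries, which is what keeps the file small.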

EmbraceLife commented 7 years ago

Hi @noajshu, when I trained with the default kurfile on a Mac, it took me 30 minutes to finish only 50% of the first epoch. Does that mean training on this example is expected to take about 5 hours?

If so, must I use AWS or a Mac with a GPU to try this example?

Is there a way to run this example on a Mac with only a CPU in a reasonable amount of time?

Thanks a lot!

noajshu commented 7 years ago

Yep, it's going to take a long time on a CPU. If you go into make_data.py and change dev = True, then recreate the data and train the model, it will go much faster. This reduces the amount of data you train on by 10x. Your performance may be lower, but you should still get OK results (and sensible text if you use the model in generative mode).
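
Schematically, the dev flag just subsamples the corpus before the JSONL files are written; something like this simplified sketch (the file names and JSON keys here are illustrative, not the actual script):

import json

dev = True       # dev mode: keep roughly 1/10 of the data
KEEP_EVERY = 10

# Read the raw corpus, one example per line (illustrative file name).
with open('corpus.txt') as infile:
    lines = [line.strip() for line in infile if line.strip()]

if dev:
    # Take every 10th example, cutting the dataset size by about 10x.
    lines = lines[::KEEP_EVERY]

# Write one JSON object per line, the format the jsonl supplier reads.
with open('../data/train.jsonl', 'w') as outfile:
    for line in lines:
        outfile.write(json.dumps({'text': line}) + '\n')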

EmbraceLife commented 7 years ago

@noajshu Thank you very much! This example is great; I really want to see it documented in kur.

Thanks!

noajshu commented 7 years ago

Yes, a reduced dataset size takes less time both to generate and to train on.