abisee / cnn-dailymail

Code to obtain the CNN / Daily Mail dataset (non-anonymized) for summarization
MIT License

Important fix (untokenized data written to .bin files) #2

Closed abisee closed 7 years ago

abisee commented 7 years ago

This is a notification that the code to obtain the CNN / Daily Mail dataset unfortunately had a bug which caused the untokenized data to be written to the .bin files (not the tokenized data, as intended). The fix has been committed here.

If you've already created your .bin and vocab files, I advise you to recreate them. To do this, pull the fixed code, delete (or move) your existing cnn_stories_tokenized and dm_stories_tokenized directories along with the previously generated .bin and vocab files, and run make_datafiles.py again.

If you've already begun training with the Tensorflow code, I advise you to restart training with the new datafiles. Switching the vocab and .bin files mid-training will not work.
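If you want to confirm that your regenerated .bin files really contain tokenized text (or that your old ones don't), you can peek at the first example. This is a minimal sketch assuming the length-prefixed serialized tf.Example format that make_datafiles.py writes, with an `article` feature; the path is a placeholder:

```python
import struct
from tensorflow.core.example import example_pb2

def first_article(bin_path):
    """Return the 'article' text of the first example in a .bin file."""
    with open(bin_path, 'rb') as f:
        len_bytes = f.read(8)                        # 8-byte record-length prefix
        str_len = struct.unpack('q', len_bytes)[0]
        example_str = struct.unpack('%ds' % str_len, f.read(str_len))[0]
        ex = example_pb2.Example.FromString(example_str)
        return ex.features.feature['article'].bytes_list.value[0]

# Placeholder path -- point this at one of your generated files.
article = first_article('train.bin')
# In tokenized data, punctuation appears as separate tokens, e.g. "said ." and "-lrb-".
print(article[:300])
```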

Apologies for the inconvenience.

Tagging people to whom this may be relevant: @prokopevaleksey @tianjianjiang @StevenLOL @MrGLaDOS @hate5six @liuchen11 @bugtig @ayushoriginal @BenJamesbabala @BinbinBian @caomw @halolimat @ml-lab @ParseThis @qiang2100 @scylla @tonydeep @yiqingyang2012 @YuxuanHuang @rahul-iisc @pj-parag

yangze01 commented 7 years ago

When I run the code, it always prints "Tried to find tokenized story file 9bfbb6ede20df9611c2a8b42980629658dc5ec23.story in both directories cnn_stories_tokenized and dm_stories_tokenized. Couldn't find it." How can I fix it?

abisee commented 7 years ago

@yangze01 That error message means that it's trying to find a .story file in cnn_stories_tokenized or dm_stories_tokenized but the file is in neither. Those directories should contain a tokenized version of every file in the original cnn / dailymail stories directories you passed into make_datafiles.py.

You probably had some error during tokenization that resulted in an incomplete set of tokenized files in cnn_stories_tokenized and dm_stories_tokenized.
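For anyone hitting the same error, one quick way to confirm that tokenization was incomplete is to compare the original story directories against the tokenized ones. A rough sketch (the original-story paths are placeholders; point them at wherever you unpacked the CNN and Daily Mail stories):

```python
import os

# Placeholder paths -- adjust to your local directory layout.
orig_dirs = ["cnn/stories", "dailymail/stories"]
tok_dirs = ["cnn_stories_tokenized", "dm_stories_tokenized"]

def story_names(dirs):
    """Collect the .story filenames found in the given directories."""
    names = set()
    for d in dirs:
        names.update(f for f in os.listdir(d) if f.endswith(".story"))
    return names

missing = story_names(orig_dirs) - story_names(tok_dirs)
print("%d stories were never tokenized" % len(missing))
for name in sorted(missing)[:10]:
    print("  " + name)
```

If the count is nonzero, delete the tokenized directories and re-run make_datafiles.py so the tokenization step runs to completion.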

The latest commit now has more informative checks and error messages.

yangze01 commented 7 years ago

@abisee Thanks. I downloaded the original cnn/dailymail stories again, and it works.