allenai / deep_qa

A deep NLP library, based on Keras / tf, focused on question answering (but useful for other NLP too)
Apache License 2.0

Cannot run `bidaf_squad.json` #408

Closed: schmmd closed this issue 7 years ago

schmmd commented 7 years ago

Should I be able to run the following "out of the box" or do I need to train first?

```
# python scripts/run_model.py example_experiments/reading_comprehension/bidaf_squad.json test
2017-06-20 16:36:40,791 - INFO - deep_qa.run - Loading model from parameter file: example_experiments/reading_comprehension/bidaf_squad.json
2017-06-20 16:36:40,831 - PARAM - deep_qa.common.params - random_seed = 13370
2017-06-20 16:36:40,831 - PARAM - deep_qa.common.params - numpy_seed = 1337
Using TensorFlow backend.
2017-06-20 16:36:43,750 - INFO - deep_qa.common.checks - Keras version: 2.0.4
2017-06-20 16:36:43,750 - INFO - deep_qa.common.checks - Tensorflow version: 1.1.0
2017-06-20 16:36:44,233 - PARAM - deep_qa.common.params - processor = {}
2017-06-20 16:36:44,234 - PARAM - deep_qa.common.params - processor.word_splitter = simple
2017-06-20 16:36:44,234 - PARAM - deep_qa.common.params - processor.word_filter = pass_through
2017-06-20 16:36:44,234 - PARAM - deep_qa.common.params - processor.word_stemmer = pass_through
2017-06-20 16:36:44,325 - PARAM - deep_qa.common.params - model_class = BidirectionalAttentionFlow
2017-06-20 16:36:44,325 - PARAM - deep_qa.common.params - encoder = ConfigTree([('word', ConfigTree([('type', 'cnn'), ('ngram_filter_sizes', [5]), ('num_filters', 100)]))])
2017-06-20 16:36:44,325 - INFO - deep_qa.common.params - Converting Params object to dict; logging of default values will not occur when dictionary parameters are used subsequently.
2017-06-20 16:36:44,325 - INFO - deep_qa.common.params - CURRENTLY DEFINED PARAMETERS: 
2017-06-20 16:36:44,325 - PARAM - deep_qa.common.params - encoder.word.type = cnn
2017-06-20 16:36:44,325 - PARAM - deep_qa.common.params - encoder.word.ngram_filter_sizes = [5]
2017-06-20 16:36:44,326 - PARAM - deep_qa.common.params - encoder.word.num_filters = 100
2017-06-20 16:36:44,326 - PARAM - deep_qa.common.params - num_hidden_seq2seq_layers = 2
2017-06-20 16:36:44,326 - PARAM - deep_qa.common.params - num_passage_words = None
2017-06-20 16:36:44,326 - PARAM - deep_qa.common.params - num_question_words = None
2017-06-20 16:36:44,326 - PARAM - deep_qa.common.params - num_highway_layers = 2
2017-06-20 16:36:44,326 - PARAM - deep_qa.common.params - highway_activation = relu
2017-06-20 16:36:44,326 - PARAM - deep_qa.common.params - similarity_function = {'combination': 'x,y,x*y', 'type': 'linear'}
2017-06-20 16:36:44,326 - INFO - deep_qa.common.params - Converting Params object to dict; logging of default values will not occur when dictionary parameters are used subsequently.
2017-06-20 16:36:44,326 - INFO - deep_qa.common.params - CURRENTLY DEFINED PARAMETERS: 
2017-06-20 16:36:44,327 - PARAM - deep_qa.common.params - similarity_function.combination = x,y,x*y
2017-06-20 16:36:44,327 - PARAM - deep_qa.common.params - similarity_function.type = linear
2017-06-20 16:36:44,327 - PARAM - deep_qa.common.params - embeddings = ConfigTree([('words', ConfigTree([('dimension', 100), ('pretrained_file', '/net/efs/aristo/dlfa/glove/glove.6B.100d.txt.gz'), ('project', True), ('fine_tune', False), ('dropout', 0.2)])), ('characters', ConfigTree([('dimension', 8), ('dropout', 0.2)]))])
2017-06-20 16:36:44,327 - PARAM - deep_qa.common.params - data_generator = ConfigTree([('dynamic_padding', True), ('adaptive_batch_sizes', True), ('adaptive_memory_usage_constant', 440000), ('maximum_batch_size', 60)])
2017-06-20 16:36:44,327 - PARAM - deep_qa.common.params - data_generator.dynamic_padding = True
2017-06-20 16:36:44,327 - PARAM - deep_qa.common.params - data_generator.padding_noise = 0.2
2017-06-20 16:36:44,327 - PARAM - deep_qa.common.params - data_generator.sort_every_epoch = True
2017-06-20 16:36:44,327 - PARAM - deep_qa.common.params - data_generator.adaptive_batch_sizes = True
2017-06-20 16:36:44,328 - PARAM - deep_qa.common.params - data_generator.adaptive_memory_usage_constant = 440000
2017-06-20 16:36:44,328 - PARAM - deep_qa.common.params - data_generator.maximum_batch_size = 60
2017-06-20 16:36:44,328 - PARAM - deep_qa.common.params - data_generator.biggest_batch_first = False
2017-06-20 16:36:44,328 - PARAM - deep_qa.common.params - dataset = {}
2017-06-20 16:36:44,328 - PARAM - deep_qa.common.params - dataset.type = text
2017-06-20 16:36:44,328 - PARAM - deep_qa.common.params - num_sentence_words = None
2017-06-20 16:36:44,328 - PARAM - deep_qa.common.params - num_word_characters = None
2017-06-20 16:36:44,328 - PARAM - deep_qa.common.params - tokenizer = {'type': 'words and characters'}
2017-06-20 16:36:44,328 - PARAM - deep_qa.common.params - tokenizer.type = words and characters
2017-06-20 16:36:44,329 - PARAM - deep_qa.common.params - tokenizer.processor = {}
2017-06-20 16:36:44,329 - PARAM - deep_qa.common.params - tokenizer.processor.word_splitter = simple
2017-06-20 16:36:44,329 - PARAM - deep_qa.common.params - tokenizer.processor.word_filter = pass_through
2017-06-20 16:36:44,329 - PARAM - deep_qa.common.params - tokenizer.processor.word_stemmer = pass_through
2017-06-20 16:36:44,329 - PARAM - deep_qa.common.params - encoder = ConfigTree([('word', ConfigTree([('type', 'cnn'), ('ngram_filter_sizes', [5]), ('num_filters', 100)]))])
2017-06-20 16:36:44,329 - PARAM - deep_qa.common.params - encoder_fallback_behavior = crash
2017-06-20 16:36:44,329 - PARAM - deep_qa.common.params - seq2seq_encoder = ConfigTree([('default', ConfigTree([('type', 'bi_gru'), ('encoder_params', ConfigTree([('units', 100)])), ('wrapper_params', ConfigTree())]))])
2017-06-20 16:36:44,329 - PARAM - deep_qa.common.params - seq2seq_encoder_fallback_behavior = crash
2017-06-20 16:36:44,330 - PARAM - deep_qa.common.params - train_files = ['/net/efs/aristo/dlfa/squad/processed/train.tsv']
2017-06-20 16:36:44,330 - PARAM - deep_qa.common.params - validation_files = ['/net/efs/aristo/dlfa/squad/processed/dev.tsv']
2017-06-20 16:36:44,330 - PARAM - deep_qa.common.params - test_files = None
2017-06-20 16:36:44,330 - PARAM - deep_qa.common.params - max_training_instances = None
2017-06-20 16:36:44,330 - PARAM - deep_qa.common.params - max_validation_instances = None
2017-06-20 16:36:44,330 - PARAM - deep_qa.common.params - max_test_instances = None
2017-06-20 16:36:44,330 - PARAM - deep_qa.common.params - train_steps_per_epoch = None
2017-06-20 16:36:44,331 - PARAM - deep_qa.common.params - save_models = True
2017-06-20 16:36:44,331 - PARAM - deep_qa.common.params - model_serialization_prefix = /net/efs/aristo/dlfa/models/bidaf
2017-06-20 16:36:44,340 - PARAM - deep_qa.common.params - num_gpus = 1
2017-06-20 16:36:44,340 - PARAM - deep_qa.common.params - validation_split = 0.1
2017-06-20 16:36:44,340 - PARAM - deep_qa.common.params - batch_size = 32
2017-06-20 16:36:44,340 - PARAM - deep_qa.common.params - num_epochs = 20
2017-06-20 16:36:44,341 - PARAM - deep_qa.common.params - optimizer = ConfigTree([('type', 'adadelta'), ('learning_rate', 0.5)])
2017-06-20 16:36:44,341 - PARAM - deep_qa.common.params - optimizer.type = adadelta
2017-06-20 16:36:44,341 - PARAM - deep_qa.common.params - gradient_clipping = {'value': 10, 'type': 'clip_by_norm'}
2017-06-20 16:36:44,341 - PARAM - deep_qa.common.params - loss = categorical_crossentropy
2017-06-20 16:36:44,341 - PARAM - deep_qa.common.params - metrics = ['accuracy']
2017-06-20 16:36:44,341 - PARAM - deep_qa.common.params - validation_metric = val_loss
2017-06-20 16:36:44,341 - PARAM - deep_qa.common.params - patience = 3
2017-06-20 16:36:44,342 - PARAM - deep_qa.common.params - fit_kwargs = {}
2017-06-20 16:36:44,342 - PARAM - deep_qa.common.params - tensorboard_log = None
2017-06-20 16:36:44,342 - PARAM - deep_qa.common.params - tensorboard_frequency = 0
2017-06-20 16:36:44,342 - PARAM - deep_qa.common.params - debug = {}
2017-06-20 16:36:44,342 - PARAM - deep_qa.common.params - show_summary_with_masking_info = False
2017-06-20 16:36:44,342 - INFO - deep_qa.training.trainer - Loading serialized model
Traceback (most recent call last):
  File "scripts/run_model.py", line 35, in <module>
    main()
  File "scripts/run_model.py", line 22, in main
    evaluate_model(sys.argv[1])
  File "scripts/../deep_qa/run.py", line 214, in evaluate_model
    model = load_model(param_path, model_class=model_class)
  File "scripts/../deep_qa/run.py", line 160, in load_model
    model.load_model()
  File "scripts/../deep_qa/training/trainer.py", line 389, in load_model
    model_config_file = open("%s_config.json" % self.model_prefix)
FileNotFoundError: [Errno 2] No such file or directory: '/net/efs/aristo/dlfa/models/bidaf_config.json'
```

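The traceback shows where the missing file comes from: `trainer.py` builds the config path by appending `_config.json` to the configured `model_serialization_prefix`. A minimal sketch of a pre-flight check, using the prefix from the log above (`config_path` is just an illustrative name, not part of the library's API):

```python
import os

# model_serialization_prefix from the experiment file (see the PARAM log above)
prefix = "/net/efs/aristo/dlfa/models/bidaf"

# trainer.py opens "%s_config.json" % prefix when loading a serialized model,
# so this file must exist before `test` mode can run
config_path = "%s_config.json" % prefix
if not os.path.exists(config_path):
    print("No serialized model found at %s; train first." % config_path)
```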
schmmd commented 7 years ago

@DeNeutoy FYI

DeNeutoy commented 7 years ago

You have to train first; the `test` mode loads a pretrained model and evaluates it on some data, see here
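For example (a sketch, assuming `run_model.py` accepts a `train` mode in the same argument position as `test`; check the script's usage string to confirm):

```
# train first; this writes the serialized model files under model_serialization_prefix
python scripts/run_model.py example_experiments/reading_comprehension/bidaf_squad.json train
# then evaluate the saved model
python scripts/run_model.py example_experiments/reading_comprehension/bidaf_squad.json test
```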

schmmd commented 7 years ago

Makes sense; I wasn't sure whether we stored the models from past training runs or not. Thanks!