facebookresearch / ParlAI

A framework for training and evaluating AI models on a variety of openly available dialogue datasets.
https://parl.ai
MIT License

Wizard Generator having issues loading in public #1622

Closed · stephenroller closed this issue 5 years ago

stephenroller commented 5 years ago

Pulled in from bottom of #1587. Likely related to #1620.

It appears that pretrained models in the zoo sometimes have trouble with overridden options of some sort.

cc @jazzminewang

Steps to reproduce:

python examples/eval_model.py -bs 64 -t wizard_of_wikipedia:generator:topic_split -mf models:wizard_of_wikipedia/end2end_generator/model

Traceback:

/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/agents/transformer/transformer.py:18: UserWarning: Public release transformer models are currently in beta. The name of command line options may change or disappear before a stable release. We welcome your feedback. Please file feedback as issues at https://github.com/facebookresearch/ParlAI/issues/new
  "Public release transformer models are currently in beta. The name of "
[ warning: overriding opt['task'] to wizard_of_wikipedia:generator:topic_split (previously: wizard_of_wikipedia:generator:random_split )]
[ warning: overriding opt['model_file'] to /Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model (previously: /tmp/wizard_endtoend_model )]
Dictionary: loading dictionary from /Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model.dict
[ num words =  34883 ]
/private/home/roller/working/parlai/data/models/fasttext_vectors/wiki.en.vec: 0.00B [00:00, ?B/s]
Traceback (most recent call last):
  File "examples/eval_model.py", line 17, in <module>
    eval_model(opt, print_parser=parser)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/scripts/eval_model.py", line 68, in eval_model
    agent = create_agent(opt, requireModelExists=True)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/agents.py", line 552, in create_agent
    model = load_agent_module(opt)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/agents.py", line 429, in load_agent_module
    return model_class(new_opt)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/projects/wizard_of_wikipedia/generator/agents.py", line 99, in __init__
    super().__init__(opt, shared)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_generator_agent.py", line 358, in __init__
    self.build_model()
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/projects/wizard_of_wikipedia/generator/agents.py", line 305, in build_model
    self.model.embeddings.weight, self.opt['embedding_type']
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_agent.py", line 917, in _copy_embeddings
    embs, name = self._get_embtype(emb_type)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_agent.py", line 859, in _get_embtype
    'models:fasttext_vectors'))
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/site-packages/torchtext/vocab.py", line 411, in __init__
    super(FastText, self).__init__(name, url=url, **kwargs)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/site-packages/torchtext/vocab.py", line 280, in __init__
    self.cache(name, cache, url=url, max_vectors=max_vectors)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/site-packages/torchtext/vocab.py", line 313, in cache
    urlretrieve(url, dest, reporthook=reporthook(t))
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 569, in error
    return self._call_chain(*args)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden

and without sudo:

Traceback (most recent call last):
  File "examples/eval_model.py", line 17, in <module>
    eval_model(opt, print_parser=parser)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/scripts/eval_model.py", line 68, in eval_model
    agent = create_agent(opt, requireModelExists=True)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/agents.py", line 552, in create_agent
    model = load_agent_module(opt)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/agents.py", line 429, in load_agent_module
    return model_class(new_opt)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/projects/wizard_of_wikipedia/generator/agents.py", line 99, in __init__
    super().__init__(opt, shared)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_generator_agent.py", line 358, in __init__
    self.build_model()
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/projects/wizard_of_wikipedia/generator/agents.py", line 305, in build_model
    self.model.embeddings.weight, self.opt['embedding_type']
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_agent.py", line 917, in _copy_embeddings
    embs, name = self._get_embtype(emb_type)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_agent.py", line 859, in _get_embtype
    'models:fasttext_vectors'))
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/site-packages/torchtext/vocab.py", line 411, in __init__
    super(FastText, self).__init__(name, url=url, **kwargs)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/site-packages/torchtext/vocab.py", line 280, in __init__
    self.cache(name, cache, url=url, max_vectors=max_vectors)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/site-packages/torchtext/vocab.py", line 308, in cache
    os.makedirs(cache)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/os.py", line 211, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/os.py", line 211, in makedirs
    makedirs(head, exist_ok=exist_ok)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/os.py", line 211, in makedirs
    makedirs(head, exist_ok=exist_ok)
  [Previous line repeated 3 more times]
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/os.py", line 221, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/private/home'
stephenroller commented 5 years ago

@jazzminewang can you try

python examples/eval_model.py -bs 64 -t wizard_of_wikipedia:generator:topic_split -mf models:wizard_of_wikipedia/end2end_generator/model --datapath "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/data"

Definitely don't use sudo here.
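
A quick optional check before rerunning (a sketch, not part of ParlAI): confirm that the directory passed as --datapath exists and is writable by your own user, which is what sudo would otherwise be masking. The path below is the one from the command above; substitute your own.

import os

datapath = "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/data"
print("exists:", os.path.isdir(datapath))
print("writable by this user:", os.access(datapath, os.W_OK))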

jazzminewang commented 5 years ago

Without sudo I get this error:

python examples/eval_model.py -bs 64 -t wizard_of_wikipedia:generator:topic_split -mf models:wizard_of_wikipedia/end2end_generator/model --datapath "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/data"

[building data: /Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/data/models/wizard_of_wikipedia/end2end_generator_0.tar.gz]
Traceback (most recent call last):
  File "examples/eval_model.py", line 16, in <module>
    opt = parser.parse_args(print_args=False)
  File "/home/ml/jwang301/Development/ParlAI/parlai/core/params.py", line 576, in parse_args
    self.add_extra_args(args)
  File "/home/ml/jwang301/Development/ParlAI/parlai/core/params.py", line 547, in add_extra_args
    model = get_model_name(parsed)
  File "/home/ml/jwang301/Development/ParlAI/parlai/core/params.py", line 28, in get_model_name
    model_file = modelzoo_path(opt.get('datapath'), model_file)
  File "/home/ml/jwang301/Development/ParlAI/parlai/core/build_data.py", line 252, in modelzoo_path
    download(datapath)
  File "/home/ml/jwang301/Development/ParlAI/parlai/zoo/wizard_of_wikipedia/end2end_generator.py", line 19, in download
    opt, fnames, 'wizard_of_wikipedia', version='v0.5', use_model_type=False
  File "/home/ml/jwang301/Development/ParlAI/parlai/core/build_data.py", line 214, in download_models
    make_dir(dpath)
  File "/home/ml/jwang301/Development/ParlAI/parlai/core/build_data.py", line 128, in make_dir
    os.makedirs(path, exist_ok=True)
  File "/home/ml/jwang301/anaconda2/envs/ParlAI/lib/python3.6/os.py", line 210, in makedirs
    makedirs(head, mode, exist_ok)
  File "/home/ml/jwang301/anaconda2/envs/ParlAI/lib/python3.6/os.py", line 210, in makedirs
    makedirs(head, mode, exist_ok)
  File "/home/ml/jwang301/anaconda2/envs/ParlAI/lib/python3.6/os.py", line 210, in makedirs
    makedirs(head, mode, exist_ok)
  [Previous line repeated 5 more times]
  File "/home/ml/jwang301/anaconda2/envs/ParlAI/lib/python3.6/os.py", line 220, in makedirs
    mkdir(name, mode)
PermissionError: [Errno 13] Permission denied: '/Users'
stephenroller commented 5 years ago

Oh my bad, I miswrote your username:

python examples/eval_model.py -bs 64 -t wizard_of_wikipedia:generator:topic_split -mf models:wizard_of_wikipedia/end2end_generator/model --datapath "/home/ml/jwang301/Development/ParlAI/parlai/data"

Although it looks like you were running on your laptop before and are now running on a server, I'm not sure. You can also just drop the datapath flag.

The first paste at the top of this issue looks like an issue with opts being inherited from the training environment (/private/home exists on my computer). The latter error looks like your local file permissions are messed up, possibly from installing some things with sudo.

The opt issues definitely need to be fixed, but I've had a bit of trouble recreating them (e.g. the test passes, and that runs on a completely different computer than mine).
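
For anyone hitting the same thing, a minimal diagnostic sketch (not a ParlAI utility): list the absolute paths stored in the model's saved options that do not exist on the local machine, i.e. values inherited from the training environment. It assumes a companion model.opt file sits next to the model file; depending on the ParlAI version that file is JSON or a pickle, so both are attempted. The path is hypothetical and should be adjusted to your datapath.

import json
import os
import pickle

OPT_PATH = "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model.opt"

def load_opt(path):
    # Newer ParlAI versions write the opt as JSON; older ones pickle it.
    try:
        with open(path, "r") as f:
            return json.load(f)
    except (UnicodeDecodeError, ValueError):
        with open(path, "rb") as f:
            return pickle.load(f)

opt = load_opt(OPT_PATH)
for key, value in opt.items():
    if isinstance(value, str) and value.startswith("/") and not os.path.exists(value):
        # Absolute paths carried over from the training machine.
        print("stale path in opt[%r]: %s" % (key, value))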

jazzminewang commented 5 years ago

Tried it again - looks like there are some fasttext issues:

(ParlAI) ➜  mturk_dialog_eval git:(jasmine) ✗ python examples/eval_model.py -bs 64 -t wizard_of_wikipedia:generator:topic_split -mf models:wizard_of_wikipedia/end2end_generator/model
/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/agents/transformer/transformer.py:18: UserWarning: Public release transformer models are currently in beta. The name of command line options may change or disappear before a stable release. We welcome your feedback. Please file feedback as issues at https://github.com/facebookresearch/ParlAI/issues/new
  "Public release transformer models are currently in beta. The name of "
********EVAL MODEL FUNCTION*****
{'show_advanced_args': False, 'task': 'wizard_of_wikipedia:generator:topic_split', 'download_path': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/downloads', 'datatype': 'valid', 'image_mode': 'raw', 'numthreads': 1, 'hide_labels': False, 'multitask_weights': [1], 'batchsize': 64, 'datapath': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/data', 'model': None, 'model_file': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model', 'init_model': None, 'dict_class': 'parlai.core.dict:DictionaryAgent', 'pytorch_teacher_task': None, 'pytorch_teacher_dataset': None, 'pytorch_datapath': None, 'numworkers': 4, 'pytorch_preprocess': False, 'pytorch_teacher_batch_sort': False, 'batch_sort_cache_type': 'pop', 'batch_length_range': 5, 'shuffle': False, 'batch_sort_field': 'text', 'pytorch_context_length': -1, 'pytorch_include_labels': True, 'num_examples': -1, 'display_examples': True, 'log_every_n_secs': 2, 'metrics': 'all', 'tensorboard_log': False, 'tensorboard_tag': None, 'tensorboard_metrics': None, 'tensorboard_comment': '', 'image_size': 256, 'image_cropsize': 224, 'label_type': 'response', 'include_knowledge': True, 'include_checked_sentence': True, 'include_knowledge_separator': True, 'only_checked_knowledge': False, 'ignorant_dropout': 0.0, 'embedding_size': 256, 'n_layers': 5, 'ffn_size': 512, 'dropout': 0.2, 'attention_dropout': 0.0, 'relu_dropout': 0.0, 'n_heads': 2, 'learn_positional_embeddings': False, 'embeddings_scale': True, 'n_positions': None, 'beam_size': 1, 'beam_dot_log': False, 'beam_min_n_best': 3, 'beam_min_length': 1, 'beam_block_ngram': 0, 'skip_generation': False, 'embedding_type': 'random', 'embedding_projection': 'random', 'fp16': False, 'optimizer': 'adam', 'learningrate': 0.0005, 'gradient_clip': 0.1, 'momentum': 0, 'nesterov': True, 'nus': (0.7,), 'betas': (0.9, 0.98), 'lr_scheduler': 'invsqrt', 'lr_scheduler_patience': 3, 'lr_scheduler_decay': 0.5, 'warmup_updates': 5000, 'warmup_rate': 0.0001, 'update_freq': -1, 'rank_candidates': False, 'truncate': 128, 'text_truncate': None, 'label_truncate': None, 'history_size': -1, 'person_tokens': False, 'split_lines': False, 'use_reply': 'label', 'add_p1_after_newln': False, 'delimiter': '\n', 'gpu': -1, 'no_cuda': False, 'dict_file': None, 'dict_initpath': None, 'dict_language': 'english', 'dict_max_ngram_size': -1, 'dict_minfreq': 0, 'dict_maxtokens': -1, 'dict_nulltoken': '__null__', 'dict_starttoken': '__start__', 'dict_endtoken': '__end__', 'dict_unktoken': '__unk__', 'dict_tokenizer': 're', 'dict_lower': False, 'bpe_debug': False, 'dict_textfields': 'text,labels,chosen_topic,checked_sentence,knowledge,title', 'knowledge_truncate': 32, 'max_knowledge': None, 'knowledge_alpha': 0.95, 'clip_norm': 0.1, 'parlai_home': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval', 'override': {'batchsize': 64, 'task': 'wizard_of_wikipedia:generator:topic_split', 'model_file': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model'}, 'starttime': 'Apr23_08-25'}
[ warning: overriding opt['task'] to wizard_of_wikipedia:generator:topic_split (previously: wizard_of_wikipedia:generator:random_split )]
[ warning: overriding opt['model_file'] to /Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model (previously: /tmp/wizard_endtoend_model )]
Dictionary: loading dictionary from /Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model.dict
[ num words =  34883 ]
/private/home/roller/working/parlai/data/models/fasttext_vectors/wiki.en.vec: 0.00B [00:00, ?B/s]
Traceback (most recent call last):
  File "examples/eval_model.py", line 17, in <module>
    eval_model(opt, print_parser=parser)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/scripts/eval_model.py", line 72, in eval_model
    agent = create_agent(opt, requireModelExists=True)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/agents.py", line 552, in create_agent
    model = load_agent_module(opt)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/agents.py", line 429, in load_agent_module
    return model_class(new_opt)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/projects/wizard_of_wikipedia/generator/agents.py", line 99, in __init__
    super().__init__(opt, shared)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_generator_agent.py", line 358, in __init__
    self.build_model()
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/projects/wizard_of_wikipedia/generator/agents.py", line 305, in build_model
    self.model.embeddings.weight, self.opt['embedding_type']
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_agent.py", line 917, in _copy_embeddings
    embs, name = self._get_embtype(emb_type)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_agent.py", line 859, in _get_embtype
    'models:fasttext_vectors'))
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/site-packages/torchtext/vocab.py", line 411, in __init__
    super(FastText, self).__init__(name, url=url, **kwargs)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/site-packages/torchtext/vocab.py", line 280, in __init__
    self.cache(name, cache, url=url, max_vectors=max_vectors)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/site-packages/torchtext/vocab.py", line 313, in cache
    urlretrieve(url, dest, reporthook=reporthook(t))
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 247, in urlretrieve
    with contextlib.closing(urlopen(url, data)) as fp:
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 222, in urlopen
    return opener.open(url, data, timeout)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 531, in open
    response = meth(req, response)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 641, in http_response
    'http', request, response, code, msg, hdrs)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 569, in error
    return self._call_chain(*args)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 503, in _call_chain
    result = func(*args)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 649, in http_error_default
    raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 403: Forbidden
stephenroller commented 5 years ago

And you're running on master? Because I thought this was fixed in #1598.

jazzminewang commented 5 years ago

Apologies, I realized my laptop's local copy wasn't synced. It looks like the model file is missing.

(ParlAI) ➜  mturk_dialog_eval git:(jasmine) ✗ python examples/eval_model.py -bs 64 -t wizard_of_wikipedia:generator:topic_split -mf models:wizard_of_wikipedia/end2end_generator/model
/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/agents/transformer/transformer.py:18: UserWarning: Public release transformer models are currently in beta. The name of command line options may change or disappear before a stable release. We welcome your feedback. Please file feedback as issues at https://github.com/facebookresearch/ParlAI/issues/new
  "Public release transformer models are currently in beta. The name of "
********EVAL MODEL FUNCTION*****
{'show_advanced_args': False, 'task': 'wizard_of_wikipedia:generator:topic_split', 'download_path': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/downloads', 'datatype': 'valid', 'image_mode': 'raw', 'numthreads': 1, 'hide_labels': False, 'multitask_weights': [1], 'batchsize': 64, 'datapath': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/data', 'model': None, 'model_file': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model', 'init_model': None, 'dict_class': 'parlai.core.dict:DictionaryAgent', 'pytorch_teacher_task': None, 'pytorch_teacher_dataset': None, 'pytorch_datapath': None, 'numworkers': 4, 'pytorch_preprocess': False, 'pytorch_teacher_batch_sort': False, 'batch_sort_cache_type': 'pop', 'batch_length_range': 5, 'shuffle': False, 'batch_sort_field': 'text', 'pytorch_context_length': -1, 'pytorch_include_labels': True, 'num_examples': -1, 'display_examples': True, 'log_every_n_secs': 2, 'metrics': 'all', 'tensorboard_log': False, 'tensorboard_tag': None, 'tensorboard_metrics': None, 'tensorboard_comment': '', 'image_size': 256, 'image_cropsize': 224, 'label_type': 'response', 'include_knowledge': True, 'include_checked_sentence': True, 'include_knowledge_separator': True, 'only_checked_knowledge': False, 'ignorant_dropout': 0.0, 'embedding_size': 256, 'n_layers': 5, 'ffn_size': 512, 'dropout': 0.2, 'attention_dropout': 0.0, 'relu_dropout': 0.0, 'n_heads': 2, 'learn_positional_embeddings': False, 'embeddings_scale': True, 'n_positions': None, 'n_segments': 0, 'variant': 'aiayn', 'activation': 'relu', 'beam_size': 1, 'beam_dot_log': False, 'beam_min_n_best': 3, 'beam_min_length': 1, 'beam_block_ngram': 0, 'skip_generation': False, 'embedding_type': 'random', 'embedding_projection': 'random', 'fp16': False, 'optimizer': 'adam', 'learningrate': 0.0005, 'gradient_clip': 0.1, 'momentum': 0, 'nesterov': True, 'nus': (0.7,), 'betas': (0.9, 0.98), 'lr_scheduler': 'invsqrt', 'lr_scheduler_patience': 3, 'lr_scheduler_decay': 0.5, 'warmup_updates': 5000, 'warmup_rate': 0.0001, 'update_freq': -1, 'rank_candidates': False, 'truncate': 128, 'text_truncate': None, 'label_truncate': None, 'history_size': -1, 'person_tokens': False, 'split_lines': False, 'use_reply': 'label', 'add_p1_after_newln': False, 'delimiter': '\n', 'gpu': -1, 'no_cuda': False, 'dict_file': None, 'dict_initpath': None, 'dict_language': 'english', 'dict_max_ngram_size': -1, 'dict_minfreq': 0, 'dict_maxtokens': -1, 'dict_nulltoken': '__null__', 'dict_starttoken': '__start__', 'dict_endtoken': '__end__', 'dict_unktoken': '__unk__', 'dict_tokenizer': 're', 'dict_lower': False, 'bpe_debug': False, 'dict_textfields': 'text,labels,chosen_topic,checked_sentence,knowledge,title', 'knowledge_truncate': 32, 'max_knowledge': None, 'knowledge_alpha': 0.95, 'clip_norm': 0.1, 'parlai_home': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval', 'override': {'batchsize': 64, 'task': 'wizard_of_wikipedia:generator:topic_split', 'model_file': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model'}, 'starttime': 'Apr23_12-33'}
Traceback (most recent call last):
  File "examples/eval_model.py", line 17, in <module>
    eval_model(opt, print_parser=parser)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/scripts/eval_model.py", line 72, in eval_model
    agent = create_agent(opt, requireModelExists=True)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/agents.py", line 567, in create_agent
    'sure it is correct: {}'.format(opt['model_file']))
RuntimeError: WARNING: Model file does not exist, check to make sure it is correct: /Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model
stephenroller commented 5 years ago

Delete /Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia and try once more, please.
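
If you'd rather script that step, a minimal equivalent (the path is copied from the comment above; double-check it before running, since the directory is removed permanently and gets re-downloaded on the next run):

import shutil

shutil.rmtree(
    "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia",
    ignore_errors=True,
)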

stephenroller commented 5 years ago

Hi Jasmine, let's see if #1666 helps you.

stephenroller commented 5 years ago

I believe this is resolved, but please reopen if you have trouble @jazzminewang.

jazzminewang commented 5 years ago

I synced with remote and am seeing this:

(ParlAI) ➜  ParlAI git:(master) ✗ python examples/eval_model.py -bs 64 -t wizard_of_wikipedia:generator:topic_split -mf models:wizard_of_wikipedia/end2end_generator/model
/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/agents/transformer/transformer.py:18: UserWarning: Public release transformer models are currently in beta. The name of command line options may change or disappear before a stable release. We welcome your feedback. Please file feedback as issues at https://github.com/facebookresearch/ParlAI/issues/new
  "Public release transformer models are currently in beta. The name of "
********EVAL MODEL FUNCTION*****
{'show_advanced_args': False, 'task': 'wizard_of_wikipedia:generator:topic_split', 'download_path': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/downloads', 'datatype': 'valid', 'image_mode': 'raw', 'numthreads': 1, 'hide_labels': False, 'multitask_weights': [1], 'batchsize': 64, 'datapath': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/data', 'model': None, 'model_file': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model', 'init_model': None, 'dict_class': 'parlai.core.dict:DictionaryAgent', 'pytorch_teacher_task': None, 'pytorch_teacher_dataset': None, 'pytorch_datapath': None, 'numworkers': 4, 'pytorch_preprocess': False, 'pytorch_teacher_batch_sort': False, 'batch_sort_cache_type': 'pop', 'batch_length_range': 5, 'shuffle': False, 'batch_sort_field': 'text', 'pytorch_context_length': -1, 'pytorch_include_labels': True, 'num_examples': -1, 'display_examples': True, 'log_every_n_secs': 2, 'metrics': 'all', 'tensorboard_log': False, 'tensorboard_tag': None, 'tensorboard_metrics': None, 'tensorboard_comment': '', 'image_size': 256, 'image_cropsize': 224, 'label_type': 'response', 'include_knowledge': True, 'include_checked_sentence': True, 'include_knowledge_separator': True, 'only_checked_knowledge': False, 'ignorant_dropout': 0.0, 'embedding_size': 256, 'n_layers': 5, 'ffn_size': 512, 'dropout': 0.2, 'attention_dropout': 0.0, 'relu_dropout': 0.0, 'n_heads': 2, 'learn_positional_embeddings': False, 'embeddings_scale': True, 'n_positions': None, 'n_segments': 0, 'variant': 'aiayn', 'activation': 'relu', 'beam_size': 1, 'beam_dot_log': False, 'beam_min_n_best': 3, 'beam_min_length': 1, 'beam_block_ngram': 0, 'skip_generation': False, 'embedding_type': 'random', 'embedding_projection': 'random', 'fp16': False, 'optimizer': 'adam', 'learningrate': 0.0005, 'gradient_clip': 0.1, 'momentum': 0, 'nesterov': True, 'nus': (0.7,), 'betas': (0.9, 0.98), 'lr_scheduler': 'invsqrt', 'lr_scheduler_patience': 3, 'lr_scheduler_decay': 0.5, 'warmup_updates': 5000, 'warmup_rate': 0.0001, 'update_freq': -1, 'rank_candidates': False, 'truncate': 128, 'text_truncate': None, 'label_truncate': None, 'history_size': -1, 'person_tokens': False, 'split_lines': False, 'use_reply': 'label', 'add_p1_after_newln': False, 'delimiter': '\n', 'gpu': -1, 'no_cuda': False, 'dict_file': None, 'dict_initpath': None, 'dict_language': 'english', 'dict_max_ngram_size': -1, 'dict_minfreq': 0, 'dict_maxtokens': -1, 'dict_nulltoken': '__null__', 'dict_starttoken': '__start__', 'dict_endtoken': '__end__', 'dict_unktoken': '__unk__', 'dict_tokenizer': 're', 'dict_lower': False, 'bpe_debug': False, 'dict_textfields': 'text,labels,chosen_topic,checked_sentence,knowledge,title', 'knowledge_truncate': 32, 'max_knowledge': None, 'knowledge_alpha': 0.95, 'clip_norm': 0.1, 'parlai_home': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval', 'override': {'batchsize': 64, 'task': 'wizard_of_wikipedia:generator:topic_split', 'model_file': '/Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model'}, 'starttime': 'May04_07-30'}
[ warning: overriding opt['task'] to wizard_of_wikipedia:generator:topic_split (previously: wizard_of_wikipedia:generator:random_split )]
[ warning: overriding opt['model_file'] to /Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model (previously: /tmp/wizard_endtoend_model )]
Dictionary: loading dictionary from /Users/jasminewang/Development/RLLab/mturk_dialog_eval/data/models/wizard_of_wikipedia/end2end_generator/model.dict
[ num words =  34883 ]
/private/home/roller/working/parlai/data/models/fasttext_vectors/wiki.en.vec: 0.00B [00:00, ?B/s]
Traceback (most recent call last):
  File "examples/eval_model.py", line 17, in <module>
    eval_model(opt, print_parser=parser)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/scripts/eval_model.py", line 72, in eval_model
    agent = create_agent(opt, requireModelExists=True)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/agents.py", line 570, in create_agent
    model = load_agent_module(opt)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/agents.py", line 442, in load_agent_module
    return model_class(new_opt)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/projects/wizard_of_wikipedia/generator/agents.py", line 99, in __init__
    super().__init__(opt, shared)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_generator_agent.py", line 351, in __init__
    self.build_model()
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/projects/wizard_of_wikipedia/generator/agents.py", line 305, in build_model
    self.model.embeddings.weight, self.opt['embedding_type']
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_agent.py", line 925, in _copy_embeddings
    embs, name = self._get_embtype(emb_type)
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/core/torch_agent.py", line 867, in _get_embtype
    embs = download(self.opt.get('datapath'))
  File "/Users/jasminewang/Development/RLLab/mturk_dialog_eval/parlai/zoo/fasttext_vectors/build.py", line 19, in download
    cache=modelzoo_path(datapath, 'models:fasttext_vectors'),
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/site-packages/torchtext/vocab.py", line 280, in __init__
    self.cache(name, cache, url=url, max_vectors=max_vectors)
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/site-packages/torchtext/vocab.py", line 313, in cache
    urlretrieve(url, dest, reporthook=reporthook(t))
  File "/Users/jasminewang/anaconda3/envs/ParlAI/lib/python3.7/urllib/request.py", line 257, in urlretrieve
    tfp = open(filename, 'wb')
PermissionError: [Errno 13] Permission denied: '/private/home/roller/working/parlai/data/models/fasttext_vectors/wiki.en.vec'

jazzminewang commented 5 years ago

Also, can you confirm if this is the "End-to-end Transformer MemNet" or the "Vanilla Transformer"?

stephenroller commented 5 years ago

This is the end-to-end model.

stephenroller commented 5 years ago

Based on #1702, someone else has gotten the download working and running, @jazzminewang (@koustuvsinha did too, but likely on the same machines as me).

Given that you ran a bunch of commands with sudo at some point, I'd ask you to try deleting ParlAI from conda and reinstalling it, or otherwise to check your permissions everywhere again. Based on your environment, it looks like you installed ParlAI rather than linking to the source directory, so the two might actually be out of sync. The cleanest option may be a fresh conda env.
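
One quick way to check whether the interpreter is picking up an installed copy rather than your checkout (a sketch, not a ParlAI command):

import parlai

# If this prints a path under site-packages instead of your git checkout,
# the installed copy and the source tree are indeed out of sync.
print(parlai.__file__)

If it does point at site-packages, uninstalling that copy and reinstalling from the checkout (at the time, python setup.py develop was the documented way) keeps the import linked to the source tree.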