facebookresearch / ParlAI

A framework for training and evaluating AI models on a variety of openly available dialogue datasets.
https://parl.ai
MIT License

ConnectionError HTTPConnectionPool after having configured everything according to the parlai instructions. #4578

Closed AbreuY closed 2 years ago

AbreuY commented 2 years ago

Bug description

After the second or third message with blenderbot2 / blenderbot2_400M, I get this error:

requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=8080): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f59be3c8610>: Failed to establish a new connection: [Errno 111] Connection refused'))

Reproduction steps

I just installed it, and even reinstalled ParlAI, following the instructions:

  1. Clone the ParlAI repository:

git clone https://github.com/facebookresearch/ParlAI.git ~/ParlAI

  2. Install ParlAI:

cd ~/ParlAI; python setup.py develop

Expected behavior

It should work correctly and not break while we are chatting.

Logs

Command line output:

(env) [root@host ParlAI]$ python -m parlai interactive --model-file zoo:blenderbot2/blenderbot2_400M/model --search_server 0.0.0.0:8080
15:26:22 | Overriding opt["model_file"] to /home/root/ParlAI/data/models/blenderbot2/blenderbot2_400M/model (previously: /checkpoint/kshuster/projects/knowledge_bot/kbot_memfix_sweep25_Fri_Jul__9/338/model.oss)
15:26:22 | Overriding opt["search_server"] to 0.0.0.0:8080 (previously: None)
15:26:22 | loading dictionary from /home/root/ParlAI/data/models/blenderbot2/blenderbot2_400M/model.dict
15:26:22 | num words = 50264
15:26:22 | BlenderBot2Fid: full interactive mode on.
15:26:42 | Creating the search engine retriever.
15:26:42 | No protocol provided, using "http://"
Some weights of the model checkpoint at bert-base-uncased were not used when initializing BertModel: ['cls.predictions.transform.LayerNorm.bias', 'cls.predictions.bias', 'cls.predictions.transform.LayerNorm.weight', 'cls.seq_relationship.weight', 'cls.seq_relationship.bias', 'cls.predictions.transform.dense.weight', 'cls.predictions.transform.dense.bias', 'cls.predictions.decoder.weight']
- This IS expected if you are initializing BertModel from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing BertModel from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
15:26:54 | Building Query Generator from file: /home/root/ParlAI/data/models/blenderbot2/query_generator/model
15:27:02 | Building Memory Decoder from file: /home/root/ParlAI/data/models/blenderbot2/memory_decoder/model
15:27:10 | Total parameters: 732,961,280 (406,286,336 trainable)
15:27:10 | Loading existing model params from /home/root/ParlAI/data/models/blenderbot2/blenderbot2_400M/model
15:27:13 | Opt:
15:27:13 |     activation: gelu
15:27:13 |     adafactor_eps: '[1e-30, 0.001]'
15:27:13 |     adam_eps: 1e-08
15:27:13 |     add_cleaned_reply_to_history: False
15:27:13 |     add_p1_after_newln: False
15:27:13 |     allow_missing_init_opts: False
15:27:13 |     attention_dropout: 0.1
15:27:13 |     batchsize: 12
15:27:13 |     beam_block_full_context: False
15:27:13 |     beam_block_list_filename: None
15:27:13 |     beam_block_ngram: 3
15:27:13 |     beam_context_block_ngram: 3
15:27:13 |     beam_delay: 30
15:27:13 |     beam_length_penalty: 0.65
15:27:13 |     beam_min_length: 20
15:27:13 |     beam_size: 10
15:27:13 |     betas: '[0.9, 0.999]'
15:27:13 |     bpe_add_prefix_space: None
15:27:13 |     bpe_debug: False
15:27:13 |     bpe_dropout: None
15:27:13 |     bpe_merge: None
15:27:13 |     bpe_vocab: None
15:27:13 |     candidates: inline
15:27:13 |     cap_num_predictions: 100
15:27:13 |     checkpoint_activations: False
15:27:13 |     codes_attention_num_heads: 4
15:27:13 |     codes_attention_type: basic
15:27:13 |     compressed_indexer_factory: IVF4096_HNSW128,PQ128
15:27:13 |     compressed_indexer_gpu_train: False
15:27:13 |     compressed_indexer_nprobe: 64
15:27:13 |     compute_tokenized_bleu: False
15:27:13 |     converting: False
15:27:13 |     data_parallel: False
15:27:13 |     datapath: /home/root/ParlAI/data
15:27:13 |     datatype: train:stream
15:27:13 |     delimiter: '\n'
15:27:13 |     dict_class: parlai.core.dict:DictionaryAgent
15:27:13 |     dict_endtoken: __end__
15:27:13 |     dict_file: /home/root/ParlAI/data/models/blenderbot2/blenderbot2_400M/model.dict
15:27:13 |     dict_initpath: None
15:27:13 |     dict_language: english
15:27:13 |     dict_loaded: True
15:27:13 |     dict_lower: False
15:27:13 |     dict_max_ngram_size: -1
15:27:13 |     dict_maxtokens: -1
15:27:13 |     dict_minfreq: 0
15:27:13 |     dict_nulltoken: __null__
15:27:13 |     dict_starttoken: __start__
15:27:13 |     dict_textfields: text,labels
15:27:13 |     dict_tokenizer: gpt2
15:27:13 |     dict_unktoken: __unk__
15:27:13 |     display_add_fields:
15:27:13 |     display_examples: False
15:27:13 |     display_prettify: False
15:27:13 |     doc_chunk_split_mode: word
15:27:13 |     doc_chunks_ranker: head
15:27:13 |     download_path: None
15:27:13 |     dpr_model_file: zoo:hallucination/bart_rag_token/model
15:27:13 |     dpr_num_docs: 25
15:27:13 |     dropout: 0.1
15:27:13 |     dynamic_batching: None
15:27:13 |     embedding_projection: random
15:27:13 |     embedding_size: 1024
15:27:13 |     embedding_type: random
15:27:13 |     embeddings_scale: True
15:27:13 |     encode_candidate_vecs: True
15:27:13 |     encode_candidate_vecs_batchsize: 256
15:27:13 |     eval_candidates: inline
15:27:13 |     ffn_size: 4096
15:27:13 |     fixed_candidate_vecs: reuse
15:27:13 |     fixed_candidates_path: None
15:27:13 |     force_fp16_tokens: True
15:27:13 |     fp16: False
15:27:13 |     fp16_impl: safe
15:27:13 |     generation_model: bart
15:27:13 |     gold_document_key: __selected-docs__
15:27:13 |     gold_document_titles_key: select-docs-titles
15:27:13 |     gold_knowledge_passage_key: checked_sentence
15:27:13 |     gold_knowledge_title_key: title
15:27:13 |     gold_sentence_key: __selected-sentences__
15:27:13 |     gpu: -1
15:27:13 |     gradient_clip: 0.1
15:27:13 |     hide_labels: False
15:27:13 |     history_add_global_end_token: None
15:27:13 |     history_reversed: False
15:27:13 |     history_size: -1
15:27:13 |     hnsw_ef_construction: 200
15:27:13 |     hnsw_ef_search: 128
15:27:13 |     hnsw_indexer_store_n: 128
15:27:13 |     ignore_bad_candidates: False
15:27:13 |     image_cropsize: 224
15:27:13 |     image_mode: raw
15:27:13 |     image_size: 256
15:27:13 |     indexer_buffer_size: 65536
15:27:13 |     indexer_type: compressed
15:27:13 |     inference: beam
15:27:13 |     init_fairseq_model: None
15:27:13 |     init_model: None
15:27:13 |     init_opt: None
15:27:13 |     insert_gold_docs: True
15:27:13 |     interactive_candidates: fixed
15:27:13 |     interactive_mode: True
15:27:13 |     interactive_task: True
15:27:13 |     invsqrt_lr_decay_gamma: -1
15:27:13 |     is_debug: False
15:27:13 |     knowledge_access_method: classify
15:27:13 |     label_truncate: 128
15:27:13 |     learn_embeddings: True
15:27:13 |     learn_positional_embeddings: True
15:27:13 |     learningrate: 1e-05
15:27:13 |     local_human_candidates_file: None
15:27:13 |     log_keep_fields: all
15:27:13 |     loglevel: info
15:27:13 |     lr_scheduler: reduceonplateau
15:27:13 |     lr_scheduler_decay: 0.5
15:27:13 |     lr_scheduler_patience: 1
15:27:13 |     max_doc_token_length: 256
15:27:13 |     memory_attention: sqrt
15:27:13 |     memory_decoder_beam_min_length: 10
15:27:13 |     memory_decoder_beam_size: 3
15:27:13 |     memory_decoder_delimiter: '\n'
15:27:13 |     memory_decoder_ignore_phrase: persona:
15:27:13 |     memory_decoder_key: full_text
15:27:13 |     memory_decoder_model_file: zoo:blenderbot2/memory_decoder/model
15:27:13 |     memory_decoder_one_line_memories: False
15:27:13 |     memory_decoder_truncate: -1
15:27:13 |     memory_doc_delimiter: :
15:27:13 |     memory_doc_title_delimiter: ' / '
15:27:13 |     memory_extractor_phrase: persona:
15:27:13 |     memory_key: personas
15:27:13 |     memory_reader_model: None
15:27:13 |     memory_retriever_truncate: -1
15:27:13 |     memory_writer_model: bert
15:27:13 |     memory_writer_model_file: zoo:hallucination/multiset_dpr/hf_bert_base.cp
15:27:13 |     min_doc_token_length: 64
15:27:13 |     model: projects.blenderbot2.agents.blenderbot2:BlenderBot2FidAgent
15:27:13 |     model_file: /home/root/ParlAI/data/models/blenderbot2/blenderbot2_400M/model
15:27:13 |     model_parallel: True
15:27:13 |     momentum: 0
15:27:13 |     multitask_weights: '[3.0, 1.0, 1.0, 1.0, 3.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]'
15:27:13 |     n_decoder_layers: 12
15:27:13 |     n_docs: 5
15:27:13 |     n_encoder_layers: 12
15:27:13 |     n_extra_positions: 0
15:27:13 |     n_heads: 16
15:27:13 |     n_layers: 12
15:27:13 |     n_positions: 1024
15:27:13 |     n_ranked_doc_chunks: 1
15:27:13 |     n_segments: 0
15:27:13 |     nesterov: True
15:27:13 |     no_cuda: False
15:27:13 |     normalize_sent_emb: False
15:27:13 |     nus: [0.7]
15:27:13 |     optimizer: adamax
15:27:13 |     outfile:
15:27:13 |     output_conversion_path: None
15:27:13 |     output_scaling: 1.0
15:27:13 |     override: "{'model_file': '/home/root/ParlAI/data/models/blenderbot2/blenderbot2_400M/model', 'search_server': '0.0.0.0:8080'}"
15:27:13 |     parlai_home: /private/home/kshuster/ParlAI
15:27:13 |     path_to_dense_embeddings: None
15:27:13 |     path_to_dpr_passages: zoo:hallucination/wiki_passages/psgs_w100.tsv
15:27:13 |     path_to_index: zoo:hallucination/wiki_index_compressed/compressed_pq
15:27:13 |     person_tokens: False
15:27:13 |     poly_attention_num_heads: 4
15:27:13 |     poly_attention_type: basic
15:27:13 |     poly_faiss_model_file: None
15:27:13 |     poly_n_codes: 64
15:27:13 |     poly_score_initial_lambda: 0.5
15:27:13 |     polyencoder_init_model: wikito
15:27:13 |     polyencoder_type: codes
15:27:13 |     print_docs: False
15:27:13 |     query_generator_beam_min_length: 2
15:27:13 |     query_generator_beam_size: 1
15:27:13 |     query_generator_delimiter: '\n'
15:27:13 |     query_generator_ignore_phrase: persona:
15:27:13 |     query_generator_inference: beam
15:27:13 |     query_generator_key: full_text
15:27:13 |     query_generator_model_file: zoo:blenderbot2/query_generator/model
15:27:13 |     query_generator_truncate: -1
15:27:13 |     query_model: bert_from_parlai_rag
15:27:13 |     rag_model_type: token
15:27:13 |     rag_query_truncate: 512
15:27:13 |     rag_retriever_query: full_history
15:27:13 |     rag_retriever_type: search_engine
15:27:13 |     rag_turn_discount_factor: 1.0
15:27:13 |     rag_turn_marginalize: doc_then_turn
15:27:13 |     rag_turn_n_turns: 2
15:27:13 |     rank_candidates: False
15:27:13 |     rank_top_k: -1
15:27:13 |     reduction_type: mean
15:27:13 |     regret: False
15:27:13 |     regret_dict_file: None
15:27:13 |     regret_intermediate_maxlen: 32
15:27:13 |     regret_model_file: None
15:27:13 |     regret_override_index: False
15:27:13 |     relu_dropout: 0.0
15:27:13 |     repeat_blocking_heuristic: True
15:27:13 |     retriever_debug_index: None
15:27:13 |     retriever_delimiter: '\n'
15:27:13 |     retriever_embedding_size: 768
15:27:13 |     retriever_ignore_phrase: persona:
15:27:13 |     return_cand_scores: False
15:27:13 |     save_format: conversations
15:27:13 |     search_query_generator_beam_min_length: 2
15:27:13 |     search_query_generator_beam_size: 1
15:27:13 |     search_query_generator_inference: greedy
15:27:13 |     search_query_generator_model_file: zoo:blenderbot2/query_generator/model
15:27:13 |     search_query_generator_text_truncate: 512
15:27:13 |     search_server: 0.0.0.0:8080
15:27:13 |     share_encoders: True
15:27:13 |     share_search_and_memory_query_encoder: False
15:27:13 |     share_word_embeddings: True
15:27:13 |     single_turn: False
15:27:13 |     skip_generation: False
15:27:13 |     skip_retrieval_token: no_passages_used
15:27:13 |     skip_search_key: skip_search
15:27:13 |     special_tok_lst: None
15:27:13 |     split_lines: True
15:27:13 |     splitted_chunk_length: 256
15:27:13 |     starttime: Jul09_14-09
15:27:13 |     t5_dropout: 0.0
15:27:13 |     t5_generation_config: None
15:27:13 |     t5_model_arch: t5-base
15:27:13 |     t5_model_parallel: False
15:27:13 |     task: None
15:27:13 |     temperature: 1.0
15:27:13 |     text_truncate: 512
15:27:13 |     tfidf_max_doc_paragraphs: -1
15:27:13 |     tfidf_model_path: zoo:wikipedia_full/tfidf_retriever/model
15:27:13 |     thorough: False
15:27:13 |     topk: 10
15:27:13 |     topp: 0.9
15:27:13 |     train_predict: False
15:27:13 |     truncate: 512
15:27:13 |     update_freq: 1
15:27:13 |     use_memories: False
15:27:13 |     use_reply: label
15:27:13 |     variant: prelayernorm
15:27:13 |     verbose: False
15:27:13 |     warmup_rate: 0.0001
15:27:13 |     warmup_updates: -1
15:27:13 |     weight_decay: None
15:27:13 |     woi_doc_chunk_size: 500
15:27:13 |     wrap_memory_encoder: False
15:27:13 | Current ParlAI commit: 7660197631c06ee41c0ce25c5de37a74b11ecb77
15:27:13 | Current internal commit: 7660197631c06ee41c0ce25c5de37a74b11ecb77
15:27:13 | Current fb commit: 7660197631c06ee41c0ce25c5de37a74b11ecb77
Enter [DONE] if you want to end the episode, [EXIT] to quit.
15:27:13 | creating task(s): interactive
Enter Your Message: hello
/home/root/ParlAI/parlai/core/torch_generator_agent.py:1728: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  hyp_ids = best_idxs // voc_size
[BlenderBot2Fid]: Hi, how are you doing? I'm doing well, thanks for asking. _POTENTIALLY_UNSAFE__
Enter Your Message: I'm fine too
[BlenderBot2Fid]: I am fine too. How are you today? I am doing well. How is your day going?
Enter Your Message: Thats great
[BlenderBot2Fid]: I am glad to hear that. What do you like to do for fun? I like to play video games.
Enter Your Message: I like programming
Traceback (most recent call last):
  File "/home/root/env/lib/python3.8/site-packages/urllib3-1.26.9-py3.8.egg/urllib3/connection.py", line 174, in _new_conn
    conn = connection.create_connection(
  File "/home/root/env/lib/python3.8/site-packages/urllib3-1.26.9-py3.8.egg/urllib3/util/connection.py", line 95, in create_connection
    raise err
  File "/home/root/env/lib/python3.8/site-packages/urllib3-1.26.9-py3.8.egg/urllib3/util/connection.py", line 85, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/root/env/lib/python3.8/site-packages/urllib3-1.26.9-py3.8.egg/urllib3/connectionpool.py", line 703, in urlopen
    httplib_response = self._make_request(
  File "/home/root/env/lib/python3.8/site-packages/urllib3-1.26.9-py3.8.egg/urllib3/connectionpool.py", line 398, in _make_request
    conn.request(method, url, **httplib_request_kw)
  File "/home/root/env/lib/python3.8/site-packages/urllib3-1.26.9-py3.8.egg/urllib3/connection.py", line 239, in request
    super(HTTPConnection, self).request(method, url, body=body, headers=headers)
  File "/home/root/anaconda3/envs/python3/lib/python3.8/http/client.py", line 1256, in request
    self._send_request(method, url, body, headers, encode_chunked)
  File "/home/root/anaconda3/envs/python3/lib/python3.8/http/client.py", line 1302, in _send_request
    self.endheaders(body, encode_chunked=encode_chunked)
  File "/home/root/anaconda3/envs/python3/lib/python3.8/http/client.py", line 1251, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/home/root/anaconda3/envs/python3/lib/python3.8/http/client.py", line 1011, in _send_output
    self.send(msg)
  File "/home/root/anaconda3/envs/python3/lib/python3.8/http/client.py", line 951, in send
    self.connect()
  File "/home/root/env/lib/python3.8/site-packages/urllib3-1.26.9-py3.8.egg/urllib3/connection.py", line 205, in connect
    conn = self._new_conn()
  File "/home/root/env/lib/python3.8/site-packages/urllib3-1.26.9-py3.8.egg/urllib3/connection.py", line 186, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7f59be3c8610>: Failed to establish a new connection: [Errno 111] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/root/env/lib/python3.8/site-packages/requests-2.27.1-py3.8.egg/requests/adapters.py", line 440, in send
    resp = conn.urlopen(
  File "/home/root/env/lib/python3.8/site-packages/urllib3-1.26.9-py3.8.egg/urllib3/connectionpool.py", line 785, in urlopen
    retries = retries.increment(
  File "/home/root/env/lib/python3.8/site-packages/urllib3-1.26.9-py3.8.egg/urllib3/util/retry.py", line 592, in increment
    raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='0.0.0.0', port=8080): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f59be3c8610>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/root/anaconda3/envs/python3/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/root/anaconda3/envs/python3/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/root/ParlAI/parlai/__main__.py", line 18, in <module>
    main()
  File "/home/root/ParlAI/parlai/__main__.py", line 14, in main
    superscript_main()
  File "/home/root/ParlAI/parlai/core/script.py", line 325, in superscript_main
    return SCRIPT_REGISTRY[cmd].klass._run_from_parser_and_opt(opt, parser)
  File "/home/root/ParlAI/parlai/core/script.py", line 108, in _run_from_parser_and_opt
    return script.run()
  File "/home/root/ParlAI/parlai/scripts/interactive.py", line 118, in run
    return interactive(self.opt)
  File "/home/root/ParlAI/parlai/scripts/interactive.py", line 93, in interactive
    world.parley()
  File "/home/root/ParlAI/parlai/tasks/interactive/worlds.py", line 89, in parley
    acts[1] = agents[1].act()
  File "/home/root/ParlAI/parlai/core/torch_agent.py", line 2148, in act
    response = self.batch_act([self.observation])[0]
  File "/home/root/ParlAI/parlai/core/torch_agent.py", line 2244, in batch_act
    output = self.eval_step(batch)
  File "/home/root/ParlAI/projects/blenderbot2/agents/blenderbot2.py", line 862, in eval_step
    output = super().eval_step(batch)
  File "/home/root/ParlAI/parlai/agents/rag/rag.py", line 300, in eval_step
    output = super().eval_step(batch)
  File "/home/root/ParlAI/parlai/core/torch_generator_agent.py", line 885, in eval_step
    beam_preds_scores, beams = self._generate(
  File "/home/root/ParlAI/parlai/agents/rag/rag.py", line 684, in _generate
    gen_outs = self._rag_generate(batch, beam_size, max_ts, prefix_tokens)
  File "/home/root/ParlAI/parlai/agents/rag/rag.py", line 727, in _rag_generate
    return self._generation_agent._generate(
  File "/home/root/ParlAI/parlai/core/torch_generator_agent.py", line 1132, in _generate
    encoder_states = model.encoder(*self._encoder_input(batch))
  File "/home/root/ParlAI/projects/blenderbot2/agents/modules.py", line 873, in encoder
    enc_out, mask, input_turns_cnt, top_docs, top_doc_scores = super().encoder(  # type: ignore
  File "/home/root/ParlAI/projects/blenderbot2/agents/modules.py", line 223, in encoder
    expanded_input, top_docs, top_doc_scores = self.retrieve_and_concat(
  File "/home/root/ParlAI/projects/blenderbot2/agents/modules.py", line 384, in retrieve_and_concat
    search_docs, search_doc_scores = self.perform_search(
  File "/home/root/ParlAI/projects/blenderbot2/agents/modules.py", line 549, in perform_search
    search_docs, search_doc_scores = self.retriever.retrieve(
  File "/home/root/ParlAI/parlai/agents/rag/retrievers.py", line 419, in retrieve
    docs, scores = self.retrieve_and_score(query)
  File "/home/root/ParlAI/parlai/agents/rag/retrievers.py", line 1215, in retrieve_and_score
    search_results_batch = self.search_client.retrieve(search_queries, self.n_docs)
  File "/home/root/ParlAI/parlai/agents/rag/retrieve_api.py", line 132, in retrieve
    return [self._retrieve_single(q, num_ret) for q in queries]
  File "/home/root/ParlAI/parlai/agents/rag/retrieve_api.py", line 132, in <listcomp>
    return [self._retrieve_single(q, num_ret) for q in queries]
  File "/home/root/ParlAI/parlai/agents/rag/retrieve_api.py", line 111, in _retrieve_single
    search_server_resp = self._query_search_server(search_query, num_ret)
  File "/home/root/ParlAI/parlai/agents/rag/retrieve_api.py", line 89, in _query_search_server
    server_response = requests.post(server, data=req)
  File "/home/root/env/lib/python3.8/site-packages/requests-2.27.1-py3.8.egg/requests/api.py", line 117, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/home/root/env/lib/python3.8/site-packages/requests-2.27.1-py3.8.egg/requests/api.py", line 61, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/root/env/lib/python3.8/site-packages/requests-2.27.1-py3.8.egg/requests/sessions.py", line 529, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/root/env/lib/python3.8/site-packages/requests-2.27.1-py3.8.egg/requests/sessions.py", line 645, in send
    r = adapter.send(request, **kwargs)
  File "/home/root/env/lib/python3.8/site-packages/requests-2.27.1-py3.8.egg/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=8080): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f59be3c8610>: Failed to establish a new connection: [Errno 111] Connection refused'))
(env) [root@host ParlAI]$

Additional context

I'm just running this command from the ParlAI folder:

python -m parlai interactive --model-file zoo:blenderbot2/blenderbot2_400M/model --search_server 0.0.0.0:8080

I'm using ParlAI version 1.6.0.

mojtaba-komeili commented 2 years ago

What I understand from the message is that it cannot reach a search server. Do you have a search server running on port 8080 on your local machine? Can you ping it to make sure it is running and reachable?
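
If it helps, here is a minimal, unofficial connectivity probe you can run from the same machine. It mimics the POST that parlai/agents/rag/retrieve_api.py makes; the {'q': ..., 'n': ...} payload shape is my reading of that module and only matters for the probe itself, not for diagnosing the connection error.

import requests

SERVER = "http://0.0.0.0:8080"  # the address passed via --search_server

try:
    # Payload shape assumed from parlai/agents/rag/retrieve_api.py; this is
    # only a connectivity check, not an official client.
    resp = requests.post(SERVER, data={"q": "test", "n": 5}, timeout=5)
    print("Search server reachable, HTTP status:", resp.status_code)
except requests.exceptions.ConnectionError as exc:
    print("No search server is listening at", SERVER, "-", exc)

If this prints a connection error, nothing is listening on that address, which matches the traceback above.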

AbreuY commented 2 years ago

What I understand from the message is that it cannot reach a search server. Do you have a search server running on port 8080 on your local machine? Can you ping it to make sure it is running and reachable?

Hi, no, I don't have a search server running. I ran this command before and this error did not happen. What would be a valid search server for blenderbot2? I've searched the documentation but haven't found anything.

mojtaba-komeili commented 2 years ago

BlenderBot has multiple ways of running inference (memory, internet retrieval, none). The crash that you see may only happen when the agent decides to fetch something from the internet. Please see this thread for more details on this issue.
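
For local testing only, a rough stub like the sketch below can stand in for a search server so the connection error goes away; the bot will simply retrieve no documents, so for real internet retrieval you still want an actual search server (for example, the community ParlAI_SearchEngine project). The response schema ({"response": [{"url": ..., "title": ..., "content": ...}, ...]}) is an assumption based on parlai/agents/rag/retrieve_api.py, not an official contract, so double-check that module if results look off.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer


class StubSearchHandler(BaseHTTPRequestHandler):
    """Unofficial stand-in for a ParlAI search server; returns no documents."""

    def do_POST(self):
        # Read and discard the request body (the agent's search query).
        length = int(self.headers.get("Content-Length", 0))
        _ = self.rfile.read(length)
        # Always answer with an empty document list in the assumed schema.
        body = json.dumps({"response": []}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)


if __name__ == "__main__":
    # Bind to the same address you pass via --search_server (0.0.0.0:8080 here).
    HTTPServer(("0.0.0.0", 8080), StubSearchHandler).serve_forever()

Run it in a separate terminal, then start the interactive command with the same --search_server address.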

AbreuY commented 2 years ago

BlenderBot has multiple ways of running inference (memory, internet retrieval, none). The crash that you see may only happen when the agent decides to fetch something from the internet. Please see this thread for more details on this issue.

Thank you!