jina-ai / clip-as-service

🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
https://clip-as-service.jina.ai

bert-serving-start gives "TypeError: 'NoneType' object is not iterable" error for multilingual bert #569

Open shashi-netra opened 4 years ago

shashi-netra commented 4 years ago

Prerequisites: x-posted to the bert-serving repo as well, with apologies, as I am not sure which project is the culprit.


System information: Ubuntu 18

Important: I am running BERT serving with the uncased_L-12_H-768_A-12 model and it works just fine.
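For context, that working English-model server responds to a basic client round-trip roughly like the sketch below (assuming the default ports 5555/5556 and that bert-serving-client is installed; my exact test script is not included here):

# Minimal client-side smoke test against an already-running server
# (assumes default ports and the bert-serving-client package).
from bert_serving.client import BertClient

bc = BertClient(ip='localhost', port=5555, port_out=5556)
vecs = bc.encode(['hello world', 'bonjour le monde'])
print(vecs.shape)  # (2, 768) for a base-sized BERT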



Description


I'm using this command to start the server:

bert-serving-start -model_dir ./multi_cased_L-12_H-768_A-12/ -num_worker 4

The multilingual model I used is available here.

Then this issue shows up:

me@devbox:~/BERT_Multi$ bert-serving-start -model_dir ./multi_cased_L-12_H-768_A-12/ -num_worker 4
/usr/local/lib/python3.6/dist-packages/bert_serving/server/helper.py:176: UserWarning: Tensorflow 2.2.0 is not tested! It may or may not work. Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/
  'Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/' % tf.__version__)
usage: /usr/local/bin/bert-serving-start -model_dir ./multi_cased_L-12_H-768_A-12/ -num_worker 4
                 ARG   VALUE
__________________________________________________
           ckpt_name = bert_model.ckpt
         config_name = bert_config.json
                cors = *
                 cpu = False
          device_map = []
       do_lower_case = True
  fixed_embed_length = False
                fp16 = False
 gpu_memory_fraction = 0.5
       graph_tmp_dir = None
    http_max_connect = 10
           http_port = None
        mask_cls_sep = False
      max_batch_size = 256
         max_seq_len = 25
           model_dir = ./multi_cased_L-12_H-768_A-12/
no_position_embeddings = False
    no_special_token = False
          num_worker = 4
       pooling_layer = [-2]
    pooling_strategy = REDUCE_MEAN
                port = 5555
            port_out = 5556
       prefetch_size = 10
 priority_batch_size = 16
show_tokens_to_client = False
     tuned_model_dir = None
             verbose = False
                 xla = False

I:VENTILATOR:[__i:__i: 67]:freeze, optimize and export graph, could take a while...
E:GRAPHOPT:[gra:opt:154]:fail to optimize the graph!
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/bert_serving/server/graph.py", line 42, in optimize_graph
    tf = import_tf(verbose=args.verbose)
  File "/usr/local/lib/python3.6/dist-packages/bert_serving/server/helper.py", line 186, in import_tf
    tf.logging.set_verbosity(tf.logging.DEBUG if verbose else tf.logging.ERROR)
AttributeError: module 'tensorflow' has no attribute 'logging'
Traceback (most recent call last):
  File "/usr/local/bin/bert-serving-start", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.6/dist-packages/bert_serving/server/cli/__init__.py", line 4, in main
    with BertServer(get_run_args()) as server:
  File "/usr/local/lib/python3.6/dist-packages/bert_serving/server/__init__.py", line 71, in __init__
    self.graph_path, self.bert_config = pool.apply(optimize_graph, (self.args,))
TypeError: 'NoneType' object is not iterable

...

s9k96 commented 4 years ago

Getting the same on Ubuntu 18.04 with TensorFlow 2.2.0. I can see this was fixed in #549, but helper.py still has tf.logging.
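An untested local sketch of the kind of guard import_tf would need to get past this call on TF 2.x (not an official fix, and it only addresses this first failure; the graph-freezing code still relies on other TF 1.x-only APIs, so pinning tensorflow==1.15 remains the safe route):

# Sketch of a TF-1/TF-2 tolerant version of the verbosity call that crashes
# in bert_serving/server/helper.py's import_tf(). Untested, local edit only.
import tensorflow as tf

def set_tf_verbosity(verbose: bool) -> None:
    # tf.logging exists on TF 1.x; on TF 2.x the same module lives under tf.compat.v1
    logging_mod = getattr(tf, "logging", None) or tf.compat.v1.logging
    logging_mod.set_verbosity(logging_mod.DEBUG if verbose else logging_mod.ERROR)

set_tf_verbosity(verbose=False)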

itrare commented 4 years ago

PS C:\Users\hp\Desktop\bank_chatbot2\bank_chatbot> bert-serving-start -model_dir=C:/Users/hp/Music/Downloads/uncased_L-12_H-768_A-12/uncased_L-12_H-768_A-12 -num_worker=1
2020-07-26 15:24:40.690930: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-07-26 15:24:40.691426: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
c:\users\hp\anaconda3\lib\site-packages\bert_serving\server\helper.py:176: UserWarning: Tensorflow 2.1.1 is not tested! It may or may not work. Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/
  'Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/' % tf.__version__)
usage: C:\Users\hp\Anaconda3\Scripts\bert-serving-start -model_dir=C:/Users/hp/Music/Downloads/uncased_L-12_H-768_A-12/uncased_L-12_H-768_A-12 -num_worker=1
                 ARG   VALUE
__________________________________________________
           ckpt_name = bert_model.ckpt
         config_name = bert_config.json
                cors = *
                 cpu = False
          device_map = []
       do_lower_case = True
  fixed_embed_length = False
                fp16 = False
 gpu_memory_fraction = 0.5
       graph_tmp_dir = None
    http_max_connect = 10
           http_port = None
        mask_cls_sep = False
      max_batch_size = 256
         max_seq_len = 25
           model_dir = C:/Users/hp/Music/Downloads/uncased_L-12_H-768_A-12/uncased_L-12_H-768_A-12
no_position_embeddings = False
    no_special_token = False
          num_worker = 1
       pooling_layer = [-2]
    pooling_strategy = REDUCE_MEAN
                port = 5555
            port_out = 5556
       prefetch_size = 10
 priority_batch_size = 16
show_tokens_to_client = False
     tuned_model_dir = None
             verbose = False
                 xla = False

I:VENTILATOR:freeze, optimize and export graph, could take a while...
2020-07-26 15:24:47.307277: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'cudart64_101.dll'; dlerror: cudart64_101.dll not found
2020-07-26 15:24:47.307992: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
c:\users\hp\anaconda3\lib\site-packages\bert_serving\server\helper.py:176: UserWarning: Tensorflow 2.1.1 is not tested! It may or may not work. Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/
  'Feel free to submit an issue at https://github.com/hanxiao/bert-as-service/issues/' % tf.__version__)
E:GRAPHOPT:fail to optimize the graph!
Traceback (most recent call last):
  File "c:\users\hp\anaconda3\lib\runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "c:\users\hp\anaconda3\lib\runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "C:\Users\hp\Anaconda3\Scripts\bert-serving-start.exe\__main__.py", line 7, in <module>
  File "c:\users\hp\anaconda3\lib\site-packages\bert_serving\server\cli\__init__.py", line 4, in main
    with BertServer(get_run_args()) as server:
  File "c:\users\hp\anaconda3\lib\site-packages\bert_serving\server\__init__.py", line 71, in __init__
    self.graph_path, self.bert_config = pool.apply(optimize_graph, (self.args,))
TypeError: cannot unpack non-iterable NoneType object
PS C:\Users\hp\Desktop\bank_chatbot2\bank_chatbot>

@hanxiao Hey, can anyone help me with this? I have also given the absolute path, but the error still occurs.

itrare commented 4 years ago

@hanxiao It got resolved when I used TF v1.15, thanks for your awesome work. But can we expect support for TF versions above 2.0.0?
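For anyone else hitting this, a quick sanity check before starting the server is to see whether the installed TensorFlow still exposes the 1.x-style logging module that bert-serving calls directly (just a diagnostic sketch, not part of the package):

# Diagnostic sketch: does this TensorFlow install still expose the 1.x API
# surface that bert-serving-server uses?
import tensorflow as tf

print(tf.__version__)
print(hasattr(tf, 'logging'))            # True on TF 1.x, False on TF 2.x
print(hasattr(tf.compat.v1, 'logging'))  # True on recent 1.x and on 2.x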