macanv / BERT-BiLSTM-CRF-NER

Tensorflow solution of NER task Using BiLSTM-CRF model with Google BERT Fine-tuning And private Server services
https://github.com/macanv/BERT-BiLSMT-CRF-NER

Running run.py, but "ready and listening" never appears at startup #371

Closed ACRONYMxFYQ closed 3 years ago

ACRONYMxFYQ commented 3 years ago

Connected to pydev debugger (build 201.8538.36)

usage: D:/Programing/ProgramData/PycharmProjects/BERT-BiLSTM-CRF-NER-master/BERT-BiLSTM-CRF-NER-master/run.py -bert_model_dir=D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\chinese_L-12_H-768_A-12 -model_pb_dir=D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\model -mode=NER -model_dir=D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\model -max_seq_len=510 -cpu

                 ARG   VALUE
      bert_model_dir = D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\chinese_L-12_H-768_A-12
           ckpt_name = bert_model.ckpt
         config_name = bert_config.json
                cors = *
                 cpu = True
          device_map = []
                fp16 = False
 gpu_memory_fraction = 0.5
    http_max_connect = 10
           http_port = None
           lstm_size = 128
        mask_cls_sep = False
      max_batch_size = 1024
         max_seq_len = 510
                mode = NER
           model_dir = D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\model
        model_pb_dir = D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\model
          num_worker = 1
       pooling_layer = [-2]
    pooling_strategy = REDUCE_MEAN
                port = 5555
            port_out = 5556
       prefetch_size = 10
 priority_batch_size = 16
     tuned_model_dir = None
             verbose = False
                 xla = False

Namespace(bert_model_dir='D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\chinese_L-12_H-768_A-12', ckpt_name='bert_model.ckpt', config_name='bert_config.json', cors='*', cpu=True, device_map=[], fp16=False, gpu_memory_fraction=0.5, http_max_connect=10, http_port=None, lstm_size=128, mask_cls_sep=False, max_batch_size=1024, max_seq_len=510, mode='NER', model_dir='D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\model', model_pb_dir='D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\model', num_worker=1, pooling_layer=[-2], pooling_strategy=<PoolingStrategy.REDUCE_MEAN: 2>, port=5555, port_out=5556, prefetch_size=10, priority_batch_size=16, tuned_model_dir=None, verbose=False, xla=False)

I:VENTILATOR:[__i:__i: 91]:lodding ner model, could take a while...
pb_file exits D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\model\ner_model.pb
I:VENTILATOR:[__i:__i:100]:optimized graph is stored at: D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\model\ner_model.pb
I:VENTILATOR:[__i:_ru:148]:bind all sockets
I:VENTILATOR:[__i:_ru:153]:open 8 ventilator-worker sockets, tcp://127.0.0.1:51535,tcp://127.0.0.1:51536,tcp://127.0.0.1:51537,tcp://127.0.0.1:51538,tcp://127.0.0.1:51539,tcp://127.0.0.1:51540,tcp://127.0.0.1:51541,tcp://127.0.0.1:51542
I:VENTILATOR:[__i:_ru:157]:start the sink
I:SINK:[__i:_ru:317]:ready
I:VENTILATOR:[__i:_ge:239]:get devices
I:VENTILATOR:[__i:_ge:271]:device map: worker 0 -> cpu
I:WORKER-0:[__i:_ru:500]:use device cpu, load graph from D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\model\ner_model.pb
WARNING:tensorflow:From D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\bert_base\server\helper.py:162: The name tf.logging.set_verbosity is deprecated. Please use tf.compat.v1.logging.set_verbosity instead.
WARNING:tensorflow:From D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master\bert_base\server\helper.py:162: The name tf.logging.ERROR is deprecated. Please use tf.compat.v1.logging.ERROR instead.
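For anyone reproducing this outside the PyCharm debugger, a minimal sketch of starting the same server is shown below. It assumes bert_base.server exposes BertServer and helper.get_args_parser in the same way as the bert-as-service project this server code is derived from; the flag names and values are taken from the usage line in the log above.

```python
# Minimal sketch: start the NER server with the same arguments as in the log above.
# Assumption: bert_base.server provides BertServer and helper.get_args_parser,
# mirroring the bert-as-service API this server code is based on.
from bert_base.server import BertServer
from bert_base.server.helper import get_args_parser

ROOT = r'D:\Programing\ProgramData\PycharmProjects\BERT-BiLSTM-CRF-NER-master\BERT-BiLSTM-CRF-NER-master'

if __name__ == '__main__':
    args = get_args_parser().parse_args([
        '-bert_model_dir', ROOT + r'\chinese_L-12_H-768_A-12',
        '-model_pb_dir',   ROOT + r'\model',
        '-model_dir',      ROOT + r'\model',
        '-mode',           'NER',
        '-max_seq_len',    '510',
        '-cpu',
    ])
    server = BertServer(args)
    server.start()  # "ready and listening" is expected only after the worker finishes loading the graph
    server.join()
```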

As a result, when I then run client_test to do prediction, there is no response at all.
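For reference, the call that client_test makes boils down to something like the sketch below; this is a minimal sketch, assuming the BertClient usage shown in this repository's README, with the default ports 5555/5556 from the config dump above. encode() blocks until the server answers, so it will sit silently as long as the server has not finished starting up.

```python
# Minimal sketch of the prediction call behind client_test.
# Assumption: bert_base.client.BertClient accepts mode='NER' as shown in the repo README;
# ip/port default to localhost:5555/5556, matching the config dump above.
from bert_base.client import BertClient

bc = BertClient(show_server_config=False, check_version=False,
                check_length=False, mode='NER')
result = bc.encode(['今天天气不错，我们去故宫玩吧。'])  # blocks until the server replies
print(result)
bc.close()
```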