GaoQ1 / rasa_chatbot_cn

Building a Chinese dialogue system based on the newest version of Rasa

Hello, I hit an error after running `make train`; the output is below. Hoping someone can explain. (Resolved) #51

Closed lbwnh123 closed 5 years ago

lbwnh123 commented 5 years ago

```
$ make train
rasa train --domain domain.yml --data data --config config.yml --out models
2019-07-12 15:41:01 INFO     rasa.model  - Data (core-config) for Core model changed.
2019-07-12 15:41:01 INFO     rasa.model  - Data (nlu-config) for NLU model changed.
Training Core model...
2019-07-12 15:41:01 INFO     root  - Generating grammar tables from /usr/lib/python3.6/lib2to3/Grammar.txt
2019-07-12 15:41:01 INFO     root  - Generating grammar tables from /usr/lib/python3.6/lib2to3/PatternGrammar.txt
Using TensorFlow backend.
Processed Story Blocks: 100%|████████| 10/10 [00:00<00:00, 1451.12it/s, # trackers=1]
Processed Story Blocks: 100%|████████| 10/10 [00:00<00:00, 291.18it/s, # trackers=7]
Processed Story Blocks: 100%|████████| 10/10 [00:00<00:00, 106.21it/s, # trackers=19]
Processed Story Blocks: 100%|████████| 10/10 [00:00<00:00, 105.48it/s, # trackers=16]
Processed trackers: 100%|████████| 491/491 [00:05<00:00, 96.73it/s, # actions=164]
```


```
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to
==================================================================================================
input_1 (InputLayer)            (None, 3, 49)        0
__________________________________________________________________________________________________
attention_1 (Attention)         (None, 3, 1024)      150528      input_1[0][0]
                                                                 input_1[0][0]
                                                                 input_1[0][0]
__________________________________________________________________________________________________
attention_2 (Attention)         (None, 3, 1024)      3145728     attention_1[0][0]
                                                                 attention_1[0][0]
                                                                 attention_1[0][0]
__________________________________________________________________________________________________
global_average_pooling1d_1 (Glo (None, 1024)         0           attention_2[0][0]
__________________________________________________________________________________________________
dropout_1 (Dropout)             (None, 1024)         0           global_average_pooling1d_1[0][0]
__________________________________________________________________________________________________
dense_1 (Dense)                 (None, 19)           19475       dropout_1[0][0]
==================================================================================================
Total params: 3,315,731
Trainable params: 3,315,731
Non-trainable params: 0
__________________________________________________________________________________________________
```


```
2019-07-12 15:41:09 INFO     rasa.core.policies.keras_policy  - Fitting model with 164 total samples and a validation split of 0.1
Epoch 1/100
164/164 [==============================] - 1s 8ms/step - loss: 2.4029 - acc: 0.3841
Epoch 2/100
164/164 [==============================] - 1s 4ms/step - loss: 1.4817 - acc: 0.6159
Epoch 3/100
164/164 [==============================] - 1s 4ms/step - loss: 1.0786 - acc: 0.6341
[... epochs 4-99 elided: loss decreases steadily, acc first reaches 1.0000 at epoch 12 ...]
Epoch 100/100
164/164 [==============================] - 1s 4ms/step - loss: 6.6815e-04 - acc: 1.0000
2019-07-12 15:42:09 INFO     rasa.core.policies.keras_policy  - Done fitting keras policy model
Processed trackers: 100%|████████| 10/10 [00:00<00:00, 1067.77it/s, # actions=50]
Processed actions: 50it [00:00, 6449.01it/s, # examples=50]
2019-07-12 15:42:10 INFO     rasa.core.agent  - Persisted model to '/tmp/tmpfua68ef0/core'
Core model training completed.
Training NLU model...
Traceback (most recent call last):
  File "/home/ubuntu/.local/lib/python3.6/site-packages/bert_serving/client/__init__.py", line 206, in arg_wrapper
    return func(self, *args, **kwargs)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/bert_serving/client/__init__.py", line 232, in server_status
    return jsonapi.loads(self._recv(req_id).content[1])
  File "/home/ubuntu/.local/lib/python3.6/site-packages/bert_serving/client/__init__.py", line 164, in _recv
    raise e
  File "/home/ubuntu/.local/lib/python3.6/site-packages/bert_serving/client/__init__.py", line 153, in _recv
    response = self.receiver.recv_multipart()
  File "/home/ubuntu/.local/lib/python3.6/site-packages/zmq/sugar/socket.py", line 470, in recv_multipart
    parts = [self.recv(flags, copy=copy, track=track)]
  File "zmq/backend/cython/socket.pyx", line 796, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 832, in zmq.backend.cython.socket.Socket.recv
  File "zmq/backend/cython/socket.pyx", line 191, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/socket.pyx", line 186, in zmq.backend.cython.socket._recv_copy
  File "zmq/backend/cython/checkrc.pxd", line 19, in zmq.backend.cython.checkrc._check_rc
zmq.error.Again: Resource temporarily unavailable
```

The above exception was the direct cause of the following exception:

```
Traceback (most recent call last):
  File "/home/ubuntu/.local/bin/rasa", line 11, in <module>
    sys.exit(main())
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/__main__.py", line 76, in main
    cmdline_arguments.func(cmdline_arguments)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/cli/train.py", line 84, in train
    kwargs=extract_additional_arguments(args),
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/train.py", line 42, in train
    kwargs=kwargs,
  File "uvloop/loop.pyx", line 1451, in uvloop.loop.Loop.run_until_complete
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/train.py", line 100, in train_async
    kwargs,
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/train.py", line 203, in _train_async_internal
    kwargs=kwargs,
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/train.py", line 256, in _do_training
    fixed_model_name=fixed_model_name,
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/train.py", line 463, in _train_nlu_with_validated_data
    config, nlu_data_directory, _train_path, fixed_model_name="nlu"
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/nlu/train.py", line 81, in train
    trainer = Trainer(nlu_config, component_builder)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/nlu/model.py", line 151, in __init__
    self.pipeline = self._build_pipeline(cfg, component_builder)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/nlu/model.py", line 163, in _build_pipeline
    component = component_builder.create_component(component_cfg, cfg)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/nlu/components.py", line 459, in create_component
    component = registry.create_component_by_config(component_config, cfg)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/nlu/registry.py", line 196, in create_component_by_config
    return component_class.create(component_config, config)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa/nlu/components.py", line 244, in create
    return cls(component_config)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/rasa_nlu_gao/featurizers/bert_vectors_featurizer.py", line 61, in __init__
    identity=identity
  File "/home/ubuntu/.local/lib/python3.6/site-packages/bert_serving/client/__init__.py", line 451, in __init__
    self.available_bc = [BertClient(**kwargs) for _ in range(max_concurrency)]
  File "/home/ubuntu/.local/lib/python3.6/site-packages/bert_serving/client/__init__.py", line 451, in <listcomp>
    self.available_bc = [BertClient(**kwargs) for _ in range(max_concurrency)]
  File "/home/ubuntu/.local/lib/python3.6/site-packages/bert_serving/client/__init__.py", line 108, in __init__
    s_status = self.server_status
  File "/home/ubuntu/.local/lib/python3.6/site-packages/bert_serving/client/__init__.py", line 215, in arg_wrapper
    _raise(t_e, _e)
  File "/home/ubuntu/.local/lib/python3.6/site-packages/bert_serving/client/_py3_var.py", line 9, in _raise
    raise t_e from _e
TimeoutError: no response from the server (with "timeout"=10000 ms), please check the following: is the server still online? is the network broken? are "port" and "port_out" correct? are you encoding a huge amount of data whereas the timeout is too small for that?
Makefile:2: recipe for target 'train' failed
make: *** [train] Error 1
```
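The `TimeoutError` above means the `BertClient` created by `bert_vectors_featurizer` never got a reply from `bert-serving-server` within the 10 s timeout, typically because the server is not running or is listening on different ports. A minimal, standard-library-only pre-flight check (a sketch, assuming the bert-as-service defaults of `port` 5555 and `port_out` 5556 on localhost):

```python
import socket


def port_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # bert-as-service defaults: `port` 5555 (client -> server push),
    # `port_out` 5556 (server -> client pull). Both must be reachable.
    for p in (5555, 5556):
        state = "open" if port_open("127.0.0.1", p) else "CLOSED"
        print(f"port {p}: {state}")
```

Running this before `make train` tells you immediately whether the featurizer will be able to reach the server, instead of waiting for the 10 s timeout mid-training.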

lbwnh123 commented 5 years ago

Following the author's post 《rasa对话系统踩坑记(八)》 ("Rasa dialogue system pitfalls, part 8") resolves this: https://www.jianshu.com/p/6a93209c48a4
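In short, the featurizer in this repo's NLU pipeline is just a client: a `bert-serving-start` server must already be up (with matching ports) before `make train` runs. For reference, a sketch of the relevant `config.yml` fragment; the component name `bert_vectors_featurizer` comes from the `rasa_nlu_gao` path in the traceback, but the exact keys may differ across versions of that package:

```yaml
language: zh
pipeline:
- name: "bert_vectors_featurizer"   # from rasa_nlu_gao (see traceback)
  ip: "127.0.0.1"    # host where bert-serving-start is running
  port: 5555         # must match the server's -port
  port_out: 5556     # must match the server's -port_out
  timeout: 10000     # ms; the TimeoutError above fired at this limit
```

If the server is on another machine or uses non-default ports, adjust `ip`, `port`, and `port_out` here accordingly.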