demongolem opened 5 years ago
I am labeling this as a bug; consider the two different behaviors for the following document:
Liverpool is a very good football team. Arsenal is not though.
With application.run(port='8086', threaded=False) in Entry.py, I get a numeric response back when I POST a request for document-based sentiment. When I change that line to application.run(port='8086', threaded=True), I get the following stack trace and a 500 response:
Traceback (most recent call last):
  File "Python36\lib\site-packages\flask\app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
  File "Python36\lib\site-packages\flask\app.py", line 1815, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "Python36\lib\site-packages\flask\app.py", line 1718, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "Python36\lib\site-packages\flask\_compat.py", line 35, in reraise
    raise value
  File "Python36\lib\site-packages\flask\app.py", line 1813, in full_dispatch_request
    rv = self.dispatch_request()
  File "Python36\lib\site-packages\flask\app.py", line 1799, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "MultilevelSentiment\Entry.py", line 66, in get_spacy_sentiment
    return str(compute_spacy_sentiment(text))
  File "MultilevelSentiment\Entry.py", line 70, in compute_spacy_sentiment
    return SpacySentiment.evaluate_without_labels(nlp, text)
  File "MultilevelSentiment\SpacySentiment.py", line 176, in evaluate_without_labels
    for doc in nlp.pipe(texts, batch_size=1000, n_threads=4):
  File "Python36\lib\site-packages\spacy\language.py", line 572, in pipe
    for doc in docs:
  File "MultilevelSentiment\SpacySentiment.py", line 73, in pipe
    ys = self._model.predict(Xs)
  File "Python36\lib\site-packages\keras\engine\training.py", line 1164, in predict
    self._make_predict_function()
  File "Python36\lib\site-packages\keras\engine\training.py", line 554, in _make_predict_function
    **kwargs)
  File "Python36\lib\site-packages\keras\backend\tensorflow_backend.py", line 2744, in function
    return Function(inputs, outputs, updates=updates, **kwargs)
  File "Python36\lib\site-packages\keras\backend\tensorflow_backend.py", line 2546, in __init__
    with tf.control_dependencies(self.outputs):
  File "Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 5004, in control_dependencies
    return get_default_graph().control_dependencies(control_inputs)
  File "Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 4543, in control_dependencies
    c = self.as_graph_element(c)
  File "Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3490, in as_graph_element
    return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
  File "Python36\lib\site-packages\tensorflow\python\framework\ops.py", line 3569, in _as_graph_element_locked
    raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("dense_2/Sigmoid:0", shape=(?, 1), dtype=float32) is not an element of this graph.
INFO:werkzeug:127.0.0.1 - - [16/Jan/2019 13:14:44] "POST /spacy HTTP/1.1" 500 -
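For context, here is a minimal sketch of the kind of setup that runs into this, reconstructed from the traceback. The module and model names (SpacySentiment, en_core_web_sm) and the route body are assumptions for illustration, not the repository's actual code.

```python
# Sketch only: assumed layout of Entry.py, pieced together from the traceback.
from flask import Flask, request
import spacy

import SpacySentiment  # hypothetical wrapper module that embeds a Keras model

application = Flask(__name__)
nlp = spacy.load('en_core_web_sm')  # assumed model; loaded once in the main thread

@application.route('/spacy', methods=['POST'])
def get_spacy_sentiment():
    text = request.get_data(as_text=True)
    # Works with threaded=False; with threaded=True this handler runs on a
    # Flask worker thread, and the Keras model inside the pipeline no longer
    # sees the TensorFlow graph it was built on, producing the 500 above.
    return str(SpacySentiment.evaluate_without_labels(nlp, text))

if __name__ == '__main__':
    application.run(port='8086', threaded=False)  # flipping this to True reproduces the error
```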
This is also posted on the spaCy issue tracker and is currently open.
To copy from that issue: this will be fixed in the spaCy 2.1 release. The current official release (at the time of writing) is 2.0.18. There is a 2.1 pre-release candidate available for anyone who wants to give it a go (I have not yet).
For Linux users of this repository, the following settings could provide a short-term fix until we go all in on 2.1:
export OMP_NUM_THREADS=1
export MKL_NUM_THREADS=1
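If it is more convenient to keep this inside the application than in every launch script, the same variables can be pinned at the top of Entry.py before the numeric libraries are imported. A small sketch, assuming nothing earlier in the file pulls in numpy or spaCy:

```python
# Pin the OpenMP/MKL thread pools to one thread before numpy/spaCy load.
# These variables are typically read when the libraries initialize, so this
# must run before the first import of the numeric stack.
import os
os.environ['OMP_NUM_THREADS'] = '1'
os.environ['MKL_NUM_THREADS'] = '1'

import spacy  # imported only after the environment is pinned
```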
Right now, threaded=False. If we were to change this, the application would not run correctly. Look into making the changes necessary so we can use multi-threading. This could just be an issue with the spaCy annotator, as the other annotators do work in the multi-threaded setup.
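One possible direction for the multi-threading fix, independent of the spaCy 2.1 upgrade: the "not an element of this graph" error is the classic symptom of calling a Keras model (TF 1.x backend) from a thread other than the one it was built on. A common workaround is to capture the TensorFlow default graph when the model is loaded and re-enter it around predict() calls, plus a lock because Keras predict() is not thread-safe. The sketch below uses a hypothetical model path and function name; it is not the repository's actual SpacySentiment code.

```python
# Sketch of a thread-safe prediction wrapper for a Keras model on TF 1.x.
import threading

import tensorflow as tf
from keras.models import load_model

_model = load_model('sentiment_model.h5')  # hypothetical model file
_graph = tf.get_default_graph()            # graph the model was built on
_lock = threading.Lock()                   # Keras predict() is not thread-safe

def predict_threadsafe(Xs):
    # Re-enter the original graph; without this, a Flask worker thread sees a
    # fresh default graph and predict() raises the ValueError shown above.
    with _lock, _graph.as_default():
        return _model.predict(Xs)
```

If this pans out for the spaCy annotator, SpacySentiment.pipe() could call a wrapper like predict_threadsafe() instead of calling self._model.predict() directly.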