kusterlab / prosit

Prosit offers high-quality predicted MS2 spectra for any organism and protease, as well as iRT prediction. If Prosit is helpful for your research, please cite "Gessulat, Schmidt et al. 2019", DOI 10.1038/s41592-019-0426-7
https://www.proteomicsdb.org/prosit/
Apache License 2.0

Tensor("prediction_target:0", shape=(?, ?), dtype=float32) must be from the same graph as Tensor("prediction/BiasAdd:0", shape=(?, 1), dtype=float32) #7

Closed Zethson closed 5 years ago

Zethson commented 5 years ago

Hi,

I am very interested in Prosit and hope that you can help me. I am using Arch Linux:

zeth@master ~/P/prosit> pacman -Qs | grep nvidia
local/libnvidia-container-bin 1.0.2-1
local/libnvidia-container-tools-bin 1.0.2-1
local/nvidia 430.26-5
local/nvidia-container-runtime-bin 2.0.0+3.docker18.09.6-1
local/nvidia-container-runtime-hook-bin 1.4.0-1
local/nvidia-docker 2.0.3-4
local/nvidia-utils 430.26-1
local/opencl-nvidia 430.26-1
zeth@master ~/P/prosit> pacman -Qs | grep docker
local/docker 1:18.09.6-1
local/docker-compose 1.24.0-1
local/nvidia-container-runtime-bin 2.0.0+3.docker18.09.6-1
local/nvidia-docker 2.0.3-4
local/python-docker 4.0.2-1
local/python-docker-pycreds 0.4.0-1
    Python bindings for the docker credentials store API
local/python-dockerpty 0.4.1-4
    Python library to use the pseudo-tty of a docker container

I freshly cloned prosit and ran

zeth@master ~/P/prosit> make server MODEL=/root/model

/root/model contains the model you shared on figshare.

When sending a request to the server with curl:

zeth@master ~/P/prosit> curl -F "peptides=@examples/peptidelist.csv" http://127.0.0.1:5000/predict/

I get an internal server error:

[2019-06-22 15:53:24,901] ERROR in app: Exception on /predict/ [POST]
Traceback (most recent call last):
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 2311, in wsgi_app
    response = self.full_dispatch_request()
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1834, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1737, in handle_user_exception
    reraise(exc_type, exc_value, tb)
  File "/usr/local/lib/python3.5/dist-packages/flask/_compat.py", line 36, in reraise
    raise value
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1832, in full_dispatch_request
    rv = self.dispatch_request()
  File "/usr/local/lib/python3.5/dist-packages/flask/app.py", line 1818, in dispatch_request
    return self.view_functions[rule.endpoint](**req.view_args)
  File "/root/prosit/server.py", line 28, in predict
    result = prediction.predict(tensor, model, model_config)
  File "/root/prosit/prediction.py", line 14, in predict
    model.compile(optimizer="adam", loss="mse")
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 333, in compile
    sample_weight, mask)
  File "/usr/local/lib/python3.5/dist-packages/keras/engine/training_utils.py", line 403, in weighted
    score_array = fn(y_true, y_pred)
  File "/usr/local/lib/python3.5/dist-packages/keras/losses.py", line 14, in mean_squared_error
    return K.mean(K.square(y_pred - y_true), axis=-1)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/ops/math_ops.py", line 848, in binary_op_wrapper
    with ops.name_scope(None, op_name, [x, y]) as name:
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 5770, in __enter__
    g = _get_graph_from_inputs(self._values)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 5428, in _get_graph_from_inputs
    _assert_same_graph(original_graph_element, graph_element)
  File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/framework/ops.py", line 5364, in _assert_same_graph
    original_item))
ValueError: Tensor("prediction_target:0", shape=(?, ?), dtype=float32) must be from the same graph as Tensor("prediction/BiasAdd:0", shape=(?, 1), dtype=float32).

Do you have any idea?

Thanks!

Zethson commented 5 years ago

Since multiple people seem to have issues, would it be possible for you to distribute a Docker/Singularity container or image that does not depend on CUDA/nvidia-cuda and runs predictions on the CPU, perhaps as an additional option?
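In the meantime, a generic workaround I could imagine (not Prosit-specific, and assuming the CUDA libraries in the existing image still load) would be to hide the GPU from TensorFlow before it is imported, so that it falls back to the CPU:

import os

# Hide all GPUs from TensorFlow so it falls back to the CPU.
# This must run before TensorFlow is imported anywhere in the process.
os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

import tensorflow as tf  # now only sees CPU devices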

gessulat commented 5 years ago

Please understand that we are currently very busy extending Prosit in various directions, which is why we are only able to support Prosit with its current dependencies at the moment.

This error does not imply that the problem is CUDA related. Please verify that your TensorFlow / CUDA setup is working with make jump and then interactively with ipython. Also see #2 and #4 for specifics. Please let us know when exactly the problem arises and what you see in the interactive environment.
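For example, a generic TensorFlow 1.x sanity check along these lines (not part of Prosit itself) should run inside the container without CUDA errors:

import tensorflow as tf
from tensorflow.python.client import device_lib

print(tf.__version__)
# Expect a /device:GPU:0 entry if the GPU is visible to TensorFlow.
print([d.name for d in device_lib.list_local_devices()])

# A trivial graph should build and run without CUDA errors.
with tf.Session() as sess:
    print(sess.run(tf.constant([1.0, 2.0]) + tf.constant([3.0, 4.0])))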

tobigithub commented 5 years ago

This is a Prosit Makefile or server script issue. The interactive session started with "make jump" works with CUDA 10 even though it is not officially supported. The current Prosit server script and server setup do not work with the same configuration and raise an error. So the problem is not with CUDA but with the Prosit setup, configs and scripts.
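For illustration, this particular ValueError usually appears when a Keras model is loaded under one TensorFlow graph but compiled or used under another, which can happen when a Flask request handler runs in a separate thread. A rough sketch of the usual workaround (placeholder names and paths, not Prosit's actual server code):

import tensorflow as tf
import keras

# Load and compile the model once at startup, and remember the graph
# that its tensors live in. "path/to/model.hdf5" is a placeholder.
model = keras.models.load_model("path/to/model.hdf5")
model.compile(optimizer="adam", loss="mse")
graph = tf.get_default_graph()

def predict(tensor):
    # Each Flask request may run in a thread where a different default
    # graph is active; re-entering the original graph keeps the model's
    # tensors and any newly created ops (e.g. the loss) in the same graph.
    with graph.as_default():
        return model.predict(tensor)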

gessulat commented 5 years ago

Please include specific error messages so we can understand the problem. The setup, configs and scripts are run regularly by several people inside and outside our lab and work for them. Without error messages it is hard to figure out how your setup or usage differs.

gessulat commented 5 years ago

See #8