omoindrot / tensorflow-triplet-loss

Implementation of triplet loss in TensorFlow
https://omoindrot.github.io/triplet-loss
MIT License

The following error occurred when I ran train.py #19

Closed: wisewolf7 closed this issue 5 years ago

wisewolf7 commented 5 years ago

TypeError: Expected binary or unicode string, got <PrefetchDataset shapes: ((?, 784), (?,)), types: (tf.float32, tf.int32)>

omoindrot commented 5 years ago

Maybe check your version of TensorFlow? Otherwise, can you post the full trace of the error?
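
For example, a quick way to check the installed version (nothing repository-specific assumed):

import tensorflow as tf

# Prints the installed TensorFlow version, e.g. '1.4.1'
print(tf.__version__)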

alexandreduhamel commented 5 years ago

Hi, from a clean AWS Deep Learning AMI (TF version 1.4.1) and without specifying a data directory (the default):

Traceback (most recent call last):
  File "train.py", line 40, in <module>
    estimator.train(lambda: train_input_fn(args.data_dir, params))
  File "/home/ec2-user/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 302, in train
    loss = self._train_model(input_fn, hooks, saving_listeners)
  File "/home/ec2-user/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 711, in _train_model
    features, labels, model_fn_lib.ModeKeys.TRAIN, self.config)
  File "/home/ec2-user/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/estimator/estimator.py", line 694, in _call_model_fn
    model_fn_results = self._model_fn(features=features, **kwargs)
  File "/home/ec2-user/work/tensorflow-triplet-loss/model/model_fn.py", line 59, in model_fn
    images = tf.reshape(images, [-1, params.image_size, params.image_size, 1])
  File "/home/ec2-user/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/ops/gen_array_ops.py", line 3938, in reshape
    "Reshape", tensor=tensor, shape=shape, name=name)
  File "/home/ec2-user/anaconda3/envs/tensorflow_p27/lib/python2.7/site-packages/tensorflow/python/framework/op_def_library.py", line 513, in _apply_op_helper
    raise err
TypeError: Failed to convert object of type <class 'tensorflow.python.data.ops.dataset_ops.PrefetchDataset'> to Tensor. Contents: <PrefetchDataset shapes: ((?, 784), (?,)), types: (tf.float32, tf.int32)>. Consider casting elements to a supported type.

omoindrot commented 5 years ago

You need at least version 1.6, because the `train` method of an Estimator accepts a `tf.data.Dataset` as input only since v1.6.
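
For reference, a minimal sketch of an input function that returns the dataset itself and therefore needs TF >= 1.6 (dataset construction elided):

def train_input_fn(data_dir, params):
    """Train input function that returns the tf.data.Dataset itself.

    Passing the dataset directly to `Estimator.train` is only supported
    since TensorFlow 1.6.
    """
    dataset = ...  # build, shuffle and batch the MNIST dataset from data_dir
    dataset = dataset.prefetch(1)
    return dataset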

alexandreduhamel commented 5 years ago

On version 1.11.0 it works, thanks!

SaintLogos1234 commented 5 years ago

I'm running into the same problem, but I can't upgrade my TensorFlow because my CUDA version is 8. What can I do?

omoindrot commented 5 years ago

Just return tensors instead of a dataset in the input functions:

def train_input_fn(data_dir, params):
    """Train input function for the MNIST dataset.
    Args:
        data_dir: (string) path to the data directory
        params: (Params) contains hyperparameters of the model (ex: `params.num_epochs`)
    """
    dataset = ...
    dataset = dataset.prefetch(1)  # make sure you always have one batch ready to serve

    # Build a one-shot iterator over the dataset and return the (features, labels)
    # tensors it yields, instead of returning the dataset itself
    iterator = dataset.make_one_shot_iterator()
    features, labels = iterator.get_next()
    return features, labels
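
The call site in train.py stays the same (as seen in the traceback above), since an Estimator also accepts an input function that returns (features, labels) tensors:

estimator.train(lambda: train_input_fn(args.data_dir, params))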

SaintLogos1234 commented 5 years ago

This problem has been solved. Thank you very much

SaintLogos1234 commented 5 years ago

I ran pytest and the following error occurred:

============================= test session starts ==============================
platform linux -- Python 3.6.5, pytest-3.5.1, py-1.5.3, pluggy-0.6.0
rootdir: /home/xieyangyang/Downloads/tensorflow-triplet-loss-master, inifile:
plugins: remotedata-0.2.1, openfiles-0.3.0, doctestplus-0.1.3, arraydiff-0.2
collected 0 items / 1 errors

==================================== ERRORS ====================================
_____________ ERROR collecting model/tests/test_triplet_loss.py ________________
ImportError while importing test module '/home/xieyangyang/Downloads/tensorflow-triplet-loss-master/model/tests/test_triplet_loss.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py:58: in <module>
    from tensorflow.python.pywrap_tensorflow_internal import *
../../anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:28: in <module>
    _pywrap_tensorflow_internal = swig_import_helper()
../../anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py:24: in swig_import_helper
    _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
../../anaconda3/lib/python3.6/imp.py:243: in load_module
    return load_dynamic(name, filename, file)
../../anaconda3/lib/python3.6/imp.py:343: in load_dynamic
    return _load(spec)
E   ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory

During handling of the above exception, another exception occurred:
model/tests/test_triplet_loss.py:4: in <module>
    import tensorflow as tf
../../anaconda3/lib/python3.6/site-packages/tensorflow/__init__.py:24: in <module>
    from tensorflow.python import pywrap_tensorflow  # pylint: disable=unused-import
../../anaconda3/lib/python3.6/site-packages/tensorflow/python/__init__.py:49: in <module>
    from tensorflow.python import pywrap_tensorflow
../../anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py:74: in <module>
    raise ImportError(msg)
E   ImportError: Traceback (most recent call last):
E     File "/home/xieyangyang/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow.py", line 58, in <module>
E       from tensorflow.python.pywrap_tensorflow_internal import *
E     File "/home/xieyangyang/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 28, in <module>
E       _pywrap_tensorflow_internal = swig_import_helper()
E     File "/home/xieyangyang/anaconda3/lib/python3.6/site-packages/tensorflow/python/pywrap_tensorflow_internal.py", line 24, in swig_import_helper
E       _mod = imp.load_module('_pywrap_tensorflow_internal', fp, pathname, description)
E     File "/home/xieyangyang/anaconda3/lib/python3.6/imp.py", line 243, in load_module
E       return load_dynamic(name, filename, file)
E     File "/home/xieyangyang/anaconda3/lib/python3.6/imp.py", line 343, in load_dynamic
E       return _load(spec)
E   ImportError: libcublas.so.9.0: cannot open shared object file: No such file or directory
E
E   Failed to load the native TensorFlow runtime.
E
E   See https://www.tensorflow.org/install/errors
E
E   for some common reasons and solutions. Include the entire stack trace
E   above this error message when asking for help.
!!!!!!!!!!!!!!!!!!! Interrupted: 1 errors during collection !!!!!!!!!!!!!!!!!!!!
=========================== 1 error in 0.14 seconds ===========================

How can I solve this?

omoindrot commented 5 years ago

Looks like an error from your TensorFlow installation.

The error you have looks like the one in this GitHub issue.

SaintLogos1234 commented 5 years ago

It is not a problem with my TensorFlow installation; it is just that my TensorFlow version is 1.4.0, while the minimum TensorFlow version required by this code is 1.6.0.
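
For anyone else hitting this, a hypothetical guard (not part of the repository) could be added at the top of train.py to fail fast on older versions:

import tensorflow as tf
from distutils.version import LooseVersion

# Hypothetical check, not in the repository: fail fast if TensorFlow is too old
assert LooseVersion(tf.__version__) >= LooseVersion("1.6.0"), \
    "This code requires TensorFlow >= 1.6.0 (found {})".format(tf.__version__)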

omoindrot commented 5 years ago

Great, I'm closing the issue then.