vkaul11 opened this issue 4 years ago
First thing that comes to mind is that you are using a rather "old style" of TensorFlow: for example, I don't think I've seen placeholder or feed_dict in years.
How are you training? Using the libsvm generator? Why not use that same generator for the test data?
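As a point of reference (not from the thread): a minimal sketch of a tf.data-based input function that avoids placeholder/feed_dict, assuming features is a dict of numpy arrays keyed by feature id and labels is a numpy array, as returned by load_libsvm_data. The name make_dataset_input_fn and the shapes are assumptions for illustration.

import tensorflow as tf

def make_dataset_input_fn(features, labels, batch_size, shuffle=True):
    """Builds an Estimator input_fn from in-memory numpy arrays via tf.data."""
    def _input_fn():
        # Each feature array is assumed to be shaped [num_queries, list_size, 1].
        dataset = tf.data.Dataset.from_tensor_slices((dict(features), labels))
        if shuffle:
            dataset = dataset.shuffle(buffer_size=1000).repeat()
        return dataset.batch(batch_size)
    return _input_fn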
I was just trying to follow the example here: https://github.com/tensorflow/ranking/blob/master/tensorflow_ranking/examples/tf_ranking_libsvm.py, which still uses placeholder and feed_dict. I am training like this and using the same estimator to predict:
features, labels = load_libsvm_data(FLAGS.dataset_base_path + '/train-' + FLAGS.locale + '.txt', 10)
train_input_fn, train_hook = get_train_inputs(features, labels, FLAGS.batch_size)
features_vali, labels_vali = load_libsvm_data(FLAGS.dataset_base_path + '/test-small-' + FLAGS.locale + '.txt', 10)
vali_input_fn, vali_hook = get_eval_inputs(features_vali, labels_vali)

optimizer = tf.compat.v1.train.AdagradOptimizer(learning_rate=FLAGS.learning_rate)

def _train_op_fn(loss):
    """Defines train op used in ranking head."""
    update_ops = tf.compat.v1.get_collection(tf.compat.v1.GraphKeys.UPDATE_OPS)
    minimize_op = optimizer.minimize(
        loss=loss, global_step=tf.compat.v1.train.get_global_step())
    train_op = tf.group([minimize_op, update_ops])
    return train_op

ranking_head = tfr.head.create_ranking_head(
    loss_fn=tfr.losses.make_loss_fn(tfr.losses.RankingLossKey.APPROX_NDCG_LOSS),
    eval_metric_fns=eval_metric_fns(),
    train_op_fn=_train_op_fn)

estimator = tf.estimator.Estimator(
    model_fn=tfr.model.make_groupwise_ranking_fn(
        group_score_fn=make_score_fn(),
        group_size=FLAGS.group_size,
        transform_fn=make_transform_fn(),
        ranking_head=ranking_head),
    config=tf.estimator.RunConfig(
        FLAGS.output_dir, save_checkpoints_steps=1000))

train_spec = tf.estimator.TrainSpec(
    input_fn=train_input_fn,
    hooks=[train_hook],
    max_steps=FLAGS.num_train_steps)

vali_spec = tf.estimator.EvalSpec(
    name="eval",
    input_fn=vali_input_fn,
    hooks=[vali_hook],
    # steps=10000,
    throttle_secs=15)
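The snippet stops at the EvalSpec; presumably the two specs are then wired together as in the tf_ranking_libsvm.py example, along these lines (this final call is assumed, not shown in the snippet above):

tf.estimator.train_and_evaluate(estimator, train_spec, vali_spec)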
What are the details of your scoring function?
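For context, the group_score_fn handed to tfr.model.make_groupwise_ranking_fn takes (context_features, group_features, mode, params, config) and returns one score per group member. The sketch below loosely mirrors the feed-forward scorer in tf_ranking_libsvm.py; the hidden-layer sizes and feature handling are illustrative assumptions, not the poster's lattice model.

def make_score_fn():
    """Returns a group scoring function for make_groupwise_ranking_fn."""
    def _score_fn(context_features, group_features, mode, params, config):
        del params, config  # unused in this sketch
        # Flatten and concatenate the per-example features in the group.
        group_input = [
            tf.compat.v1.layers.flatten(group_features[name])
            for name in sorted(group_features)
        ]
        cur_layer = tf.concat(group_input, 1)
        for units in [64, 32]:  # illustrative hidden-layer sizes
            cur_layer = tf.compat.v1.layers.dense(
                cur_layer, units=units, activation=tf.nn.relu)
        # One logit per group member (here assuming group_size=1).
        return tf.compat.v1.layers.dense(cur_layer, units=1)
    return _score_fn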
By making this modification to the prediction input function (https://github.com/tensorflow/ranking/issues/186) I do get prediction scores. The problem, though, is that the first score is always 1. Is that by design?
pred_result = [array([1. , 0.63665974, 0.8288901 , 0.57784206, 0.58248824, 0.57637537, 0.18180293, 0.53710747, 0.65632606, 0.5432198 ,
@vkaul11: no, the first score is not supposed to always be 1.
@vkaul11 how did you change the input function to make it work for prediction? Could you elaborate, please?
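One hedged way to do it (not the exact change from issue #186; get_predict_inputs and features_test are hypothetical names): build a tf.data-based predict input function that yields the same feature dict layout as training, without labels, and materialize the generator returned by estimator.predict.

def get_predict_inputs(features):
    """Builds a predict input_fn that feeds one query (list) per batch."""
    def _predict_input_fn():
        # features: dict of numpy arrays shaped [num_queries, list_size, 1],
        # the same layout as the training features, but with no labels.
        return tf.data.Dataset.from_tensor_slices(dict(features)).batch(1)
    return _predict_input_fn

# estimator.predict returns a generator; listing it yields one array of
# per-document scores for each query, like the pred_result shown above.
predictions = estimator.predict(input_fn=get_predict_inputs(features_test))
pred_result = list(predictions)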
While doing the ranking we use another model (a lattice) as the make_score function, but the problem is that evaluation or prediction only gives us the ranked list as output. Is it possible to get the output of the make_score function as well? I am using a single example in my test data, but I get this error when I try to convert the generator into a list:
ValueError: Cannot reshape a tensor with 25 elements to shape [1,1] (1 elements) for '{{node transform/encoding_layer/1/Reshape}} = Reshape[T=DT_FLOAT, Tshape=DT_INT32](IteratorGetNext, transform/encoding_layer/1/Reshape/shape)' with input shapes: [1,25,1], [2] and with input tensors computed as partial shapes: input[1] = [1,1].
For example, I have: