guillaume-chevalier / LSTM-Human-Activity-Recognition

Human Activity Recognition example using TensorFlow on smartphone sensors dataset and an LSTM RNN. Classifying the type of movement amongst six activity categories - Guillaume Chevalier
MIT License
3.35k stars 938 forks

Saving & Loading the model #15

Open kiritbasu opened 6 years ago

kiritbasu commented 6 years ago

Do you have any examples of Saving the model and Loading it back up to run a prediction?

shoaib77 commented 6 years ago

I have the same problem regarding saving and loading this model.

guillaume-chevalier commented 6 years ago

There are a lot of questions online about this; for example, the following link might help. To sum up, you just need to save the TensorFlow graph to disk and reload it, and you can reload it in Python, or even in C++: https://stackoverflow.com/questions/35508866/tensorflow-different-ways-to-export-and-run-graph-in-c I didn't read it fully, but it seems well documented, and I have already saved models to disk in the past.

For now, I don't have the time to implement this. At least, let me give you a hint: you should give a name to the input placeholders and the variables you want to get. For example, in this code:

# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])

We'll need to name the params, if I'm not wrong:

# Graph input/output
x = tf.placeholder(tf.float32, [None, n_steps, n_input], name="x")
y = tf.placeholder(tf.float32, [None, n_classes], name="y")

Then in the sess.run, instead of referencing the Python variables, you reference a specially formatted string that refers to the tensor's name in the graph. From what I recall, this string should be like "x:0" in place of the Python tensor reference, like this:

one_hot_predictions, accuracy, final_loss = sess.run(
    [pred, accuracy, cost],
    feed_dict={
        "x:0": X_test,
        "y:0": one_hot(y_test)
    }
)

So once you load back the model you'll need to use the string as you don't have the named Python variable holding the placeholder anymore. Hope this helps.

jaemin93 commented 5 years ago

I have the same problem. I saved a ckpt and loaded the ckpt for inference, but its results were very different from training. I checked the test data and found the problem: INPUT_SIGNAL_TYPES is a Python set, which is unordered, so the signal order can differ between the train and test code. Change INPUT_SIGNAL_TYPES to an ordered data type. I'm not fluent in English, but I hope this helps.
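To illustrate the point above: a Python set has no guaranteed iteration order, so if INPUT_SIGNAL_TYPES is a set, the sensor channels can be read in a different order by the training and test scripts, silently permuting the input features. An ordered type such as a list fixes this (a minimal sketch; the six signal names here are a subset of the repo's dataset, for illustration):

```python
# Unordered: a set's iteration order is arbitrary and may differ
# between runs or interpreters, permuting the input channels.
signals_set = {"body_acc_x", "body_acc_y", "body_acc_z",
               "body_gyro_x", "body_gyro_y", "body_gyro_z"}

# Ordered: a list (or tuple) preserves exactly the order written,
# so train and test read the columns identically.
INPUT_SIGNAL_TYPES = ["body_acc_x", "body_acc_y", "body_acc_z",
                      "body_gyro_x", "body_gyro_y", "body_gyro_z"]

# Same contents, but only the list guarantees a stable order:
assert sorted(signals_set) == sorted(INPUT_SIGNAL_TYPES)
```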

guillaume-chevalier commented 5 years ago

This issue will be fixed by PR #32.

arvindchandel commented 3 years ago

> (quoting @guillaume-chevalier's earlier answer above about naming the placeholders and feeding them as "x:0" / "y:0")

@guillaume-chevalier I tried this to run a prediction on a new sample X_val, and it's giving me the error 'pred' not defined. I am running it like below:

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('/home/arvind/checkpoints/model.ckpt.meta')
    new = saver.restore(sess, tf.train.latest_checkpoint('/home/arvind/checkpoints/'))
    graph = tf.get_default_graph()
    input_x = graph.get_tensor_by_name("x:0")
    res = graph.get_tensor_by_name("y:0")
    feed_dict = {input_x: X_val}
    output = sess.run([pred], feed_dict=feed_dict)
    print(output)

Error: 'pred' not defined.

arvindchandel commented 3 years ago

Got it working.

zlg9folira commented 3 years ago

Got it working.

How did you get it working ? Could you add the missing part(s) for re-constructing pred ?

GUIMINLONG commented 1 year ago

I have the same problem regarding re-constructing pred.

GUIMINLONG commented 1 year ago

one_hot_predictions, accuracy, final_loss = sess.run(
    [pred, accuracy, cost],
    feed_dict={
        "x:0": X_test,
        "y:0": one_hot(y_test)
    }
)

How did you get it working?