Zumbalamambo closed this issue 5 years ago
You'd need to run OpenPose on your camera, have it output the pose keypoints of the current frame, then grab the latest keypoints (reformatted), append them to a rolling window (X_live), and feed that into the model (with its pretrained weights and biases), i.e.:
pred = LSTM_RNN(X_live, weights, biases)
or
pred_out = sess.run([pred], feed_dict={x: X_live})
(if still in a tf.InteractiveSession)
I'm in the process of updating the repo; one addition will be a quick test on real data at the end. However, it won't be linked directly to OpenPose, as you will still need to install that locally.
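For illustration, here is a minimal sketch of that rolling-window idea. The values of n_steps and n_input are assumptions (match them to your training config), on_new_frame is a hypothetical helper rather than part of the repo, and sess, pred, and x are assumed to be the session, output op, and input placeholder already set up as discussed in this thread:

from collections import deque
import numpy as np

n_steps = 32   # window length in frames (assumed; use your training value)
n_input = 36   # 18 keypoints x 2 coords per frame (assumed)

window = deque(maxlen=n_steps)  # rolling window; old frames drop off the front

def on_new_frame(keypoints):
    # keypoints: flat length-n_input vector from OpenPose, already reformatted
    window.append(keypoints)
    if len(window) < n_steps:
        return None  # not enough frames buffered yet
    X_live = np.array(window).reshape(1, n_steps, n_input)
    return sess.run(pred, feed_dict={x: X_live})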
Great work! I am also working on the same idea, comparing different networks (CNN, LSTM, and so on). I am very interested in online classification. Do you know, roughly, how much time this LSTM network takes to classify an activity (with your computer specs)?
After training, it only takes about 0.3 ms per classification. BTW, I'm using a single GTX 1080 Ti.
Any chance you can share an example of saving the model, weights, etc. and restoring that model to run inference? I'm struggling to get it to load the two LSTM cells.
Error:
FailedPreconditionError (see above for traceback): Attempting to use uninitialized value rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel_1
When I print out the restored graph I see rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel/*. Does this have to do with a bug saving or restoring a multi-layer RNN?
Edit: I finally got it working last night. I needed to set up the model, placeholders, and variables (weights & biases), then set up the session and restore the variables.
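In other words, rebuild the graph first, then open the session and restore. A hedged sketch of that ordering (LSTM_RNN follows the repo's naming; the hyperparameters n_steps, n_input, n_hidden, and n_classes are assumed to be defined as at training time, and the checkpoint path is a placeholder):

import tensorflow as tf

# 1. Rebuild the graph exactly as at training time (placeholders + variables)
x = tf.placeholder(tf.float32, [None, n_steps, n_input])
weights = {
    'hidden': tf.Variable(tf.random_normal([n_input, n_hidden])),
    'out': tf.Variable(tf.random_normal([n_hidden, n_classes])),
}
biases = {
    'hidden': tf.Variable(tf.random_normal([n_hidden])),
    'out': tf.Variable(tf.random_normal([n_classes])),
}
pred = LSTM_RNN(x, weights, biases)

# 2. Only after the graph exists: create the saver and session, then restore
saver = tf.train.Saver()
sess = tf.Session()
saver.restore(sess, "/save_path/model.ckpt")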
Check out my comment in issue #5:
You can save and then restore the trained TensorFlow model by creating a tf.train.Saver() instance and using it to save your model checkpoint. See https://www.tensorflow.org/guide/saved_model for more details. Here's how I would do it:
# Create the saver before training
saver = tf.train.Saver()
retrain = False

# Check whether to retrain or to import a saved model
if not retrain:
    saver.restore(sess, "/dataset_path/model.ckpt")
    print("Model restored.")
    # code to run inference...

# Check whether to save the current model
if retrain:
    save_path = saver.save(sess, "/save_path/model.ckpt")
    print("Model saved in file: %s" % save_path)
Running inference on your own OpenPose output is straightforward if you don't need it in real time (i.e. running from a camera). It's simply a matter of converting the JSON output of OpenPose to the same format as the X input I used (see def load_X and the readme; NOTE: I think the OpenPose format has changed slightly since I last worked on this), then running the below after loading your trained model (a hedged JSON-conversion sketch follows the snippet):
X_val = load_X(test_file)
preds = sess.run(
    [pred],
    feed_dict={
        x: X_val
    }
)
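For reference, a hedged sketch of one way to do that JSON-to-X conversion, as an alternative to writing a text file for load_X. The "people"/"pose_keypoints_2d" layout reflects recent OpenPose output and may differ in older versions, and the directory path and filename pattern are assumptions:

import glob
import json
import numpy as np

def openpose_json_to_row(path):
    with open(path) as f:
        data = json.load(f)
    kp = data["people"][0]["pose_keypoints_2d"]  # [x0, y0, c0, x1, y1, c1, ...]
    return [v for i, v in enumerate(kp) if i % 3 != 2]  # drop confidence scores

frames = sorted(glob.glob("/json_dir/*_keypoints.json"))  # placeholder path
X_val = np.array([openpose_json_to_row(p) for p in frames])
X_val = X_val.reshape(1, X_val.shape[0], -1)  # (batch, steps, input)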
Working from a camera or video is much trickier. The way I have done it in the past is by using the OpenPose C++ example /openpose/examples/openpose/openpose.cpp (in the OpenPose package), which works out of the box on video or cameras, grabbing the output from there, and then using the ROS framework to stream the OpenPose output to a Python script that uses the trained and saved model above. Let me know how you go after giving it a try.
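As a rough sketch of the receiving side of that pipeline (the topic name and String-over-JSON transport are assumptions, since real OpenPose ROS wrappers define their own message types; on_new_frame is the hypothetical rolling-window helper sketched earlier in the thread):

import json
import rospy
from std_msgs.msg import String

def keypoints_callback(msg):
    keypoints = json.loads(msg.data)  # assumes keypoints arrive JSON-encoded
    result = on_new_frame(keypoints)  # rolling-window helper from the sketch above
    if result is not None:
        rospy.loginfo("prediction: %s", result)

rospy.init_node("activity_classifier")
rospy.Subscriber("/openpose/keypoints", String, keypoints_callback)
rospy.spin()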
@sberryman Hello, I am trying to save and restore the model too, and I ran into the error you mentioned above. I set up the session with init = tf.global_variables_initializer() and that made the error go away, but when I test the model the accuracy is very low (around 0.2). My saved model is less than 2 MB (after running 300 epochs, the train and test accuracy were both above 0.9 as described in the tutorial). I cannot tell what leads to the low accuracy: my model, or the way I set up the session. Could you share your example with me, please?
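A likely culprit for that near-chance accuracy (an assumption based on the symptoms, not something confirmed in this thread): running tf.global_variables_initializer() after saver.restore() overwrites the trained weights with fresh random values.

# Broken ordering: the initializer clobbers the restored weights
# saver.restore(sess, "/save_path/model.ckpt")
# sess.run(tf.global_variables_initializer())

# Safe ordering: initialize first (or skip the initializer entirely), then restore
sess.run(tf.global_variables_initializer())
saver.restore(sess, "/save_path/model.ckpt")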
It works now.
Can you list the steps for integrating OpenPose with your repository? How can we dump data into the X_live variable? Or, if OpenPose runs in a standalone manner, how can we integrate the repo to pick up the JSON data dumped by OpenPose and feed it to the LSTM model?
@Zumbalamambo @sberryman @kli017 Can you please share some code if you solved this issue? I am trying to do the same, but I am struggling to define the correct way to store/restore the variables and placeholders.
@caloc I never spent the time to get it working. Good luck!
May I know how I can do online inference from a real-time camera?