abdou31 opened 5 years ago
@abdou31 Glad you have made it so far. The question you asked involves TFLite and Android development, which I'm not familiar with. I'm afraid you will have to ask people with experience in mobile app development. Good luck!
I know that; I know that you are not familiar with the Android environment. What I wrote is just an explanation of what I would like to do after testing an image with the Python script. I know that you have landmark_video.py, which can detect from a webcam and from a video, and it works fine for me. My question is: do you have a Python script that can run detection on a single custom image, not on a collection of successive images from my dataset (landmark.py with predict="test")?
If you have exported your model in the SavedModel format, then TensorFlow Serving is recommended. You can find a sample client demo here:
https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/resnet_client_grpc.py
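For reference, here is a minimal sketch of querying a running TensorFlow Serving instance over its REST API instead of gRPC; the model name `landmark`, the 128x128 input size, and port 8501 are assumptions, not values from this repository:

```python
import json

import cv2
import requests

# Prepare one image; the 128x128 input size is an assumption.
img = cv2.imread('eye.jpg')
img = cv2.resize(img, (128, 128))

# TensorFlow Serving's REST API takes a JSON body with an "instances" list.
# The model name 'landmark' and the default REST port 8501 are assumptions.
payload = json.dumps({'instances': [img.tolist()]})
response = requests.post(
    'http://localhost:8501/v1/models/landmark:predict', data=payload)
print(response.json()['predictions'])
```

This assumes a server was started beforehand, e.g. with `tensorflow_model_server --rest_api_port=8501 --model_name=landmark --model_base_path=/abs/path/to/exported_model`.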
I don't understand this point about 'exporting the model in the SavedModel format'. What I have are two types of models: a frozen inference graph (.pb) and a TFLite model.
I'm not sure. Maybe you can try this API for TFLite models: https://www.tensorflow.org/api_docs/python/tf/lite/Interpreter
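For example, a minimal single-image inference sketch with that interpreter API; the model path `landmark.tflite` and the float32 RGB input are assumptions, and the actual input size is read from the model itself:

```python
import cv2
import numpy as np
import tensorflow as tf

# Load the TFLite model; 'landmark.tflite' is a placeholder path.
interpreter = tf.lite.Interpreter(model_path='landmark.tflite')
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Read the expected input size from the model itself.
_, height, width, _ = input_details[0]['shape']

# Prepare one custom image. Float32 input is an assumption; check
# input_details[0]['dtype'] for the real type.
img = cv2.imread('my_eye_image.jpg')
img = cv2.resize(img, (width, height)).astype(np.float32)
img = np.expand_dims(img, axis=0)

interpreter.set_tensor(input_details[0]['index'], img)
interpreter.invoke()

# For this model the output should be the 80 logits, i.e. 40 (x, y) pairs.
logits = interpreter.get_tensor(output_details[0]['index'])
marks = np.reshape(logits, (-1, 2))
print(marks)
```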
I think that you did not understand me. You made landmark_video.py to test video and to show the difference between the ResNet model and the dlib prediction. I tested this script to detect the eye region in video and it works fine. But now I need to test an image instead of a video. The other script, landmark.py, can be used to test prediction on a collection of images from a dataset, but not on a custom image. I need to specify a path for the image in the code to test it. This is the code from landmark.py:
```python
else:
    predictions = estimator.predict(input_fn=_predict_input_fn)
    for _, result in enumerate(predictions):
        img = cv2.imread(result['name'].decode('ASCII') + '.jpg')
        print(result['logits'])
        marks = np.reshape(result['logits'], (-1, 2)) * IMG_WIDTH
        for mark in marks:
            cv2.circle(img, (int(mark[0]), int(mark[1])),
                       1, (0, 255, 0), -1, cv2.LINE_AA)
        img = cv2.resize(img, (512, 512))
        cv2.imshow('result', img)
        # output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
        # print(output_node_names)
        cv2.waitKey()


if __name__ == '__main__':
    tf.app.run()
```
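For what it's worth, one way to run prediction on a single custom image with this older Estimator code is to swap `_predict_input_fn` for an input function that yields one image. This is only a sketch: the feature keys `image`/`name` and the 128x128 input size are assumptions and have to match the repo's actual input pipeline:

```python
def _single_image_input_fn():
    """Hypothetical input function feeding one custom image.

    The feature keys ('image', 'name') and the 128x128 input size are
    assumptions; match them to the training input pipeline.
    """
    image = tf.read_file('path/to/my_image.jpg')
    image = tf.image.decode_jpeg(image, channels=3)
    image = tf.image.resize_images(image, (128, 128))
    image = tf.cast(image, tf.float32)
    features = {'image': image, 'name': tf.constant(b'path/to/my_image')}
    return tf.data.Dataset.from_tensors(features).batch(1)


# The existing drawing loop can then be reused:
# predictions = estimator.predict(input_fn=_single_image_input_fn)
```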
It seems that you are using an outdated file. In the latest code the prediction part is removed.
I suggest using TensorFlow Serving for prediction. The latest code exports the model in the SavedModel format, in which case the file can be used by TensorFlow Serving directly.
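To make that concrete, here is a minimal sketch of exporting a TF1 Estimator to the SavedModel format; the input key `image` and the `[None, 128, 128, 3]` shape are assumptions about this model:

```python
def serving_input_receiver_fn():
    # Hypothetical receiver: the 'image' key and the [None, 128, 128, 3]
    # shape are assumptions; match them to the model's real input.
    image = tf.placeholder(tf.float32, [None, 128, 128, 3], name='image')
    return tf.estimator.export.ServingInputReceiver(
        features={'image': image},
        receiver_tensors={'image': image})


# On older TF1 versions this method is spelled export_savedmodel().
estimator.export_saved_model('./exported_model', serving_input_receiver_fn)
```

The resulting directory can then be served directly, e.g. via `tensorflow_model_server --model_base_path=/abs/path/to/exported_model`.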
I'm afraid to use the updated repo, because I think that this could give me new errors and other problems.
Learning new things is always painful, but it's worth it.
Yes, of course, but as you know, this eye region detection project has a strict deadline. I'm a PhD student and I have to finish this project by a specific date.
Hello Sir, I have added the TFLite model to Android Studio and I tried to get predictions (eye region) from the picture attached to this issue.

I used ML Kit and the TensorFlow Lite guide, and I got some results shown in a Toast (a message displayed on my phone screen), but I only get 47 values, while logically I should get 80 values (40 for x, 40 for y). Remember that in my logits layer I set the units to 80. I have a TFLite model and a frozen inference graph (.pb). I would like to test an image with my TFLite model and get the x and y coordinates of the landmarks placed on the eye region (my case), so I can compare the result with what I got on the Android phone. Thanks
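A quick way to check whether the truncation to 47 values happens in the model or on the Android side is to inspect the TFLite tensors in Python; `landmark.tflite` is a placeholder path:

```python
import tensorflow as tf

# Inspect the converted model's input/output tensors.
interpreter = tf.lite.Interpreter(model_path='landmark.tflite')
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print('input :', detail['name'], detail['shape'], detail['dtype'])
for detail in interpreter.get_output_details():
    print('output:', detail['name'], detail['shape'], detail['dtype'])
```

If the output shape is `[1, 80]`, the model itself produces all 80 values, and the 47 seen on the phone would point at the Android-side output buffer or at how the Toast string is built (a hypothesis, not a confirmed cause).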