besherh opened 5 years ago
./tools/conversion/export_quant_tflite_model.py
only generates one *.pb file, named "model_original.pb". This is as expected. Please download the model files from the following link: https://drive.google.com/file/d/1Nya13flNGOUiXgPhkH2yToIJTOC_D4y-/view?usp=sharing
Here are the steps to generate the files in the link above:
1. Run this command: ./scripts/run_local.sh ./nets/resnet_at_cifar10_run.py --learner uniform-tf
2. Use the export utility: python ./tools/conversion/export_quant_tflite_model.py --model_dir ./models_uqtf_eval
Finally, run the code that I posted earlier to test the accuracy of model_original.pb (you can find it in the downloaded files [models_uqtf_eval]). The result: the number of correct predictions is 2452 out of 10000.
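For reference, the evaluation described above can be sketched roughly as below. This is a sketch, not PocketFlow's own code: the tensor names 'net_input:0' / 'net_output:0' and the model path are assumptions and must be replaced with the actual names from your graph (TF 1.x API assumed, imported lazily so the NumPy helper stands alone).

```python
import numpy as np

def count_correct(logits, labels):
  """Number of argmax predictions that match the integer class labels."""
  return int(np.sum(np.argmax(np.asarray(logits), axis=1) == np.asarray(labels)))

def evaluate_frozen_graph(pb_path, images, labels,
                          input_name='net_input:0', output_name='net_output:0'):
  """Sketch: load a frozen GraphDef (TF 1.x API) and score one batch.
  The default tensor names are placeholders -- substitute the real ones."""
  import tensorflow as tf  # TF 1.x assumed
  graph_def = tf.GraphDef()
  with tf.gfile.GFile(pb_path, 'rb') as f:
    graph_def.ParseFromString(f.read())
  graph = tf.Graph()
  with graph.as_default():
    tf.import_graph_def(graph_def, name='')
  with tf.Session(graph=graph) as sess:
    x = graph.get_tensor_by_name(input_name)
    y = graph.get_tensor_by_name(output_name)
    logits = sess.run(y, feed_dict={x: images})
  return count_correct(logits, labels)

# usage sketch (path is an assumption):
# n_ok = evaluate_frozen_graph('./models_uqtf_eval/model_original.pb', images, labels)
```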
Thanks for the detailed explanation. We will keep you informed if any potential bug is spotted.
Apart from that, could you verify the following:
1- The models_uqtf_eval folder contains the *.ckpt files for the quantized model after testing, right? I need to test the accuracy of the quantized model using the ckpt files. However, I am not able to, because when I try to load the graph from those files, I cannot find the input/output tensors. I need to know their exact names to be able to run the session with feed_dict.
I tried to list all tensors as follows:
from tensorflow.python.tools.inspect_checkpoint import print_tensors_in_checkpoint_file
import os

model_dir = "./"
checkpoint_path = os.path.join(model_dir, "model.ckpt")
print_tensors_in_checkpoint_file(file_name=checkpoint_path, all_tensors=True, tensor_name='')
The result can be obtained by downloading this file; you will notice that there are no input/output tensors: https://drive.google.com/file/d/1CiH_DP3tLXGrhJ4Yy0OR4hFNBWlBg646/view?usp=sharing
@besherh For your last comment, can you find any tensors in these two collections, defined here? https://github.com/Tencent/PocketFlow/blob/master/learners/uniform_quantization_tf/learner.py#L276
No, I could not find any of them in the list of tensor names that I obtained from the *.ckpt files.
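Note that checkpoint files store only variable values, not the graph structure, so placeholders and output ops never appear in print_tensors_in_checkpoint_file's listing; the input/output names live in the .meta graph next to the checkpoint. A minimal sketch for finding them (TF 1.x API, imported lazily; the .meta path and the keyword heuristic are assumptions, not PocketFlow's confirmed naming):

```python
def guess_io_ops(op_names):
  """Pure-Python heuristic: keep op names that look like graph inputs/outputs.
  The keyword list is a guess, not PocketFlow's documented naming scheme."""
  keys = ('input', 'output', 'placeholder', 'softmax', 'logits')
  return [n for n in op_names if any(k in n.lower() for k in keys)]

def list_ops_from_meta_graph(meta_path):
  """Sketch (TF 1.x API): import the .meta graph saved alongside the
  checkpoint and return every operation name in it."""
  import tensorflow as tf  # TF 1.x assumed
  tf.reset_default_graph()
  tf.train.import_meta_graph(meta_path)
  return [op.name for op in tf.get_default_graph().get_operations()]

# usage sketch (path is an assumption):
# names = list_ops_from_meta_graph('./models_uqtf_eval/model.ckpt.meta')
# print(guess_io_ops(names))
```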
I am facing a similar problem. The validation score is close to zero if I use the frozen graph converted from the ckpt. It is very tricky. Everything works well if I use the checkpoint's meta graph. I guess something goes wrong when freezing the graph or during inference. So far, I am totally lost.
@yuanyuanli85 Even with the checkpoint's meta files (ckpt), accuracy is very low for me!
Maybe some recent updates (or even earlier ones) led to this. We are looking into it now. @besherh @yuanyuanli85
@jiaxiang-wu Thank you for the quick response. For my issue, the root cause comes from my code, not PocketFlow: I did not pass the correct dataset iterator after loading the frozen graph. Sorry for the false alarm. PocketFlow is a wonderful tool!
@yuanyuanli85 Thanks for your response. We are planning to publish a benchmark tool for .pb and .tflite models, to test their classification accuracy and verify whether the model conversion is working as expected. This may be helpful for other users as well.
@jiaxiang-wu 💯 👍
@yuanyuanli85 Could you please share the code that solved your issue? Thanks
@besherh The issue I met is not the same as yours. Please pay attention to the data preprocessing pipeline: check the preprocessing in your code and make sure it matches the parse_fn defined in cifar10_dataset.py (transpose dims, mean subtraction, division by IMAGE_STD, etc.). In general, make sure the data fed into the graph is processed the same way as the data that was fed to train the network.
def parse_fn(example_serialized, is_train):
  # data parsing
  record = tf.decode_raw(example_serialized, tf.uint8)
  label = tf.slice(record, [0], [LABEL_BYTES])
  label = tf.one_hot(tf.reshape(label, []), FLAGS.nb_classes)
  image = tf.slice(record, [LABEL_BYTES], [IMAGE_BYTES])
  image = tf.reshape(image, [IMAGE_CHN, IMAGE_HEI, IMAGE_WID])
  image = tf.cast(tf.transpose(image, [1, 2, 0]), tf.float32)
  image = (image - IMAGE_AVE) / IMAGE_STD
  # data augmentation
  if is_train:
    image = tf.image.resize_image_with_crop_or_pad(image, IMAGE_HEI + 8, IMAGE_WID + 8)
    image = tf.random_crop(image, [IMAGE_HEI, IMAGE_WID, IMAGE_CHN])
    image = tf.image.random_flip_left_right(image)
  return image, label
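To check a feeding pipeline against this parse_fn, the eval-time path (no augmentation) can be mirrored in plain NumPy. The constants below are assumptions: the CIFAR-10 binary layout (1 label byte + 3x32x32 image bytes) matches the parse_fn above, but IMAGE_AVE/IMAGE_STD here are only illustrative, so substitute the values from cifar10_dataset.py.

```python
import numpy as np

# CIFAR-10 record layout assumed by parse_fn above.
LABEL_BYTES, IMAGE_CHN, IMAGE_HEI, IMAGE_WID = 1, 3, 32, 32
IMAGE_BYTES = IMAGE_CHN * IMAGE_HEI * IMAGE_WID
# Illustrative normalization constants -- use the real ones from cifar10_dataset.py.
IMAGE_AVE, IMAGE_STD = 127.5, 255.0

def parse_record_np(record):
  """Mirror of parse_fn's eval path: split label/image bytes, reshape
  CHW -> HWC, normalize. `record` is a 1-D uint8 array of one CIFAR record.
  The label is kept as an int (parse_fn one-hots it) for feed_dict simplicity."""
  label = int(record[0])
  image = record[LABEL_BYTES:LABEL_BYTES + IMAGE_BYTES]
  image = image.reshape(IMAGE_CHN, IMAGE_HEI, IMAGE_WID)   # CHW
  image = image.transpose(1, 2, 0).astype(np.float32)      # HWC, like tf.transpose([1, 2, 0])
  image = (image - IMAGE_AVE) / IMAGE_STD
  return image, label
```

If the images you feed to the frozen graph do not go through exactly this transform (same transpose, same mean/std), the accuracy will collapse even though the graph itself is fine, which is what the comment above is warning about.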
@yuanyuanli85 Thanks for the hint. I will try to work on that.
> Maybe some recent updates (or even earlier) lead to this. We are looking into this now. @besherh @yuanyuanli85
@jiaxiang-wu Could you please update us on this?
I am trying to manually calculate the accuracy of a model trained with the uniform-tf learner. After calling the export utility, a .pb file was generated: python ./tools/conversion/export_quant_tflite_model.py --model_dir ./models_uqtf_eval
I am trying to test the accuracy of model_original.pb.
The result is: number of correct predictions 1063 out of 10000, i.e. accuracy 0.106.
The accuracy is too low! So I decided to run it again with a model from a different learner: I tried channel pruning's "transformed_model.pb", but hit the same issue (accuracy is too low). Why has the accuracy dropped after freezing the graph? Are there any mistakes in my approach?
Another question: after calling python ./tools/conversion/export_quant_tflite_model.py --model_dir ./models_uqtf_eval
why is there no transformed_model.pb? Only model_quantized.tflite is generated.