Tencent / PocketFlow

An Automatic Model Compression (AutoMC) framework for developing smaller and faster AI applications.
https://pocketflow.github.io

TOCO failed for channel pruning, MobileNetV2 at CIFAR-10 #144

Open jiaxiangxu opened 5 years ago

jiaxiangxu commented 5 years ago

Log of running export_chn_pruned_tflite_model.py:

2018-12-12 05:19:28.614521: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2018-12-12 05:19:29.483043: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:02:00.0
totalMemory: 10.92GiB freeMemory: 10.76GiB
2018-12-12 05:19:29.622877: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:65:00.0
totalMemory: 10.91GiB freeMemory: 10.76GiB
2018-12-12 05:19:29.623812: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2018-12-12 05:19:30.190501: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-12 05:19:30.190562: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1
2018-12-12 05:19:30.190571: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N Y
2018-12-12 05:19:30.190577: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: Y N
2018-12-12 05:19:30.190890: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10407 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0, compute capability: 6.1)
2018-12-12 05:19:30.191239: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10405 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:65:00.0, compute capability: 6.1)
2018-12-12 05:19:30.400938: W tensorflow/core/graph/graph_constructor.cc:1265] Importing a graph with a lower producer version 26 into an existing graph with producer version 27. Shape inference will have run different parts of the graph with different producer versions.
INFO:tensorflow:Restoring parameters from models/best_model.ckpt
INFO:tensorflow:data format: NHWC
INFO:tensorflow:input: net_input:0 / output: net_output:0
INFO:tensorflow:input's shape: (?, 32, 32, 3)
INFO:tensorflow:output's shape: (128, 10)
2018-12-12 05:19:31.221600: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2018-12-12 05:19:31.221711: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-12 05:19:31.221721: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1
2018-12-12 05:19:31.221728: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N Y
2018-12-12 05:19:31.221734: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: Y N
2018-12-12 05:19:31.221946: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10407 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0, compute capability: 6.1)
2018-12-12 05:19:31.222163: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10405 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:65:00.0, compute capability: 6.1)
INFO:tensorflow:Restoring parameters from models/best_model.ckpt
INFO:tensorflow:Froze 158 variables.
INFO:tensorflow:Converted 158 variables to const ops.
INFO:tensorflow:models/model_original.pb generated
2018-12-12 05:19:31.621374: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2018-12-12 05:19:31.621474: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-12 05:19:31.621484: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1
2018-12-12 05:19:31.621491: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N Y
2018-12-12 05:19:31.621502: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: Y N
2018-12-12 05:19:31.621721: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10407 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0, compute capability: 6.1)
2018-12-12 05:19:31.621927: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10405 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:65:00.0, compute capability: 6.1)
INFO:tensorflow:input: import/net_input:0 / output: import/net_output:0
INFO:tensorflow:outputs from the .pb model:
[[0.08655829 0.07333893 0.12068225 ... 0.10470694 0.04381043 0.07987908]
 [0.07518315 0.061992 0.1280444 ... 0.10132682 0.04146371 0.07332663]
 [0.07436378 0.06090009 0.1210477 ... 0.10350394 0.03192139 0.07476252]
 ...
 [0.07662182 0.0614543 0.12784666 ... 0.09769606 0.03937666 0.07405204]
 [0.0794882 0.06774912 0.11428031 ... 0.107343 0.04015093 0.07813986]
 [0.09292391 0.07135639 0.12207921 ... 0.10794424 0.04035655 0.07984835]]
INFO:tensorflow:time consumption of .pb model: 6.60 ms
INFO:tensorflow:models/model_original.pb -> models/model_original.tflite
WARNING:tensorflow:From tools/conversion/export_chn_pruned_tflite_model.py:107: TocoConverter.from_frozen_graph (from tensorflow.contrib.lite.python.lite) is deprecated and will be removed in a future version.
Instructions for updating:
Use lite.TFLiteConverter.from_frozen_graph instead.
2018-12-12 05:19:33.783311: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2018-12-12 05:19:33.783492: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-12 05:19:33.783521: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1
2018-12-12 05:19:33.783537: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N Y
2018-12-12 05:19:33.783552: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: Y N
2018-12-12 05:19:33.783995: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 10407 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0, compute capability: 6.1)
2018-12-12 05:19:33.784297: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10405 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:65:00.0, compute capability: 6.1)
Traceback (most recent call last):
  File "tools/conversion/export_chn_pruned_tflite_model.py", line 386, in <module>
    tf.app.run()
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/python/platform/app.py", line 125, in run
    _sys.exit(main(argv))
  File "tools/conversion/export_chn_pruned_tflite_model.py", line 372, in main
    export_pb_tflite_model(net, file_path_meta, file_path_pb, file_path_tflite, edit_graph=False)
  File "tools/conversion/export_chn_pruned_tflite_model.py", line 341, in export_pb_tflite_model
    convert_pb_model_to_tflite(file_path_pb, file_path_tflite, net['input_name'], net['output_name'])
  File "tools/conversion/export_chn_pruned_tflite_model.py", line 108, in convert_pb_model_to_tflite
    tflite_model = converter.convert()
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/lite/python/lite.py", line 453, in convert
    **converter_kwargs)
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 342, in toco_convert_impl
    input_data.SerializeToString())
  File "/root/anaconda3/lib/python3.6/site-packages/tensorflow/contrib/lite/python/convert.py", line 135, in toco_convert_protos
    (stdout, stderr))
RuntimeError: TOCO failed see console for info.
2018-12-12 05:19:38.212845: I tensorflow/core/platform/cpu_feature_guard.cc:141] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 AVX512F FMA
2018-12-12 05:19:38.336372: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 0 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:02:00.0
totalMemory: 10.92GiB freeMemory: 270.50MiB
2018-12-12 05:19:38.445518: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1432] Found device 1 with properties:
name: GeForce GTX 1080 Ti major: 6 minor: 1 memoryClockRate(GHz): 1.683
pciBusID: 0000:65:00.0
totalMemory: 10.91GiB freeMemory: 10.55GiB
2018-12-12 05:19:38.445649: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1511] Adding visible gpu devices: 0, 1
2018-12-12 05:19:38.923289: I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-12-12 05:19:38.923339: I tensorflow/core/common_runtime/gpu/gpu_device.cc:988] 0 1
2018-12-12 05:19:38.923345: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 0: N Y
2018-12-12 05:19:38.923350: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1001] 1: Y N
2018-12-12 05:19:38.923586: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 206 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1080 Ti, pci bus id: 0000:02:00.0, compute capability: 6.1)
2018-12-12 05:19:38.923903: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:1 with 10203 MB memory) -> physical GPU (device: 1, name: GeForce GTX 1080 Ti, pci bus id: 0000:65:00.0, compute capability: 6.1)
2018-12-12 05:19:38.951782: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before Removing unused ops: 616 operators, 935 arrays (0 quantized)
2018-12-12 05:19:38.966621: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before general graph transformations: 616 operators, 935 arrays (0 quantized)
2018-12-12 05:19:38.980659: W tensorflow/contrib/lite/toco/graph_transformations/resolve_constant_random_uniform.cc:85] RandomUniform op outputting "model/MobilenetV2/Logits/Dropout/dropout_1/random_uniform/RandomUniform" is truly random (using /dev/random system entropy). Therefore, cannot resolve as constant. Set "seed" or "seed2" attr non-zero to fix this
2018-12-12 05:19:38.981410: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] After general graph transformations pass 1: 71 operators, 181 arrays (0 quantized)
2018-12-12 05:19:38.981632: W tensorflow/contrib/lite/toco/graph_transformations/resolve_constant_random_uniform.cc:85] RandomUniform op outputting "model/MobilenetV2/Logits/Dropout/dropout_1/random_uniform/RandomUniform" is truly random (using /dev/random system entropy). Therefore, cannot resolve as constant. Set "seed" or "seed2" attr non-zero to fix this
2018-12-12 05:19:38.982325: I tensorflow/contrib/lite/toco/graph_transformations/graph_transformations.cc:39] Before dequantization graph transformations: 71 operators, 181 arrays (0 quantized)
2018-12-12 05:19:38.984033: I tensorflow/contrib/lite/toco/allocate_transient_arrays.cc:345] Total transient array allocated size: 1322240 bytes, theoretical optimal value: 1315840 bytes.
2018-12-12 05:19:38.984225: I tensorflow/contrib/lite/toco/toco_tooling.cc:397] Estimated count of arithmetic ops: 0.0160539 billion (note that a multiply-add is counted as 2 ops).
2018-12-12 05:19:38.984420: F tensorflow/contrib/lite/toco/tflite/export.cc:386] Some of the operators in the model are not supported by the standard TensorFlow Lite runtime. If you have a custom implementation for them you can disable this error with --allow_custom_ops, or by setting allow_custom_ops=True when calling tf.contrib.lite.TFLiteConverter(). Here is a list of operators for which you will need custom implementations: RandomUniform.
Aborted (core dumped)
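The fatal TOCO error above is the RandomUniform op left behind by the training-mode Dropout in the MobileNet-v2 logits head (the node named in the warnings). The error text itself points to allow_custom_ops as a way to get past the export check. A minimal sketch of that route (not the PocketFlow script itself; paths and tensor names are taken from the log and may need adjusting):

```python
# Hedged sketch: redo the conversion that fails in convert_pb_model_to_tflite,
# but allow custom ops so TOCO does not abort on the RandomUniform node.
# Uses the TF 1.12-era contrib.lite API shown in the log above.
import tensorflow as tf

converter = tf.contrib.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file='models/model_original.pb',  # frozen .pb produced by the export script
    input_arrays=['net_input'],                 # input tensor name from the log, without ':0'
    output_arrays=['net_output'])               # output tensor name from the log, without ':0'
converter.allow_custom_ops = True  # only bypasses the unsupported-op check at export time
tflite_model = converter.convert()
with open('models/model_original.tflite', 'wb') as f:
    f.write(tflite_model)
```

Note that allow_custom_ops only silences the export-time check; the resulting .tflite still contains a RandomUniform op that the stock TF-Lite interpreter cannot execute, so exporting a graph without the training-mode Dropout is the more robust fix.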

jiaxiang-wu commented 5 years ago
  1. MobileNet-v2 at CIFAR-10? We do not have such a (model, dataset) combination. Is it your self-defined model?
  2. Which model compression learner are you using, ChannelPrunedLearner?
  3. Which checkpoint files are you using, from the training graph or the evaluation graph?
jiaxiangxu commented 5 years ago
> 1. MobileNet-v2 at CIFAR-10? We do not have such a (model, dataset) combination. Is it your self-defined model?
> 2. Which model compression learner are you using, ChannelPrunedLearner?
> 3. Which checkpoint files are you using, from the training graph or the evaluation graph?
  1. No, it's not a self-defined model. I just created mobilenet_at_cifar10_run.py and mobilenet_at_cifar10.py to leverage the built-in mobilenet and cifar10 code.

  2. Yes, ChannelPrunedLearner. Parameter log:

Python script: ./nets/mobilenet_at_cifar10_run.py
# of GPUs: 1
extra arguments: --mobilenet_version 2 --mobilenet_depth_mult 1.0 --learner channel --batch_size_eval 128 --cp_prune_option uniform --cp_uniform_preserve_ratio 0.5 --model_http_url https://api.ai.tencent.com/pocketflow --data_dir_local /workspace/dataset/cifar10_data/cifar-10-batches-bin
'./nets/mobilenet_at_cifar10_run.py' -> 'main.py'
multi-GPU training disabled
INFO:tensorflow:FLAGS:
INFO:tensorflow:data_disk: local
INFO:tensorflow:data_hdfs_host: None
INFO:tensorflow:data_dir_local: /workspace/dataset/cifar10_data/cifar-10-batches-bin
INFO:tensorflow:data_dir_hdfs: None
INFO:tensorflow:cycle_length: 4
INFO:tensorflow:nb_threads: 8
INFO:tensorflow:buffer_size: 1024
INFO:tensorflow:prefetch_size: 8
INFO:tensorflow:nb_classes: 10
INFO:tensorflow:nb_smpls_train: 50000
INFO:tensorflow:nb_smpls_val: 5000
INFO:tensorflow:nb_smpls_eval: 10000
INFO:tensorflow:batch_size: 128
INFO:tensorflow:batch_size_eval: 128
INFO:tensorflow:mobilenet_version: 2
INFO:tensorflow:mobilenet_depth_mult: 1.0
INFO:tensorflow:nb_epochs_rat: 1.0
INFO:tensorflow:lrn_rate_init: 0.045
INFO:tensorflow:batch_size_norm: 96.0
INFO:tensorflow:momentum: 0.9
INFO:tensorflow:loss_w_dcy: 4e-05
INFO:tensorflow:model_http_url: https://api.ai.tencent.com/pocketflow
INFO:tensorflow:summ_step: 100
INFO:tensorflow:save_step: 10000
INFO:tensorflow:save_path: ./models/model.ckpt
INFO:tensorflow:save_path_eval: ./models_eval/model.ckpt
INFO:tensorflow:enbl_dst: False
INFO:tensorflow:enbl_warm_start: False
INFO:tensorflow:loss_w_dst: 4.0
INFO:tensorflow:tempr_dst: 4.0
INFO:tensorflow:save_path_dst: ./models_dst/model.ckpt
INFO:tensorflow:ddpg_actor_depth: 2
INFO:tensorflow:ddpg_actor_width: 64
INFO:tensorflow:ddpg_critic_depth: 2
INFO:tensorflow:ddpg_critic_width: 64
INFO:tensorflow:ddpg_noise_type: param
INFO:tensorflow:ddpg_noise_prtl: tdecy
INFO:tensorflow:ddpg_noise_std_init: 1.0
INFO:tensorflow:ddpg_noise_dst_finl: 0.01
INFO:tensorflow:ddpg_noise_adpt_rat: 1.03
INFO:tensorflow:ddpg_noise_std_finl: 1e-05
INFO:tensorflow:ddpg_rms_eps: 0.0001
INFO:tensorflow:ddpg_tau: 0.01
INFO:tensorflow:ddpg_gamma: 0.9
INFO:tensorflow:ddpg_lrn_rate: 0.001
INFO:tensorflow:ddpg_loss_w_dcy: 0.0
INFO:tensorflow:ddpg_record_step: 1
INFO:tensorflow:ddpg_batch_size: 64
INFO:tensorflow:ddpg_enbl_bsln_func: True
INFO:tensorflow:ddpg_bsln_decy_rate: 0.95
INFO:tensorflow:ws_save_path: ./models_ws/model.ckpt
INFO:tensorflow:ws_prune_ratio: 0.75
INFO:tensorflow:ws_prune_ratio_prtl: optimal
INFO:tensorflow:ws_nb_rlouts: 200
INFO:tensorflow:ws_nb_rlouts_min: 50
INFO:tensorflow:ws_reward_type: single-obj
INFO:tensorflow:ws_lrn_rate_rg: 0.03
INFO:tensorflow:ws_nb_iters_rg: 20
INFO:tensorflow:ws_lrn_rate_ft: 0.0003
INFO:tensorflow:ws_nb_iters_ft: 400
INFO:tensorflow:ws_nb_iters_feval: 25
INFO:tensorflow:ws_prune_ratio_exp: 3.0
INFO:tensorflow:ws_iter_ratio_beg: 0.1
INFO:tensorflow:ws_iter_ratio_end: 0.5
INFO:tensorflow:ws_mask_update_step: 500.0
INFO:tensorflow:cp_lasso: True
INFO:tensorflow:cp_quadruple: False
INFO:tensorflow:cp_reward_policy: accuracy
INFO:tensorflow:cp_nb_points_per_layer: 10
INFO:tensorflow:cp_nb_batches: 30
INFO:tensorflow:cp_prune_option: uniform
INFO:tensorflow:cp_prune_list_file: ratio.list
INFO:tensorflow:cp_channel_pruned_path: ./models/pruned_model.ckpt
INFO:tensorflow:cp_best_path: ./models/best_model.ckpt
INFO:tensorflow:cp_original_path: ./models/original_model.ckpt
INFO:tensorflow:cp_preserve_ratio: 0.5
INFO:tensorflow:cp_uniform_preserve_ratio: 0.5
INFO:tensorflow:cp_noise_tolerance: 0.15
INFO:tensorflow:cp_lrn_rate_ft: 0.0001
INFO:tensorflow:cp_nb_iters_ft_ratio: 0.2
INFO:tensorflow:cp_finetune: False
INFO:tensorflow:cp_retrain: False
INFO:tensorflow:cp_list_group: 1000
INFO:tensorflow:cp_nb_rlouts: 200
INFO:tensorflow:cp_nb_rlouts_min: 50
INFO:tensorflow:cpg_save_path: ./models_cpg/model.ckpt
INFO:tensorflow:cpg_save_path_eval: ./models_cpg_eval/model.ckpt
INFO:tensorflow:cpg_prune_ratio_type: uniform
INFO:tensorflow:cpg_prune_ratio: 0.5
INFO:tensorflow:cpg_skip_ht_layers: True
INFO:tensorflow:cpg_prune_ratio_file: None
INFO:tensorflow:cpg_lrn_rate_pgd_init: 1e-10
INFO:tensorflow:cpg_lrn_rate_pgd_incr: 1.4
INFO:tensorflow:cpg_lrn_rate_pgd_decr: 0.7
INFO:tensorflow:cpg_lrn_rate_adam: 0.01
INFO:tensorflow:cpg_nb_iters_layer: 1000
INFO:tensorflow:dcp_save_path: ./models_dcp/model.ckpt
INFO:tensorflow:dcp_save_path_eval: ./models_dcp_eval/model.ckpt
INFO:tensorflow:dcp_prune_ratio: 0.5
INFO:tensorflow:dcp_nb_stages: 3
INFO:tensorflow:dcp_lrn_rate_adam: 0.001
INFO:tensorflow:dcp_nb_iters_block: 10000
INFO:tensorflow:dcp_nb_iters_layer: 500
INFO:tensorflow:uql_equivalent_bits: 4
INFO:tensorflow:uql_nb_rlouts: 200
INFO:tensorflow:uql_w_bit_min: 2
INFO:tensorflow:uql_w_bit_max: 8
INFO:tensorflow:uql_tune_layerwise_steps: 100
INFO:tensorflow:uql_tune_global_steps: 2000
INFO:tensorflow:uql_tune_save_path: ./rl_tune_models/model.ckpt
INFO:tensorflow:uql_tune_disp_steps: 300
INFO:tensorflow:uql_enbl_random_layers: True
INFO:tensorflow:uql_enbl_rl_agent: False
INFO:tensorflow:uql_enbl_rl_global_tune: True
INFO:tensorflow:uql_enbl_rl_layerwise_tune: False
INFO:tensorflow:uql_weight_bits: 4
INFO:tensorflow:uql_activation_bits: 32
INFO:tensorflow:uql_use_buckets: False
INFO:tensorflow:uql_bucket_size: 256
INFO:tensorflow:uql_quant_epochs: 60
INFO:tensorflow:uql_save_quant_model_path: ./uql_quant_models/uql_quant_model.ckpt
INFO:tensorflow:uql_quantize_all_layers: False
INFO:tensorflow:uql_bucket_type: channel
INFO:tensorflow:uqtf_save_path: ./models_uqtf/model.ckpt
INFO:tensorflow:uqtf_save_path_eval: ./models_uqtf_eval/model.ckpt
INFO:tensorflow:uqtf_weight_bits: 8
INFO:tensorflow:uqtf_activation_bits: 8
INFO:tensorflow:uqtf_quant_delay: 0
INFO:tensorflow:uqtf_freeze_bn_delay: None
INFO:tensorflow:uqtf_lrn_rate_dcy: 0.01
INFO:tensorflow:nuql_equivalent_bits: 4
INFO:tensorflow:nuql_nb_rlouts: 200
INFO:tensorflow:nuql_w_bit_min: 2
INFO:tensorflow:nuql_w_bit_max: 8
INFO:tensorflow:nuql_tune_layerwise_steps: 100
INFO:tensorflow:nuql_tune_global_steps: 2101
INFO:tensorflow:nuql_tune_save_path: ./rl_tune_models/model.ckpt
INFO:tensorflow:nuql_tune_disp_steps: 300
INFO:tensorflow:nuql_enbl_random_layers: True
INFO:tensorflow:nuql_enbl_rl_agent: False
INFO:tensorflow:nuql_enbl_rl_global_tune: True
INFO:tensorflow:nuql_enbl_rl_layerwise_tune: False
INFO:tensorflow:nuql_init_style: quantile
INFO:tensorflow:nuql_opt_mode: weights
INFO:tensorflow:nuql_weight_bits: 4
INFO:tensorflow:nuql_activation_bits: 32
INFO:tensorflow:nuql_use_buckets: False
INFO:tensorflow:nuql_bucket_size: 256
INFO:tensorflow:nuql_quant_epochs: 60
INFO:tensorflow:nuql_save_quant_model_path: ./nuql_quant_models/model.ckpt
INFO:tensorflow:nuql_quantize_all_layers: False
INFO:tensorflow:nuql_bucket_type: split
INFO:tensorflow:log_dir: ./logs
INFO:tensorflow:enbl_multi_gpu: False
INFO:tensorflow:learner: channel
INFO:tensorflow:exec_mode: train
INFO:tensorflow:debug: False
INFO:tensorflow:h: False
INFO:tensorflow:help: False
INFO:tensorflow:helpfull: False
INFO:tensorflow:helpshort: False

  3. models/best_model.ckpt; it should be from the evaluation graph, right?
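As question 3 above hints, the RandomUniform node usually means the exported graph still contains a training-mode Dropout; a graph built for evaluation would normally fold Dropout away. One quick way to see which case applies is to scan the frozen models/model_original.pb (produced in the log above) for ops that TOCO rejects. The helper below is a hypothetical sketch, not part of PocketFlow:

```python
# Hypothetical helper: list nodes in a frozen GraphDef whose op type TOCO
# rejects, e.g. RandomUniform left behind by a training-mode Dropout layer.
# Written against the TF 1.x API used elsewhere in this thread.
import tensorflow as tf

def find_ops(pb_path, op_types=('RandomUniform',)):
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(pb_path, 'rb') as f:
        graph_def.ParseFromString(f.read())
    return [node.name for node in graph_def.node if node.op in op_types]

print(find_ops('models/model_original.pb'))
```

If this prints the Dropout's RandomUniform node (as in the log), the exported graph still carries training-only ops, and re-exporting from the evaluation graph (or stripping the Dropout before freezing) should let TOCO finish.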

jiaxiang-wu commented 5 years ago

Can you post all files starting with "best_model" under the "models" directory, so that we can reproduce your issue?

jiaxiangxu commented 5 years ago

> Can you post all files starting with "best_model" under the "models" directory, so that we can reproduce your issue?

best_model.ckpt.data-00000-of-00001 best_model.ckpt.index best_model.ckpt.meta

jiaxiang-wu commented 5 years ago

@jiaxiangxu Sorry for the confusion. I mean the actual files, not just file names.

jiaxiang-wu commented 5 years ago

@jiaxiangxu Can you provide the actual model files, instead of just file names?

jiaxiangxu commented 5 years ago

> @jiaxiangxu Can you provide the actual model files, instead of just file names?

best_models.zip

jiaxiang-wu commented 5 years ago

Got it. Let us try to reproduce your issue.

smalltingting commented 5 years ago

Have you solved this problem? I have the same problem as you.

ZhanPython commented 5 years ago

Hi, have you solved this issue? I have a similar problem.

mldlli commented 5 years ago

I have the same problem. When I use the Inception-v3 model, the checkpoint files are converted to .pb successfully, but the conversion from .pb to .tflite fails.