I have successfully trained a model and want to convert the last checkpoint (the .meta and .index outputs) to a frozen model (a .pb file):
    import tensorflow as tf
    from tensorflow.python.framework import graph_io

    with tf.Session(config=tf.ConfigProto(allow_soft_placement=True,
                                          log_device_placement=True)) as sess:
        # Restore the graph
        saver = tf.train.import_meta_graph(meta_path)
        sess.run(tf.global_variables_initializer())
        # Load the weights
        saver.restore(sess, tf.train.latest_checkpoint('/home/master/small_dt'))
        # Use every node in the graph as an output node
        output_node_names = [n.name for n in tf.get_default_graph().as_graph_def().node]
        frozen_graph_def = tf.graph_util.convert_variables_to_constants(
            sess,
            sess.graph_def,
            output_node_names)
        # Save the frozen graph
        with open('output_graph.pb', 'wb') as f:
            f.write(frozen_graph_def.SerializeToString())
        graph_io.write_graph(frozen_graph_def, './', 'inference_graph.pb', as_text=False)
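For context, I suspect the problem may be that I pass every node in the graph as an output node, which drags in ops (variable initializers, saver ops) that the freezer cannot resolve. A minimal sketch of what I mean, with hypothetical node names (the real graph's names differ):

    # Hypothetical node names from a graph dump; only "logits" is a real
    # inference output, the rest are bookkeeping ops.
    node_names = [
        "input",
        "dense/kernel",
        "dense/kernel/Assign",   # variable-initialization op
        "init",                  # global initializer op
        "save/restore_all",      # saver op
        "logits",
    ]
    # Restrict the list to the actual inference output instead of all nodes.
    output_node_names = [n for n in node_names if n == "logits"]
    print(output_node_names)  # ['logits']

I am not sure whether "count_warning" falls into that bookkeeping category, which is part of my confusion.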
Every time I run this code, there seems to be a problem when restoring the model: the variable "count_warning", which seems to be an output node, is reported as not initialized.
How can I fix this issue?