google / prettytensor

Pretty Tensor: Fluent Networks in TensorFlow

loading from checkpoint #51

Open AdrianLsk opened 7 years ago

AdrianLsk commented 7 years ago

Hi, can you please clarify how to use the saved model files if I want to load the model from a checkpoint?

Here is my code:

import numpy as np
import prettytensor as pt
import tensorflow as tf

# mock input
mock_input = np.ones(input_shape)

# build model
accuracy, cost, inference_input, label_tensor, inferences, train_op = \
    build_network(input_shape, learning_rate, specs, pt.Phase.test)

# set gpu and config options
gpu_options = \
    tf.GPUOptions(allow_growth=True, per_process_gpu_memory_fraction=.9)
config = \
    tf.ConfigProto(allow_soft_placement=True, gpu_options=gpu_options,
                   log_device_placement=True)

# load from checkpoint
model_ckpt = './models/first/-3036.data-00000-of-00001'
runner = pt.train.Runner(initial_checkpoint=model_ckpt)

with tf.Session(config=config), tf.device('/gpu:2'):
    predictions = runner.run_model(
        op_list=[inferences], num_steps=1,
        feed_vars=(inference_input,), print_every=0,
        feed_data=[(mock_input,)])

Inspecting the three saved checkpoint files:
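A minimal sketch of how the checkpoint contents can be listed, assuming the checkpoint prefix is './models/first/-3036' (i.e. the path without the .data-00000-of-00001 suffix):

import tensorflow as tf

# Assumed checkpoint prefix: the saved path minus the ".data-00000-of-00001" part.
ckpt_prefix = './models/first/-3036'

# Print every variable stored in the checkpoint together with its shape.
reader = tf.train.NewCheckpointReader(ckpt_prefix)
for name, shape in sorted(reader.get_variable_to_shape_map().items()):
    print(name, shape)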

I thought that all variables were saved during training. What am I doing wrong?
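One guess: initial_checkpoint may need to be the checkpoint prefix (or the path returned by tf.train.latest_checkpoint) rather than the .data-00000-of-00001 shard itself. A minimal sketch under that assumption:

import prettytensor as pt
import tensorflow as tf

# tf.train.latest_checkpoint returns the prefix of the most recent checkpoint
# in the directory, e.g. './models/first/-3036' (no ".data-..." suffix).
model_ckpt = tf.train.latest_checkpoint('./models/first')
runner = pt.train.Runner(initial_checkpoint=model_ckpt)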