tensorflow / lucid

A collection of infrastructure and tools for research in neural network interpretability.
Apache License 2.0

model.save() - Froze 0 variables. Converted 0 variables to const ops. #215

Closed camoconnell closed 4 years ago

camoconnell commented 4 years ago

Hi there,

I'm working on saving a model out of the style transfer example on seedbank.

I have managed to generate a saved model by wrapping the example in a session with the following code:

init_g = tf.global_variables_initializer()   
with tf.Session() as sess:

  sess.run(init_g)

  param_f = lambda: style_transfer_param(content_image, style_image)

  content_obj = 100 * activation_difference(content_layers, difference_to=CONTENT_INDEX)
  content_obj.description = "Content Loss"

  style_obj = activation_difference(style_layers, transform_f=gram_matrix, difference_to=STYLE_INDEX)
  style_obj.description = "Style Loss" 

  objective = - content_obj - style_obj 

  vis = render.render_vis(model, objective, param_f=param_f, thresholds=[40], verbose=False, print_objectives=[content_obj, style_obj])[-1]

  model.save(
    "saved_model.pb",
    input_name='Variable',
    image_shape=[224, 224, 3],
    output_names=['random_crop_1/Assert/Assert'],
    image_value_range=[-117, 138],
  )

However, judging by the logs below, it appears to be exporting an empty saved_model.pb:

INFO:tensorflow:Froze 0 variables.
INFO:tensorflow:Converted 0 variables to const ops.
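For context on what that log means, here is a minimal sketch of what "freezing" does, using the tf.compat.v1 API (the names and the tiny graph here are illustrative, not from the snippet above). convert_variables_to_constants replaces each Variable node with a Const holding its current value, so "Froze 0 variables" means the graph handed to model.save() contained no Variable nodes at all — consistent with saving the imported, already-frozen model graph rather than the freshly constructed one:

```python
import tensorflow as tf

tf.compat.v1.disable_eager_execution()
tf.compat.v1.disable_resource_variables()  # use classic VariableV2 nodes

graph = tf.Graph()
with graph.as_default():
    # A toy graph with one variable: out = w * 2
    w = tf.compat.v1.get_variable("w", initializer=tf.constant(3.0))
    out = tf.identity(w * 2.0, name="out")
    with tf.compat.v1.Session() as sess:
        sess.run(tf.compat.v1.global_variables_initializer())
        # Logs "Froze 1 variables." and bakes w's value into the GraphDef
        frozen = tf.compat.v1.graph_util.convert_variables_to_constants(
            sess, graph.as_graph_def(), ["out"])

# After freezing there are no Variable ops left, only Consts
print(sorted({n.op for n in frozen.node}))
```

If the same call reports "Froze 0 variables", the graph being exported simply had no variables to begin with.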

Should I be calling sess.run() inside the gram_matrix or activation_difference functions to populate the graph?

thanks

colah commented 4 years ago

Hello!

I'm not sure that I'm following the goals of this.

Normally, you'd have one script construct and export your model, then import it in another script for visualization.
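As a rough sketch of that two-script separation, using the plain TF graph API rather than lucid (the file name "my_model.pb" and the toy graph are illustrative): the export script serializes a GraphDef, and the import script re-loads it with none of the construction code present:

```python
import tensorflow as tf

# --- script 1: construct and export the model graph ---
graph = tf.Graph()
with graph.as_default():
    x = tf.constant([1.0, 2.0], name="input")
    y = tf.multiply(x, 2.0, name="output")
tf.io.write_graph(graph.as_graph_def(), ".", "my_model.pb", as_text=False)

# --- script 2: import the exported graph for visualization ---
graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile("my_model.pb", "rb") as f:
    graph_def.ParseFromString(f.read())

imported = tf.Graph()
with imported.as_default():
    tf.import_graph_def(graph_def, name="")
```

The imported graph exposes the exported ops by name, which is what lucid's Model metadata (input_name, output_names) refers to.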

merri-ment commented 4 years ago

Hi Colah,

Apologies for the naive questions; I'm still getting familiar (conceptually) with how Style Transfer works.

The goal is to generate a saved_model.pb from the Style Transfer example and convert it via tensorflowjs_converter to the JSON web format, so I can run it in the browser.
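For reference, the conversion step would look something like the invocation below (assuming the tensorflowjs pip package is installed, that the export really is a TF1 frozen GraphDef, and that the output node name is known — 'output' here is a placeholder, and tf_frozen_model support varies by converter version):

```shell
# Convert a frozen TF1 GraphDef to the TF.js graph-model format
tensorflowjs_converter \
    --input_format=tf_frozen_model \
    --output_node_names='output' \
    saved_model.pb \
    ./web_model
```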

To answer your first two points: in the example we create an instance of the InceptionV1 model and retrain it. Having read the "Importing Models into Lucid" documentation on GitHub, I had assumed that the code to retrain (i.e. construct the model inference graph) was, as per the snippet above:

param_f = lambda: style_transfer_param(content_image, style_image)
...
vis = render.render_vis(model, objective, param_f=param_f, thresholds=[40], verbose=False, print_objectives=[content_obj, style_obj])[-1]

Hence I went about wrapping the code in a session and calling model.save() after render.render_vis().

Edit: sorry, just realised I'm logged in with my other account — this is camoconnell.

colah commented 4 years ago

You might want to look at Stefan Sietzen and Manuela Waldner's interactive work on Feature Visualization in the Browser.

Unfortunately, exporting a visualization graph like this isn't a supported use of lucid, and you're likely to run into a number of issues.

I would recommend using tfjs to run your model, and implementing style transfer using Stefan's code or from scratch in JavaScript.

camoconnell commented 4 years ago

Thanks for taking the time to reply,

OK, makes sense. The 'Feature Visualization in the Browser' link looks incredible.

I'll be in touch if I get an example running in tfjs using a Lucid-trained model.