jmgilmer opened this issue 6 years ago
Thanks for your thoughtful review! We’re glad you found it interesting. :) We think all of your feedback is absolutely correct, and have made a number of changes to address it.
One way to get a deeper understanding of the inner workings of a network is to optimize convex combinations of neurons. This was not completely clear to me and the reference that the authors give here just points to a wikipedia article about what a convex combination means. Perhaps a better reference here might be to a network visualization paper. … Also the first section is a little confusing (maybe because I am not familiar with convex combination feature visualization)
Agreed! This section assumed a lot of prior knowledge about visualizing neuron interaction. We rewrote the section to address this in PR #64. We also explicitly link to the much longer discussion of these ideas in Feature Visualization, as you suggest. We also no longer discuss things in terms of convex combinations.
The transition from the introduction to the first section is a little abrupt.
Agreed! We extended the introduction with a section discussing how different parameterizations affect the results (PR #53), and also rewrote the first section to flow better and assume less prior knowledge (PR #64).
"adjustign" should be "adjusting"
Fixed!
The following peer review was solicited as part of the Distill review process. The review was formatted by the editor to help with readability.
The reviewer chose to keep anonymity. Distill offers reviewers a choice between anonymous review and offering reviews under their name. Non-anonymous review allows reviewers to get credit for the service they offer to the community.
Distill is grateful to the reviewer for taking the time to write the review.
This article presents an interesting perspective on differentiable image parametrizations and their underappreciated usefulness for neural network visualization and neural art. Several neural art generators, like DeepDream, use the fact that the neural network is a differentiable function of the image, and therefore one can backpropagate into pixel space to maximize some desired objective. However, the authors point out that an RGB description is not the only way to parametrize an image, and that using alternative parametrizations can often have unexpected benefits!
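For concreteness, here is a minimal sketch of that standard pixel-space setup in PyTorch. The model choice, the sigmoid squashing, and class index 42 are illustrative assumptions, not details from the article:

```python
import torch
from torchvision import models

# Pretrained classifier; requires a recent torchvision.
model = models.googlenet(weights="DEFAULT").eval()

# Parameterize the image directly as an RGB pixel tensor.
pixels = torch.randn(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.Adam([pixels], lr=0.05)

for _ in range(256):
    optimizer.zero_grad()
    image = torch.sigmoid(pixels)    # keep pixel values in [0, 1]
    objective = model(image)[0, 42]  # activation we want to maximize (hypothetical class)
    (-objective).backward()          # negate the loss to perform gradient ascent
    optimizer.step()
```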
The first area that the authors study that can benefit from alternative parametrization is neural network visualization. One way to get a deeper understanding of the inner workings of a network is to optimize convex combinations of neurons. This was not completely clear to me, and the reference that the authors give here just points to a Wikipedia article about what a convex combination means; perhaps a better reference here might be to a network visualization paper. The authors point out that using a shared parametrization between different frames can help the features remain aligned between interpolations, thereby helping visualization.
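A hedged sketch of what such a shared parameterization might look like, assuming a simple additive split between one shared tensor and per-frame tensors (the article's actual construction may differ):

```python
import torch

n_frames, size = 5, 128
# One component shared by all frames, plus one unique component per frame.
shared = torch.randn(1, 3, size, size, requires_grad=True)
unique = torch.randn(n_frames, 3, size, size, requires_grad=True)

def frames():
    # Every frame sees the same `shared` tensor, so gradients from all
    # per-frame objectives accumulate into it, encouraging visual features
    # to stay spatially aligned across the interpolation.
    return torch.sigmoid(shared + unique)

optimizer = torch.optim.Adam([shared, unique], lr=0.05)
```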
The other area benefiting from alternate parametrizations is neural art. The authors mention an interesting observation that I was not aware of: style transfer works well mostly with the VGG network, even though other classification networks perform at par or better than VGG on classification. The authors then show that by parametrizing the image in a Fourier basis, one can get similar style-transfer results using GoogLeNet. This is a very interesting result and one that I would love to see analyzed more (perhaps in future work); a sketch of this kind of parametrization follows below.
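A minimal sketch of a Fourier-basis image parameterization in that spirit; the frequency scaling below is an illustrative choice, not necessarily the article's exact formula:

```python
import torch

h = w = 224
# Learnable spectrum: 3 channels, real/imaginary parts in the last dim.
spectrum = torch.randn(3, h, w // 2 + 1, 2, requires_grad=True)

def image_from_spectrum():
    # Scale each frequency so energy is spread more evenly across the
    # spectrum; without this, optimization concentrates on high frequencies.
    fy = torch.fft.fftfreq(h)[:, None]
    fx = torch.fft.rfftfreq(w)[None, :]
    scale = 1.0 / torch.sqrt(fx ** 2 + fy ** 2).clamp(min=1.0 / max(h, w))
    complex_spec = torch.view_as_complex(spectrum.contiguous()) * scale
    # Inverse FFT maps the learnable spectrum back to an RGB image.
    return torch.sigmoid(torch.fft.irfft2(complex_spec, s=(h, w)))
```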
The authors then study Compositional Pattern Producing Networks (CPPNs), where the image is parametrized by a small neural network mapping pixel coordinates to colors, and present several beautiful pictures generated with this method. Finally, the authors move on to 3D visual art via rendering, where the textures are given a Fourier parametrization. The authors also describe the UV mapping, where every vertex of the mesh gets a coordinate in the texture. The gradients are applied in a two-stage process: first to the rendered image to get the desired style transfer, and then back through the renderer to the texture. Using this, the authors are able to generate 3D style transfers which look very cool! (A rough sketch of this gradient routing follows below.)
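A rough sketch of how gradients can flow from a rendered image back into a texture through a fixed UV mapping; the random `uv` grid and mean-based loss are stand-ins for a real rasterizer and style objective, not the article's implementation:

```python
import torch

# Learnable texture image.
texture = torch.randn(1, 3, 512, 512, requires_grad=True)

def render(uv_coords):
    # uv_coords: (1, H, W, 2) texture coordinates in [-1, 1], as produced
    # by rasterizing the mesh. Bilinear sampling of the texture at these
    # coordinates is differentiable, so gradients computed on the rendered
    # image flow back into `texture`.
    return torch.nn.functional.grid_sample(
        torch.sigmoid(texture), uv_coords, align_corners=False)

uv = torch.rand(1, 224, 224, 2) * 2 - 1  # stand-in for rasterized UVs
rendered = render(uv)
loss = rendered.mean()                   # stand-in for the style objective
loss.backward()                          # gradients accumulate in texture.grad
```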
Typos and suggestions etc:
"adjustign" should be "adjusting"